
Sourcery refactored master branch #1

Open

sourcery-ai[bot] wants to merge 1 commit into master from sourcery/master

Conversation

@sourcery-ai[bot] commented Jun 30, 2022

Branch master refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line

To manually merge these changes, make sure you're on the master branch, then run:

git fetch origin sourcery/master   # fetch the refactored branch
git merge --ff-only FETCH_HEAD     # fast-forward master onto it
git reset HEAD^                    # un-commit the refactoring, keeping its changes in the working tree
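After the reset, the refactoring sits in your working tree as uncommitted changes; plain git can confirm that before you commit (nothing here is Sourcery-specific):

git status   # refactored files appear as unstaged modifications
git diff     # review the refactoring hunk by hunk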

Help us improve this pull request!

@sourcery-ai[bot] requested a review from xinetzone June 30, 2022 02:38
-        if inp.sum() > 0.:
-            output = self.weight + inp
-        else:
-            output = self.weight - inp
-        return output
+        return self.weight + inp if inp.sum() > 0. else self.weight - inp

Function SimpleIf.forward refactored: the if/else assignment and trailing return collapse into a single conditional-expression return.
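As a sanity check that the rewrite preserves behavior, here is a minimal runnable sketch; the constructor is an assumption (only forward follows the diff):

import torch
import torch.nn as nn

class SimpleIf(nn.Module):
    # Assumed minimal module: a single learnable weight, as the diff implies.
    def __init__(self, shape):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(shape))

    def forward(self, inp):
        # Refactored single-expression form, equivalent to the old if/else.
        return self.weight + inp if inp.sum() > 0. else self.weight - inp

m = SimpleIf((2, 2))
x = torch.ones(2, 2)
assert torch.equal(m(x), m.weight + x)      # sum > 0 takes the "+" branch
assert torch.equal(m(-x), m.weight - (-x))  # sum <= 0 takes the "-" branch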

Comment on lines -28 to +26
-        if inp.mean() > 0.:
-            output = self.weight + inp
-        else:
-            output = self.weight - inp
+        return self.weight + inp if inp.mean() > 0. else self.weight - inp
     else:
-        if inp.mean() > 0.:
-            output = self.weight * inp
-        else:
-            output = self.weight / inp
-
-    return output
+        return self.weight * inp if inp.mean() > 0. else self.weight / inp

Function NestedIf.forward refactored: both inner if/else blocks collapse into conditional-expression returns, eliminating the output temporary.

Comment on lines -54 to +42
-        for i in range(inp.size(0)):
+        for _ in range(a.size(0)):

Function SimpleLoop.forward refactored: the unused loop variable is replaced with _.

Comment on lines -64 to +52
-        for i in range(inp.size(0)):
+        for _ in range(a.size(0)):

Function LoopWithIf.forward refactored: the unused loop variable is replaced with _.

Comment on lines -98 to +86
-        while i < inp.size(0):
+        while i < a.size(0):

Function SimpleWhileLoop.forward refactored as shown above.

Comment on lines -173 to -189
-    if True:
-        with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_rtx3070.log"):
-            with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
-                desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
-                seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
-                mod = seq(mod)
-                vm_exec = relay.vm.compile(mod, target=target, params=params)
-    else:
-        # with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_nvptx.log"):
-        # # with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_cuda.log"):
-        #     with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
-        with tvm.transform.PassContext(opt_level=3):
-            # desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
-            # seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
-            # mod = seq(mod)
+    with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_rtx3070.log"):
+        with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
+            desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
+            seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
+            mod = seq(mod)
+            vm_exec = relay.vm.compile(mod, target=target, params=params)


Function bench_tvm refactored: the always-true if True: guard and its dead else: branch are removed, leaving only the live compile path.

This removes the following comments (why?):

# desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
# seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
# with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_nvptx.log"):
# # with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_cuda.log"):
# mod = seq(mod)
#     with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
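Read on its own, the surviving branch is a single compile path. A hedged sketch of it as a standalone helper (compile_with_history and its signature are hypothetical; the log path, layout map, and TVM calls are taken from the diff):

import tvm
from tvm import relay, auto_scheduler

def compile_with_history(mod, params, target, log_file="logs/maskrcnn_rtx3070.log"):
    # Replay the auto-scheduler tuning records in log_file while compiling.
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
            # Convert conv2d / roi_align to NHWC, the layout the tuned schedules expect.
            desired_layouts = {"nn.conv2d": ["NHWC", "default"], "vision.roi_align": ["NHWC", "default"]}
            seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
            mod = seq(mod)
            return relay.vm.compile(mod, target=target, params=params)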

-        rois = torch.cat([ids, concat_boxes], dim=1)
-        return rois
+        return torch.cat([ids, concat_boxes], dim=1)

Function MultiScaleRoIAlign.convert_to_roi_format refactored: the rois temporary is inlined into the return.

Comment on lines -144 to +143
-        assert len(image_shapes) != 0
+        assert image_shapes

Function MultiScaleRoIAlign.setup_scales refactored: the length check is simplified to a truthiness assert (equivalent for a list).

Comment on lines -179 to +178
-        x_filtered = []
-        for k, v in x.items():
-            if k in self.featmap_names:
-                x_filtered.append(v)
+        x_filtered = [v for k, v in x.items() if k in self.featmap_names]

Function MultiScaleRoIAlign.forward refactored: the filtering loop becomes a list comprehension.
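Since dicts preserve insertion order, the comprehension filters the feature maps in the same order the loop did; a small illustration with made-up names:

featmap_names = ["0", "2"]
x = {"0": "feat0", "1": "feat1", "2": "feat2"}  # hypothetical ordered feature maps

# Comprehension form from the diff: same filtering, same order as the loop.
x_filtered = [v for k, v in x.items() if k in featmap_names]
assert x_filtered == ["feat0", "feat2"]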

Comment on lines -128 to -131
-    mask_loss = F.binary_cross_entropy_with_logits(
-        mask_logits[torch.arange(labels.shape[0], device=labels.device), labels], mask_targets
-    )
-    return mask_loss
+    return F.binary_cross_entropy_with_logits(
+        mask_logits[
+            torch.arange(labels.shape[0], device=labels.device), labels
+        ],
+        mask_targets,
+    )

Function maskrcnn_loss refactored: the mask_loss temporary is inlined into the return.

Comment on lines -308 to +310
-    keypoint_loss = F.cross_entropy(keypoint_logits[valid], keypoint_targets[valid])
-    return keypoint_loss
+    return F.cross_entropy(keypoint_logits[valid], keypoint_targets[valid])

Function keypointrcnn_loss refactored: the keypoint_loss temporary is inlined into the return.

Comment on lines -342 to +343
-    boxes_exp = torch.stack((boxes_exp0, boxes_exp1, boxes_exp2, boxes_exp3), 1)
-    return boxes_exp
+    return torch.stack((boxes_exp0, boxes_exp1, boxes_exp2, boxes_exp3), 1)

Function _onnx_expand_boxes refactored: the boxes_exp temporary is inlined into the return.

-    im_mask = torch.cat((zeros_x0,
-                         concat_0,
-                         zeros_x1), 1)[:, :im_w]
-    return im_mask
+    return torch.cat((zeros_x0, concat_0, zeros_x1), 1)[:, :im_w]

Function _onnx_paste_mask_in_image refactored: the im_mask temporary is inlined into the return.

Comment on lines -483 to +484
-    if len(res) > 0:
-        ret = torch.stack(res, dim=0)[:, None]
-    else:
-        ret = masks.new_empty((0, 1, im_h, im_w))
-    return ret
+    return (
+        torch.stack(res, dim=0)[:, None]
+        if res
+        else masks.new_empty((0, 1, im_h, im_w))
+    )

Function paste_masks_in_image refactored: the if/else assignment becomes a conditional-expression return, with len(res) > 0 simplified to a truthiness test.
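The empty-list fallback is load-bearing here: torch.stack raises on an empty sequence, and for a list "if res" is equivalent to "if len(res) > 0". A minimal sketch with illustrative shapes:

import torch

masks = torch.zeros(0, 28, 28)  # assume no detections survived
res, im_h, im_w = [], 4, 4

# torch.stack([]) would raise RuntimeError, so the else branch must stay.
ret = (
    torch.stack(res, dim=0)[:, None]
    if res
    else masks.new_empty((0, 1, im_h, im_w))
)
assert ret.shape == (0, 1, 4, 4)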

Comment on lines -609 to +606
-        for img_idx, (pos_inds_img, neg_inds_img) in enumerate(
-            zip(sampled_pos_inds, sampled_neg_inds)
-        ):
+        for pos_inds_img, neg_inds_img in zip(sampled_pos_inds, sampled_neg_inds):

Function RoIHeads.subsample refactored: enumerate is dropped because the index img_idx was unused.
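Dropping enumerate is safe only because img_idx was never read in the loop body; a toy version of the pattern (values are illustrative):

sampled_pos_inds = [[0, 1], [2]]
sampled_neg_inds = [[3], [4, 5]]

# Same iteration as the enumerate version, minus the unused index.
for pos_inds_img, neg_inds_img in zip(sampled_pos_inds, sampled_neg_inds):
    print(pos_inds_img, neg_inds_img)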

-    for i in range(3):
+    for _ in range(3):
         pt_model(**inputs)

     t1 = time.time()
-    for i in range(n_repeat):
+    for _ in range(n_repeat):

Function perf_bench_torch refactored: unused loop variables are replaced with _.

Comment on lines -298 to +294
-    quantized_output_dir = configs.output_dir + "quantized/"
+    quantized_output_dir = f"{configs.output_dir}quantized/"

Line 298 refactored: string concatenation becomes an f-string.
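The f-string keeps the exact concatenation semantics, including the assumption that configs.output_dir ends with a path separator; a hedged illustration (os.path.join is shown only as a safer alternative, not what the PR does):

import os

output_dir = "results"  # hypothetical value with no trailing slash
assert f"{output_dir}quantized/" == "resultsquantized/"              # concatenation pitfall
assert os.path.join(output_dir, "quantized") == "results/quantized"  # separator-safe (POSIX)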

-        (args.output_dir, args.output_dir + "-MM")
+        (args.output_dir, f"{args.output_dir}-MM")
         if args.task_name == "mnli"
         else (args.output_dir,)
     )


Function evaluate_tvm refactored: string concatenation becomes an f-string.

Comment on lines -437 to +440
-        # PyTorch eval
-        if True:
-            # # Evaluate the original FP32 BERT model
-            # print("Evaluating PyTorch full precision accuracy and performance:")
-            # time_model_evaluation(model, configs, tokenizer)
-
-            # Evaluate the INT8 BERT model after the dynamic quantization
-            print("Evaluating PyTorch quantization accuracy and performance:")
-            time_model_evaluation(quantized_model, configs, tokenizer)
+        # # Evaluate the original FP32 BERT model
+        # print("Evaluating PyTorch full precision accuracy and performance:")
+        # time_model_evaluation(model, configs, tokenizer)
+
+        # Evaluate the INT8 BERT model after the dynamic quantization
+        print("Evaluating PyTorch quantization accuracy and performance:")
+        time_model_evaluation(quantized_model, configs, tokenizer)

Lines 437-445 refactored: the redundant if True: guard is removed and its body dedented.

This removes the following comments (why?):

# PyTorch eval

Comment on lines 67 to -82

     for idx, task in enumerate(tasks):
         print("========== Task %d (workload key: %s) ==========" % (idx, task.workload_key))
         print(task.compute_dag)

     measure_ctx = auto_scheduler.LocalRPCMeasureContext(repeat=1, min_repeat_ms=300, timeout=100)

     tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
-    # tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)
     tune_option = auto_scheduler.TuningOptions(
-        num_measure_trials=50000,  # change this to 20000 to achieve the best performance
+        num_measure_trials=50000,
         runner=measure_ctx.runner,
         measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
     )

     tuner.tune(tune_option)

Function auto_schedule refactored: stale commented-out alternatives are removed from the tuning setup.

This removes the following comments (why?):

# tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)
# change this to 20000 to achieve the best performance
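For context, the tasks and task_weights consumed above are typically produced by task extraction, which this hunk does not show; a hedged sketch (mod, params, and target are assumed to be defined elsewhere in the file):

from tvm import auto_scheduler

# Split the Relay module into tunable tasks; each weight reflects how often
# the task's workload appears in the network.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)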

@sourcery-ai[bot] commented Jun 30, 2022

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.43%.

| Quality metrics | Before | After | Change |
| --- | --- | --- | --- |
| Complexity | 5.14 ⭐ | 4.98 ⭐ | -0.16 👍 |
| Method Length | 80.63 🙂 | 79.44 🙂 | -1.19 👍 |
| Working memory | 10.02 😞 | 9.97 😞 | -0.05 👍 |
| Quality | 62.61% 🙂 | 63.04% 🙂 | 0.43% 👍 |

| Other metrics | Before | After | Change |
| --- | --- | --- | --- |
| Lines | 3132 | 3061 | -71 |
| Changed files | Quality Before | Quality After | Quality Change |
| --- | --- | --- | --- |
| dynamic_test.py | 78.78% ⭐ | 79.55% ⭐ | 0.77% 👍 |
| rnn_test.py | 59.68% 🙂 | 59.53% 🙂 | -0.15% 👎 |
| test_quantized_1.6.py | 73.38% 🙂 | 74.03% 🙂 | 0.65% 👍 |
| deeplabv3/auto_schedule.py | 77.06% ⭐ | 77.54% ⭐ | 0.48% 👍 |
| detr/detr_test.py | 73.45% 🙂 | 73.83% 🙂 | 0.38% 👍 |
| detr/transformer_test.py | 52.41% 🙂 | 55.71% 🙂 | 3.30% 👍 |
| fx-int8/test.py | 81.57% ⭐ | 81.57% ⭐ | 0.00% |
| maskrcnn/maskrcnn_test.py | 74.12% 🙂 | 75.76% ⭐ | 1.64% 👍 |
| maskrcnn/torchvision_mod/poolers.py | 61.83% 🙂 | 63.89% 🙂 | 2.06% 👍 |
| maskrcnn/torchvision_mod/roi_head.py | 49.57% 😞 | 49.51% 😞 | -0.06% 👎 |
| nms-perf-issue/maskrcnn_test.py | 73.85% 🙂 | 73.81% 🙂 | -0.04% 👎 |
| pycls/test.py | 44.65% 😞 | 44.59% 😞 | -0.06% 👎 |
| retinanet/retinanet_test.py | 78.14% ⭐ | 87.80% ⭐ | 9.66% 👍 |
| transformers/download_glue_data.py | 62.56% 🙂 | 62.88% 🙂 | 0.32% 👍 |
| transformers/test_dynamic_qbert.py | 46.83% 😞 | 46.81% 😞 | -0.02% 👎 |
| yolo5/auto_schedule.py | 81.40% ⭐ | 83.48% ⭐ | 2.08% 👍 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
| --- | --- | --- | --- | --- | --- | --- |
| maskrcnn/torchvision_mod/roi_head.py | RoIHeads.forward | 43 ⛔ | 549 ⛔ | 24 ⛔ | 7.24% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| transformers/test_dynamic_qbert.py | evaluate | 26 😞 | 382 ⛔ | 18 ⛔ | 18.70% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| rnn_test.py | custom_lstm_test | 13 🙂 | 372 ⛔ | 18 ⛔ | 28.24% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| transformers/test_dynamic_qbert.py | load_and_cache_examples | 17 🙂 | 279 ⛔ | 17 ⛔ | 28.32% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| transformers/test_dynamic_qbert.py | evaluate_tvm | 17 🙂 | 311 ⛔ | 13 😞 | 32.00% 😞 | Try splitting into smaller methods. Extract out complex expressions |

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!
