
[ASPLOS] Add torch to lit cfg for programming_examples #1371

Closed
wants to merge 10 commits

Conversation

jgmelber (Collaborator)

No description provided.
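No description was provided, but the title states the intent: make torch available to the lit test configuration so the programming_examples tests that import it can run. As a rough illustration (a sketch, not the PR's actual diff; `config` is the global that lit injects into `lit.cfg.py`, and the `torch` feature name is an assumption), gating tests on torch availability usually looks like:

```python
# Hypothetical lit.cfg.py fragment -- a sketch, not the actual change.
# `config` is injected by lit; the "torch" feature name is assumed here.
try:
    import torch  # noqa: F401

    config.available_features.add("torch")
except ImportError:
    pass
```

Tests that need torch can then declare `REQUIRES: torch` and are skipped automatically where it is not installed.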

github-actions bot (Contributor) commented Apr 22, 2024

Coverage Report

Created: 2024-04-22 18:19


| Filename | Function Coverage | Line Coverage | Region Coverage | Branch Coverage |
| --- | --- | --- | --- | --- |
| Conversion/AIEVecToLLVM/AIEVecToLLVM.cpp | 82.05% | 87.48% | 74.30% | 60.16% |
| Dialect/AIE/Transforms/AIEObjectFifoRegisterProcess.cpp | 100.00% | 93.63% | 96.77% | 90.74% |
| Dialect/AIE/Transforms/AIEObjectFifoStatefulTransform.cpp | 100.00% | 95.66% | 94.44% | 89.50% |
| Dialect/AIEVec/TransformOps/AIEVecTransformOps.cpp | 100.00% | 98.44% | 95.77% | 76.92% |
| Dialect/AIEVec/Transforms/VectorToAIEVecConversions.cpp | 70.87% | 63.72% | 49.70% | 41.68% |
| Dialect/AIEX/IR/AIEXDialect.cpp | 100.00% | 85.09% | 84.44% | 75.00% |
| Dialect/AIEX/Transforms/AIEDmaToIpu.cpp | 100.00% | 93.21% | 88.75% | 76.79% |
| Dialect/AIEX/Transforms/AIEXToStandard.cpp | 75.00% | 75.00% | 66.67% | 50.00% |
| **Totals** | 81.71% | 78.53% | 65.98% | 58.23% |
Generated by llvm-cov -- llvm version 14.0.0

@@ -14,177 +14,183 @@
import os
import numpy as np
from aie.utils.xrt import setup_aie, extract_trace, write_out_trace, execute

import aie.utils.test as test_utils
torch.use_deterministic_algorithms(True)

[black] reported by reviewdog 🐶

Suggested change

```diff
-torch.use_deterministic_algorithms(True)
+torch.use_deterministic_algorithms(True)
```
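Every bot comment in this thread is the same kind of fix: black reflowing statements that exceed its default 88-character limit once file indentation is counted. A minimal local reproduction (assuming black is installed; `format_str` and `Mode` are its public API):

```python
import black

# One of the flagged lines, at the 4-space indent it has inside main();
# the indent pushes it past black's default 88-character line length.
src = (
    "def main(opts):\n"
    "    block_0_int_weight_skip = torch.randint(10, 20, (256, 64, 1, 1))"
    ".type(torch.FloatTensor)\n"
)
print(black.format_str(src, mode=black.Mode()))
```

The printed result splits the call across lines exactly as the suggestions below do.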

)

print("\nPASS!\n")
def main(opts):

[black] reported by reviewdog 🐶

Suggested change

```diff
-def main(opts):
+def main(opts):
```

trace_size=trace_size,
)



[black] reported by reviewdog 🐶

Suggested change

torch.round(self.relu2(conv2_out) / inp_scale3), min, max
)
conv3_out = self.conv3(relu2_out) * inp_scale3 * weight_scale3
same_scale_init = torch.clamp(torch.round(conv3_out / inp_scale1), -128, 127)

[black] reported by reviewdog 🐶

Suggested change

```diff
-same_scale_init = torch.clamp(torch.round(conv3_out / inp_scale1), -128, 127)
+same_scale_init = torch.clamp(
+    torch.round(conv3_out / inp_scale1), -128, 127
+)
```
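The line reflowed here is an instance of the fake-quantization pattern that recurs throughout this test: convolve in float, rescale, then round and clamp back into the int8 range. A self-contained sketch of the pattern (the helper name and values are illustrative, not from the PR):

```python
import torch

def quantize_int8(x: torch.Tensor, scale: float) -> torch.Tensor:
    # Rescale, round to the nearest integer, and saturate to the int8 range,
    # mirroring torch.clamp(torch.round(conv3_out / inp_scale1), -128, 127).
    return torch.clamp(torch.round(x / scale), -128, 127)

out = quantize_int8(torch.randn(1, 64, 32, 32) * 4.0, scale=0.05)
assert out.min() >= -128 and out.max() <= 127
```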

)
return final_out



[black] reported by reviewdog 🐶

Suggested change

block_2_final_out = block_2_relu_3 * (
torch.clamp(
torch.round(self.block_2_relu3(block_2_skip_add) / block_2_relu_3),
def main(opts):

[black] reported by reviewdog 🐶

Suggested change

```diff
-def main(opts):
+def main(opts):
```

Comment on lines 54 to 55
block_0_int_weight_3 = torch.randint(10, 20, (256, 64, 1, 1)).type(torch.FloatTensor)
block_0_int_weight_skip = torch.randint(10, 20, (256, 64, 1, 1)).type(torch.FloatTensor)

[black] reported by reviewdog 🐶

Suggested change

```diff
-block_0_int_weight_3 = torch.randint(10, 20, (256, 64, 1, 1)).type(torch.FloatTensor)
-block_0_int_weight_skip = torch.randint(10, 20, (256, 64, 1, 1)).type(torch.FloatTensor)
+block_0_int_weight_3 = torch.randint(10, 20, (256, 64, 1, 1)).type(
+    torch.FloatTensor
+)
+block_0_int_weight_skip = torch.randint(10, 20, (256, 64, 1, 1)).type(
+    torch.FloatTensor
+)
```

block_0_int_weight_3 = torch.randint(10, 20, (256, 64, 1, 1)).type(torch.FloatTensor)
block_0_int_weight_skip = torch.randint(10, 20, (256, 64, 1, 1)).type(torch.FloatTensor)

block_1_int_weight_1 = torch.randint(20, 30, (64, 256, 1, 1)).type(torch.FloatTensor)

[black] reported by reviewdog 🐶

Suggested change

```diff
-block_1_int_weight_1 = torch.randint(20, 30, (64, 256, 1, 1)).type(torch.FloatTensor)
+block_1_int_weight_1 = torch.randint(20, 30, (64, 256, 1, 1)).type(
+    torch.FloatTensor
+)
```


block_1_int_weight_1 = torch.randint(20, 30, (64, 256, 1, 1)).type(torch.FloatTensor)
block_1_int_weight_2 = torch.randint(20, 30, (64, 64, 3, 3)).type(torch.FloatTensor)
block_1_int_weight_3 = torch.randint(20, 30, (256, 64, 1, 1)).type(torch.FloatTensor)

[black] reported by reviewdog 🐶

Suggested change

```diff
-block_1_int_weight_3 = torch.randint(20, 30, (256, 64, 1, 1)).type(torch.FloatTensor)
+block_1_int_weight_3 = torch.randint(20, 30, (256, 64, 1, 1)).type(
+    torch.FloatTensor
+)
```

block_1_int_weight_2 = torch.randint(20, 30, (64, 64, 3, 3)).type(torch.FloatTensor)
block_1_int_weight_3 = torch.randint(20, 30, (256, 64, 1, 1)).type(torch.FloatTensor)

block_2_int_weight_1 = torch.randint(30, 40, (64, 256, 1, 1)).type(torch.FloatTensor)

[black] reported by reviewdog 🐶

Suggested change

```diff
-block_2_int_weight_1 = torch.randint(30, 40, (64, 256, 1, 1)).type(torch.FloatTensor)
+block_2_int_weight_1 = torch.randint(30, 40, (64, 256, 1, 1)).type(
+    torch.FloatTensor
+)
```

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
torch.FloatTensor
)
block_2_int_weight_2 = torch.randint(30, 40, (64, 64, 3, 3)).type(torch.FloatTensor)
block_2_int_weight_3 = torch.randint(30, 40, (256, 64, 1, 1)).type(torch.FloatTensor)

[black] reported by reviewdog 🐶

Suggested change

```diff
-block_2_int_weight_3 = torch.randint(30, 40, (256, 64, 1, 1)).type(torch.FloatTensor)
+block_2_int_weight_3 = torch.randint(30, 40, (256, 64, 1, 1)).type(
+    torch.FloatTensor
+)
```

trace_size=trace_size,
)



[black] reported by reviewdog 🐶

Suggested change

# Bottleneck 0
self.block_0_conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.block_0_conv2 = nn.Conv2d(
planes, planes, kernel_size=3, padding=1, padding_mode="zeros", bias=False

[black] reported by reviewdog 🐶

Suggested change

```diff
-planes, planes, kernel_size=3, padding=1, padding_mode="zeros", bias=False
+planes,
+planes,
+kernel_size=3,
+padding=1,
+padding_mode="zeros",
+bias=False,
```

self.expansion * planes, planes, kernel_size=1, bias=False
)
self.block_1_conv2 = nn.Conv2d(
planes, planes, kernel_size=3, padding=1, padding_mode="zeros", bias=False

[black] reported by reviewdog 🐶

Suggested change

```diff
-planes, planes, kernel_size=3, padding=1, padding_mode="zeros", bias=False
+planes,
+planes,
+kernel_size=3,
+padding=1,
+padding_mode="zeros",
+bias=False,
```

self.expansion * planes, planes, kernel_size=1, bias=False
)
self.block_2_conv2 = nn.Conv2d(
planes, planes, kernel_size=3, padding=1, padding_mode="zeros", bias=False

[black] reported by reviewdog 🐶

Suggested change

```diff
-planes, planes, kernel_size=3, padding=1, padding_mode="zeros", bias=False
+planes,
+planes,
+kernel_size=3,
+padding=1,
+padding_mode="zeros",
+bias=False,
```

)
return block_2_final_out



[black] reported by reviewdog 🐶

Suggested change

# ------------------------------------------------------
ds = DataShaper()
before_input = int_inp.squeeze().data.numpy().astype(dtype_in)
before_input.tofile(log_folder + "/before_ifm_mem_fmt_1x1.txt", sep=",", format="%d")

[black] reported by reviewdog 🐶

Suggested change

```diff
-before_input.tofile(log_folder + "/before_ifm_mem_fmt_1x1.txt", sep=",", format="%d")
+before_input.tofile(
+    log_folder + "/before_ifm_mem_fmt_1x1.txt", sep=",", format="%d"
+)
```

stop = time.time_ns()

if enable_trace:
aie_output, trace = extract_trace(aie_output, shape_out, dtype_out, trace_size)

[black] reported by reviewdog 🐶

Suggested change

```diff
-aie_output, trace = extract_trace(aie_output, shape_out, dtype_out, trace_size)
+aie_output, trace = extract_trace(
+    aie_output, shape_out, dtype_out, trace_size
+)
```
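`extract_trace` is flagged only for line length, but its role is worth a note: with tracing enabled, the device writes the result tensor and the raw trace stream into one output buffer, and the helper separates them. A hedged sketch of that idea (`split_trace` is hypothetical, not the aie.utils.xrt implementation; it assumes the trace words sit at the tail of the buffer and that `trace_size` is a byte count divisible by four):

```python
import numpy as np

def split_trace(buf: np.ndarray, trace_size: int):
    # Hypothetical sketch: result data first, trace_size bytes of raw
    # 32-bit trace words at the tail of the same buffer.
    flat = buf.view(np.uint8).ravel()
    data = flat[: flat.size - trace_size]
    trace = flat[flat.size - trace_size :].view(np.uint32)
    return data, trace
```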

temp_out = aie_output.reshape(32, 32, 32, 8)
temp_out = ds.reorder_mat(temp_out, "CDYX", "YCXD")
ofm_mem_fmt = temp_out.reshape(256, 32, 32)
ofm_mem_fmt.tofile(log_folder + "/after_ofm_mem_fmt_final.txt", sep=",", format="%d")

[black] reported by reviewdog 🐶

Suggested change

```diff
-ofm_mem_fmt.tofile(log_folder + "/after_ofm_mem_fmt_final.txt", sep=",", format="%d")
+ofm_mem_fmt.tofile(
+    log_folder + "/after_ofm_mem_fmt_final.txt", sep=",", format="%d"
+)
```


print("\nPASS!\n")

if __name__ == "__main__":

[black] reported by reviewdog 🐶

Suggested change

```diff
-if __name__ == "__main__":
+if __name__ == "__main__":
```

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
jgmelber closed this Apr 23, 2024

jgmelber deleted the torch-ci-asplos branch April 23, 2024 02:49