
dialects: (vector) Add for vector.transfer_read and vector.transfer_write operations #3650

Open · wants to merge 18 commits into main
Conversation

watermelonwolverine (Contributor)
Added stubs for vector.transfer_read and vector.transfer_write.
Verification is still missing; implementing it also looks like a substantial amount of work.

Also fixed a small bug in TensorOrMemrefOf.verify (?)

@watermelonwolverine watermelonwolverine marked this pull request as draft December 17, 2024 23:23

codecov bot commented Dec 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.29%. Comparing base (346e1d4) to head (b331962).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3650      +/-   ##
==========================================
- Coverage   91.30%   91.29%   -0.01%     
==========================================
  Files         466      466              
  Lines       58357    58541     +184     
  Branches     5624     5649      +25     
==========================================
+ Hits        53283    53447     +164     
- Misses       3627     3640      +13     
- Partials     1447     1454       +7     


@compor (Collaborator) left a comment:

LGTM modulo comments

```python
if isinstance(attr, VectorType) or isinstance(attr, TensorType):
    attr = cast(VectorType[Attribute] | TensorType[Attribute], attr)
if isinstance(attr, MemRefType) or isinstance(attr, TensorType):
    attr = cast(MemRefType[Attribute] | TensorType[Attribute], attr)
```
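For context, the snippet narrows an attribute to a union type after an isinstance check. A minimal, self-contained sketch of what a Tensor-or-Memref acceptance check is expected to do (dummy classes, not xdsl's real attribute types):

```python
# Hypothetical stand-ins for the real xdsl attribute classes, for illustration only.
class TensorType: ...
class MemRefType: ...
class VectorType: ...

def accepts_tensor_or_memref(attr: object) -> bool:
    # A TensorOrMemrefOf-style constraint should branch on tensor/memref,
    # so a VectorType check in its verify() would be a copy-paste slip.
    return isinstance(attr, (TensorType, MemRefType))

print(accepts_tensor_or_memref(TensorType()))   # True
print(accepts_tensor_or_memref(VectorType()))   # False
```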
Collaborator:
Would you mind moving this to a separate PR with a test case?
We seem to be lacking any verification testing on this type.

Collaborator:
Seeing the changes, I'd still recommend doing this.

Contributor (author):
I will do that in the next few days.

xdsl/dialects/vector.py — outdated review threads, resolved
@compor compor added the dialects Changes on the dialects label Dec 18, 2024
@compor compor changed the title dialects: (vector) Added stubs for transfer_read and transfer_write dialects: (vector) Add for vector.transfer_read and vector.transfer_write operations Dec 18, 2024

watermelonwolverine commented Dec 18, 2024

  • I started implementing the verification and have finished most of it.
    • Only verification for the case where the element type of the source is itself a vector type is missing.
  • Problem: to fully implement verification as it is done in MLIR, VectorType needs to be changed first.
    • It needs a bitmap (ArrayAttr[BoolAttr]) marking the scalable dimensions, instead of a single IntegerAttr counting them.
    • I could skip this verification step and do everything else.
    • I made some of those changes so you can see for yourself what I mean.
  • I also had to add several functions to builtin classes such as AffineMap and AffineExpr.
    • Should I open a separate PR for those, too?
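To illustrate the proposed change, here is a sketch with a plain dataclass (not xdsl's actual VectorType): a per-dimension bitmap pairs one scalability flag with each dimension, whereas a single count cannot say which dimensions are scalable.

```python
from dataclasses import dataclass

@dataclass
class VectorTypeSketch:
    # Hypothetical stand-in for VectorType: one scalability flag per dimension
    # (the ArrayAttr[BoolAttr] bitmap proposed above) instead of a single count.
    shape: list[int]
    scalable_dims: list[bool]

    def __post_init__(self) -> None:
        assert len(self.scalable_dims) == len(self.shape)

# e.g. a 2-D vector with dimension 0 scalable and dimension 1 fixed
vt = VectorTypeSketch(shape=[4, 8], scalable_dims=[True, False])
assert sum(vt.scalable_dims) == 1  # the old count is recoverable from the bitmap
```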

watermelonwolverine (Contributor, author): @compor
compor commented Dec 20, 2024

You can always verify what is currently here and document what deviates (as you have done), this is fine IMHO.

Thanks for the work so far!

Transferred more filecheck tests for transfer ops from MLIR
Added the (now mandatory) in_bounds property
@watermelonwolverine watermelonwolverine marked this pull request as ready for review December 28, 2024 17:48
Comment on lines 484 to 489
```python
# WJOG9GVF: TODO fix: uncomment this when 7S4F0FZA has been fixed
# if mask_type:
#     if mask_type != inferred_mask_type:
#         raise VerifyException(
#             f'"{op.name}" inferred mask type ({inferred_mask_type}) and mask operand type ({mask_type}) don\'t match'
#         )
```
Member:
?

Contributor (author):

Same as below; I will move it to a new pull request.

Comment on lines 514 to 535
```python
# 7S4F0FZA
# TODO uncomment and test this once VectorType has been fixed, see issue #3654
# When you do this also fix all WJOG9GVF

# inverse_permutation_map = affine_map.compress_dims(
#     affine_map.unused_dims_bit_vector()
# ).inverse_permutation()

# assert inverse_permutation_map

# mask_shape = inverse_permutation_map.compose_with_values(vector_type.get_shape())

# scalable_dims = inverse_permutation_map.eval(
#     [1 if dim_scalable else 0 for dim_scalable in vector_type.get_scalable_dims()],
#     [],
# )

# return VectorType(
#     i1,
#     mask_shape,
#     [dim_scalable == 1 for dim_scalable in scalable_dims],
# )
```
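The commented-out computation above derives the mask shape by inverting a permutation and applying it to the vector shape. A plain-Python sketch of that idea (hypothetical helper functions; the real code would go through AffineMap):

```python
def inverse_permutation(perm: list[int]) -> list[int]:
    # inv[perm[i]] = i, i.e. the permutation that undoes `perm`
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

def compose_with_values(perm: list[int], values: list[int]) -> list[int]:
    # apply the permutation to a list of dimension sizes
    return [values[p] for p in perm]

# e.g. a transfer with permutation map (d0, d1) -> (d1, d0) on vector shape [4, 8]
inv = inverse_permutation([1, 0])
mask_shape = compose_with_values(inv, [4, 8])
assert mask_shape == [8, 4]
```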
Member:
?

Contributor (author):

The problem is issue #3654.
As soon as that is fixed, the code above can be uncommented. I can remove it, but then how should I document this?

Contributor (author):

I can move it to a new pull request and link it to the issue

Member:

Ah yes, do you think you have the bandwidth to fix that as well? I have a feeling this PR could stay a draft until the other points are fixed; that would help calibrate how pedantic reviewers should be when looking at this one :)

Comment on lines 510 to 512
"""
TODO test
"""
Member:

?

Contributor (author):

Sorry, I will clean that up.


```python
irdl_options = [AttrSizedOperandSegments(as_property=True), ParsePropInAttrDict()]

# assembly_format = "$source `[` $indices `]` `,` $padding ( `,` $mask^ )? attr-dict `:` type($source) `,` type($result)"
```
Member:

commented code

Contributor (author):

I will clean that up

Labels: dialects — Changes on the dialects
3 participants