
[LAYOUTS] [NFC] Make order accept a RankedTensorType #6007

Merged · 3 commits · Feb 25, 2025

Conversation

lezcano (Contributor) commented on Feb 24, 2025

This is in preparation for moving the order computation to be implemented
generically.

We expose getDefault.*Order functions that implement the hand-written
orders. We expect these functions to be used only during LinearLayout
creation. Eventually we'll inline them in that file and remove them
completely.

// Order of the elements in shared memory, as defined at layout creation.
// If this layout is associated with a MemDesc with a different shape,
// it may return an order different from the actual order of the elements.
SmallVector<unsigned> getDefaultOrder(SharedEncodingTrait layout);
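For intuition, a "default" order is typically row-major: the most minor (fastest-varying) dimension comes first in the order vector. A minimal standalone sketch, in plain C++ rather than the actual MLIR helper (`defaultRowMajorOrder` is a hypothetical name, not Triton's API):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: a row-major default order lists dimensions from
// most minor (fastest-varying) to most major, so for rank 3 it yields
// {2, 1, 0}. This mirrors the convention where order[0] is the most
// minor dimension; it is NOT the actual Triton implementation.
std::vector<unsigned> defaultRowMajorOrder(unsigned rank) {
  std::vector<unsigned> order(rank);
  for (unsigned i = 0; i < rank; ++i)
    order[i] = rank - 1 - i;
  return order;
}
```

Under this convention, a layout attribute that stores no explicit order can still report one derived purely from the rank of the associated tensor or MemDesc.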
Reviewer (Contributor):
Making this function accept an argument might be unclear. Can't we just call SmallVector<unsigned> getOrder(MemDescType type)?

@@ -134,8 +134,8 @@ sharedToLinearLayoutNoLeadingOffset(ArrayRef<int64_t> shape,
   // Construct bases for the 2 most minor dimensions of the layout. These are
   // the dims that get swizzled.
   assert(shape.size() >= 2);
-  int colDim = shared.getOrder()[0];
-  int rowDim = shared.getOrder()[1];
+  int colDim = getDefaultOrder(shared)[0];
Reviewer (Contributor):
For example, it's not clear to me why we do not call getOrder here

lezcano (Author) replied:

As described in the OP, the idea is that getDefaultOrder should (in the future) just be called in LinearLayoutConversions.cpp. It's basically just used in the definition of the LL.

For shared layouts, this may or may not be necessary, as we don't broadcast them over the size, so perhaps we could simply implement getOrder manually and call it a day.

I'm happy to do it that way, so that we just touch distributed for now.

Comment on lines 216 to 232
SmallVector<unsigned> getOrder(SharedEncodingTrait layout,
                               ArrayRef<int64_t> shape) {
  return getDefaultOrder(layout);
  if (auto swizzledLayout =
          mlir::dyn_cast<SwizzledSharedEncodingAttr>(layout)) {
    return llvm::to_vector(swizzledLayout.getOrder());
  }
  if (auto sharedLayout = mlir::dyn_cast<NVMMASharedEncodingAttr>(layout)) {
    return sharedLayout.getOrder();
  }
  llvm::report_fatal_error("Unimplemented usage of getOrder for MemDescType");
  return {};
lezcano (Author):

I went this awkward way because I did not want to add a getOrder() API to SharedEncodingTrait, as I don't think it'll be useful in the future.
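The pattern here, dispatching a free function on the concrete layout kind instead of adding a virtual-style method to the trait, can be sketched in plain C++ using `dynamic_cast` as a stand-in for `mlir::dyn_cast`. All class names below are illustrative stand-ins, not the actual MLIR attributes:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative base standing in for SharedEncodingTrait; the real MLIR
// trait carries no getOrder() method, which is exactly the point.
struct SharedLayout { virtual ~SharedLayout() = default; };

// Stand-in for SwizzledSharedEncodingAttr: stores an explicit order.
struct SwizzledShared : SharedLayout {
  std::vector<unsigned> order;
  explicit SwizzledShared(std::vector<unsigned> o) : order(std::move(o)) {}
};

// Stand-in for NVMMASharedEncodingAttr: order fixed here for the sketch.
struct NVMMAShared : SharedLayout {
  std::vector<unsigned> order{1, 0};
};

// Free function dispatching on the concrete kind, mirroring the
// dyn_cast chain in the PR rather than a method on the base.
std::vector<unsigned> getDefaultOrder(const SharedLayout &layout) {
  if (auto *s = dynamic_cast<const SwizzledShared *>(&layout))
    return s->order;
  if (auto *n = dynamic_cast<const NVMMAShared *>(&layout))
    return n->order;
  assert(false && "unimplemented layout kind");
  return {};
}
```

The trade-off is the one the author names: the dispatch must enumerate every concrete layout, but the trait's public surface stays free of an API that is expected to disappear once the order is derived from the LinearLayout.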

lezcano enabled auto-merge (squash) on February 25, 2025 09:47
lezcano merged commit dce695e into main on Feb 25, 2025 — 7 checks passed
lezcano deleted the order branch on February 25, 2025 10:02