Update fft3d.cpp: add timer and FFT3D GFLOPS #729

Open · wants to merge 1 commit into main
benchmarks/gbench/mhp/fft3d.cpp (18 additions, 0 deletions)
@@ -195,8 +195,26 @@ void fft(std::size_t nreps, std::size_t x, std::size_t y, std::size_t z) {
  if (nreps == 0) {
    fft3d.check();
  } else {
    double elapsed = 0;
    for (int iter = 0; iter < nreps; ++iter) {
      auto begin = std::chrono::steady_clock::now();
      fft3d.compute();
      auto end = std::chrono::steady_clock::now();
      if (iter) // exclude the first, warm-up iteration from the timing
        elapsed += std::chrono::duration<double>(end - begin).count();
    }

    if (comm_rank == 0) {
      std::size_t volume = x * y * z;
      std::size_t fft_flops =
          2 * static_cast<std::size_t>(5. * volume *
                                       std::log2(static_cast<double>(volume)));
Member:

Would scaling flops by the number of GPUs be the proper way to do a weak-scaling experiment?

Contributor (Author):

I think we should use the total flops for both.

I've modified mhp/fft3d.cpp to run multiple times with --reps and compute FFT3D flops, and was planning a PR. I think we can discard this one.

std::size_t volume = x * y * z;
std::size_t fft_flops = 2 * static_cast<std::size_t>(
    5. * volume * std::log2(static_cast<double>(volume)));

Stats stats(state, 2 * sizeof(real_t) * volume, 4 * sizeof(real_t) * volume,
            fft_flops);

distributed_fft<real_t> fft3d(x, y, z);
fft3d.compute(); // warm-up run before the timed repetitions

for (auto _ : state) {
  for (std::size_t i = 0; i < default_repetitions; i++) {
    stats.rep();
    fft3d.compute();
  }
}

Contributor (Author):

Sorry, my answer was off. The size of interest is in the bandwidth-limited regime, and a fixed volume is fine. In any case, weak scaling does not make much sense because FFT performance varies widely.

Member:

I see, so you don't expect the small data size to be what limits scalability.
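For concreteness, here is a minimal sketch of the two flop-accounting conventions discussed in this thread (the per-rank volume and GPU counts below are assumed example values, not taken from the benchmark): under weak scaling each of P GPUs holds a fixed volume v, so the machine-wide transform has P·v points; the total-flops convention charges 5·(P·v)·log2(P·v) to the whole run, while scaling by the number of GPUs would divide that by P.

#include <cmath>
#include <cstdio>

// Sketch of the two flop-accounting conventions discussed in this thread.
// The per-rank volume and GPU counts are made-up example values.
int main() {
  const double v = 512. * 512. * 512.; // fixed volume per GPU under weak scaling
  for (int p : {1, 2, 4, 8}) {
    double total_flops = 5. * (p * v) * std::log2(p * v); // whole-machine count
    double per_gpu_flops = total_flops / p;               // scaled by GPU count
    std::printf("P=%d total=%.3e per-GPU=%.3e\n", p, total_flops, per_gpu_flops);
  }
  return 0;
}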

      double t_avg = elapsed / (nreps - 1);
      fmt::print("fft3d-mhp {0} {3} AvgTime {1:.3f} GFLOPS {2:.3f}\n", x, t_avg,
                 fft_flops / t_avg * 1e-9, comm_size);
    }
    for (int iter = 0; iter < nreps; ++iter) {
      fft3d.compute();
    }
  }
}
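For reference, the printed GFLOPS figure follows the standard 5·N·log2(N) flop model for a complex FFT of N = x·y·z points, with the extra factor of 2 taken from the patch itself; the diff does not say what that factor accounts for. Below is a self-contained sketch of the same arithmetic, using a hypothetical grid size and timing:

#include <cmath>
#include <cstddef>
#include <cstdio>

// Standalone sketch of the GFLOPS arithmetic in the patch above.
// Grid size and average time per rep are made-up example values.
int main() {
  std::size_t x = 768, y = 768, z = 768; // hypothetical grid
  double t_avg = 0.5;                    // hypothetical average seconds per rep
  std::size_t volume = x * y * z;
  std::size_t fft_flops = 2 * static_cast<std::size_t>(
      5. * volume * std::log2(static_cast<double>(volume)));
  std::printf("volume=%zu GFLOPS=%.3f\n", volume, fft_flops / t_avg * 1e-9);
  return 0;
}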