MPI Refactor #831

Open · wants to merge 26 commits into master

Conversation

@wilfonba (Contributor) commented May 8, 2025

Description

This PR refactors much of the MPI code to eliminate duplication and shorten the MPI-related portions of the codebase. Significant testing is needed to verify the changes' correctness, but I'm opening this as a draft now so that people know what's being changed and can start reviews and make suggestions early.
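To give a sense of the kind of consolidation this involves (an illustrative sketch only, not code from this PR; the module, routine, and argument names are hypothetical), a single generic halo-exchange helper can replace per-module copies of the send/receive logic:

```fortran
! Illustrative sketch only -- not code from this PR. Names are hypothetical.
! One generic halo-exchange routine that physics modules can call instead of
! carrying their own copies of the MPI_Sendrecv boilerplate.
module m_halo_sketch
    use mpi
    implicit none
contains
    subroutine s_exchange_halo_1d(u, n, buff_size, left, right, comm)
        integer, intent(in) :: n, buff_size, left, right, comm
        real(8), intent(inout) :: u(1 - buff_size:n + buff_size)
        integer :: ierr

        ! send the rightmost interior cells to the right neighbor,
        ! receive the left ghost cells from the left neighbor
        call MPI_Sendrecv(u(n - buff_size + 1), buff_size, MPI_DOUBLE_PRECISION, right, 0, &
                          u(1 - buff_size), buff_size, MPI_DOUBLE_PRECISION, left, 0, &
                          comm, MPI_STATUS_IGNORE, ierr)

        ! send the leftmost interior cells to the left neighbor,
        ! receive the right ghost cells from the right neighbor
        call MPI_Sendrecv(u(1), buff_size, MPI_DOUBLE_PRECISION, left, 1, &
                          u(n + 1), buff_size, MPI_DOUBLE_PRECISION, right, 1, &
                          comm, MPI_STATUS_IGNORE, ierr)
    end subroutine s_exchange_halo_1d
end module m_halo_sketch
```

Each physics module would then call one shared routine with its own field and buffer sizes rather than duplicating the exchange logic.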

Type of change


  • Something else

Scope

  • This PR comprises a set of related changes with a common goal

How Has This Been Tested?

Black lines in all videos mark processor boundaries and ghost cell regions. (A minimal numerical-comparison sketch follows this list.)

  • 2D advection one-rank vs. multi-rank comparison (for good measure) -- This test uses a slightly modified version of the examples/2D_advection case file. It is run on 1 and 4 ranks with A100 GPUs. The video shows the advection of the volume fraction through MPI boundaries.
test.mp4
  • 3D advection one-rank vs. multi-rank comparison (for good measure) -- This test uses a 3D analog of the examples/2D_advection case file. It is run on 1 and 8 ranks with A100 GPUs. The video shows the advection of a volume fraction contour through MPI boundaries, i.e., the advecting sphere for the 1- and 8-rank cases. The half of the sphere from the one-rank simulation is shown in red, and the half from the eight-rank simulation is in blue.
test.mp4
  • 3D surface tension one-rank vs. multi-rank comparison -- This test uses a modified version of the examples/3D_recovering_sphere case (symmetry is removed so that the halo exchange of the color function is meaningful, and the square is moved off center). It is run with 1 and 8 ranks on A100 GPUs. This video shows slices of the color function in all three dimensions with 1 and 4 ranks.
test.mp4
  • 2D surface tension one-rank vs. multi-rank comparison -- This test uses a 2D analog of the examples/3D_recovering_sphere case with the square off center. It is run on 1 and 4 ranks with A100 GPUs. The video shows the volume fraction and color function with 1 and 4 ranks.
test.mp4
  • 3D QBMM one-rank vs. multi-rank comparison -- This case is adapted from /examples/1D_qbmm. A high-pressure region is placed off-center in the middle of the bubble cloud to break symmetry. The video shows nV003 along three slices across the domain for the one- and eight-rank cases on A100 GPUs.
test.mp4
  • 2D QBMM one-rank vs. multi-rank comparison -- This case is adapted from /examples/1D_qbmm. Two high-pressure regions are added to create blast waves and break symmetry. The video shows pressure and nV001 for the one- and four-rank cases on A100 GPUs.
test.mp4
  • EL bubbles simulation and post-process verification -- This test uses the /examples/3D_lagrange_bubblescreen case. It is run on 1 and 8 ranks on A100 GPUs. The video shows the void fraction in the bubble cloud through three slices. The left column is one rank, and the right is eight ranks.
test.mp4
  • Verify that the existing 2 MPI-rank golden files are correct
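As a hypothetical complement to the visual comparisons above (not part of this PR's test suite; the file names are made up, and it assumes both runs were exported as plain text with one value per line), a one-rank vs. multi-rank result can also be checked numerically:

```fortran
! compare_ranks.f90 -- hypothetical helper, not code from this PR.
! Reads two plain-text dumps (one value per line) and reports the
! maximum absolute difference between the serial and parallel results.
program compare_ranks
    implicit none
    integer :: ios1, ios2, n
    real(8) :: a, b, linf

    open (10, file='serial.dat', status='old', action='read')
    open (11, file='parallel.dat', status='old', action='read')

    linf = 0d0; n = 0
    do
        read (10, *, iostat=ios1) a
        read (11, *, iostat=ios2) b
        if (ios1 /= 0 .or. ios2 /= 0) exit
        linf = max(linf, abs(a - b))
        n = n + 1
    end do

    close (10); close (11)
    print '(a,i0,a,es12.4)', 'compared ', n, ' values, max |diff| = ', linf
end program compare_ranks
```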

Checklist

  • I ran ./mfc.sh format before committing my code
  • New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
  • This PR does not introduce any repeated code (it follows the DRY principle)
  • I cannot think of a way to further condense this code or reduce the additional line count it introduces

If your changes modify any source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • Checked that the code compiles using NVHPC compilers

  • Checked that the code compiles using CRAY compilers

  • Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)

  • Ran the code on MI200+ GPUs and ensured the new features performed as expected (the GPU results match the CPU results)

  • Ran an Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain-text results to this PR
    MPIRefactor.txt
    https://drive.google.com/file/d/1pmM3s8q2UbqNmLsumdCs12u-6p3Tm_8C/view?usp=sharing

  • Ran an Omniperf profile using ./mfc.sh run XXXX --gpu -t simulation --omniperf, and have attached the output file and plain-text results to this PR. The trace results were gathered for a 200^3 instance of examples/3D_performance_test.
    master.csv
    pr.csv

  • Ran my code using various numbers of GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to runs without the new code/feature. Strong scaling uses a 300^3 instance of examples/3D_performance_test; weak scaling uses a 300^3 instance of examples/3D_performance_test per processor. (The scaling metrics referred to here are sketched after this list.)
    scaling
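For reference, the standard definitions behind these scaling tests (not taken from the PR itself), with T(p) the wall time on p ranks:

```latex
% Illustrative definitions only (not from the PR). T(p) = wall time on p ranks.
% Strong scaling: the total problem size is fixed (a single 300^3 case).
\[
  S(p) = \frac{T(1)}{T(p)}, \qquad E_{\text{strong}}(p) = \frac{S(p)}{p}
\]
% Weak scaling: the problem size per rank is fixed (300^3 cells per rank),
% so ideal behavior is T(p) \approx T(1).
\[
  E_{\text{weak}}(p) = \frac{T(1)}{T(p)}
\]
```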


codecov bot commented May 14, 2025

Codecov Report

Attention: Patch coverage is 48.59967% with 312 lines in your changes missing coverage. Please review.

Project coverage is 45.62%. Comparing base (db44da1) to head (5736730).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/common/m_mpi_common.fpp | 41.69% | 132 Missing and 26 partials ⚠️ |
| src/simulation/m_mpi_proxy.fpp | 8.00% | 65 Missing and 4 partials ⚠️ |
| src/common/m_boundary_common.fpp | 69.66% | 55 Missing and 9 partials ⚠️ |
| src/post_process/m_data_input.f90 | 62.50% | 5 Missing and 1 partial ⚠️ |
| src/simulation/m_viscous.fpp | 0.00% | 0 Missing and 6 partials ⚠️ |
| src/simulation/m_weno.fpp | 0.00% | 4 Missing ⚠️ |
| src/post_process/m_start_up.f90 | 75.00% | 1 Missing and 2 partials ⚠️ |
| src/post_process/m_global_parameters.fpp | 83.33% | 0 Missing and 1 partial ⚠️ |
| src/simulation/m_ibm.fpp | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #831      +/-   ##
==========================================
+ Coverage   42.95%   45.62%   +2.67%     
==========================================
  Files          69       68       -1     
  Lines       19504    18656     -848     
  Branches     2366     2250     -116     
==========================================
+ Hits         8377     8511     +134     
+ Misses       9704     8785     -919     
+ Partials     1423     1360      -63     

☔ View full report in Codecov by Sentry.

@wilfonba wilfonba marked this pull request as ready for review June 10, 2025 18:12
@wilfonba wilfonba requested a review from a team as a code owner June 10, 2025 18:12
@sbryngelson (Member) commented:

I'm going to do a code review, but I think I only have two lingering questions for verification:

  • Can you confirm that RDMA MPI works on Frontier and an NVIDIA machine? You don't need to test a bunch of cases; just one is fine.
  • Can you confirm that the output files, in particular the serial output files, are the same as they were before the PR? I noticed you modified those files. Can't remember if you modified the probe output, but if so then confirming that as well would be useful.

Mostly asking because the above two things aren't covered in CI (though they probably should be).

@wilfonba (Contributor, Author) replied:

The scaling plots I included show RDMA MPI on Frontier, so that's covered. I'm not sure which NVIDIA machines we have that support RDMA MPI; I'll try Phoenix, since I know that's the machine Max has been using it on. I'll have to go back and look at my changes to the serial output files to remember what they even were, but I can check on that as well.
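For context on the RDMA question above: GPU-aware (RDMA) MPI means handing device-resident buffers directly to MPI without staging them through the host. A minimal sketch, assuming OpenACC-managed buffers and hypothetical names (not code from this PR):

```fortran
! Illustrative sketch only -- not code from this PR; buffer names are hypothetical.
! "RDMA MPI" here means GPU-aware MPI: device buffers are passed straight to MPI.
program gpu_aware_halo
    use mpi
    implicit none
    integer, parameter :: n = 1024
    real(8) :: send_buf(n), recv_buf(n)
    integer :: rank, nprocs, left, right, ierr

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    left = mod(rank - 1 + nprocs, nprocs)
    right = mod(rank + 1, nprocs)

    send_buf = real(rank, 8)

    !$acc data copyin(send_buf) copyout(recv_buf)
    ! Pass the *device* addresses of the buffers to MPI. This requires a
    ! GPU-aware MPI build (e.g. cray-mpich with GTL on Frontier, or a
    ! CUDA-aware MPI on NVIDIA systems).
    !$acc host_data use_device(send_buf, recv_buf)
    call MPI_Sendrecv(send_buf, n, MPI_DOUBLE_PRECISION, right, 0, &
                      recv_buf, n, MPI_DOUBLE_PRECISION, left, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    !$acc end host_data
    !$acc end data

    if (rank == 0) print *, 'received from left neighbor:', recv_buf(1)
    call MPI_Finalize(ierr)
end program gpu_aware_halo
```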
