
Conversation

@shifangx (Contributor) commented Jan 29, 2026

What does this PR do?

Fixes issue #3135.
Global process groups such as `_TENSOR_MODEL_PARALLEL_GROUP` in `parallel_state.py` will be deprecated after M4.
So we need to pass `pg_collection` when initializing `TECudaGraphHelper`, and use the process groups from `pg_collection` from then on.
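In code, the change amounts to something like this minimal sketch (the constructor signature here is illustrative and assumed, not the exact one from this PR; the `pg_collection` attribute names match the diff reviewed below):

```python
# Before (deprecated after M4): reading module-level global groups.
#
#   from megatron.core import parallel_state
#   tp_group = parallel_state.get_tensor_model_parallel_group()

# After: the caller passes in a process-group collection, and the helper
# reads every group it needs from that object instead of from globals.
class TECudaGraphHelper:  # sketch only, not the real class body
    def __init__(self, config, pg_collection):
        self.config = config
        self.pg_collection = pg_collection
        self.tp_group = pg_collection.tp
        self.dp_cp_group = pg_collection.dp_cp
        self.pp_group = pg_collection.pp
```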

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review may be declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either the core-adlr or the core-nemo team.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@shifangx requested review from a team as code owners January 29, 2026 07:37
@copy-pr-bot commented Jan 29, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g requested a review from a team January 29, 2026 07:38
@shifangx changed the title from "fix issue with ccuda graph and m4" to "fix issue with cuda graph and m4" Jan 29, 2026
@shifangx force-pushed the shifang/cuda_graph_m4 branch 2 times, most recently from 534de98 to bfce1c9, January 29, 2026 11:51
@shifangx force-pushed the shifang/cuda_graph_m4 branch from bfce1c9 to 98d5fcf, January 29, 2026 13:47
@yaoyu-33 added the Expert Review label Jan 29, 2026
self.tp_group = self.pg_collection.tp
self.dp_cp_group = self.pg_collection.dp_cp
self.pp_group = self.pg_collection.pp
from megatron.core.pipeline_parallel.p2p_communication import P2PCommunicator
Contributor commented:

nit: it is good to have imports at the top of the file
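For illustration, the suggestion amounts to hoisting the import to module scope (a sketch; surrounding code elided):

```python
# Top of the module, alongside the other imports:
from megatron.core.pipeline_parallel.p2p_communication import P2PCommunicator

# ...the method body then uses the name directly, with no inline import:
#   self.p2p_communicator = P2PCommunicator(pp_group=self.pp_group, config=self.config)
```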

self.dp_cp_group = self.pg_collection.dp_cp
self.pp_group = self.pg_collection.pp
from megatron.core.pipeline_parallel.p2p_communication import P2PCommunicator
self.p2p_communicator = P2PCommunicator(pp_group=self.pp_group, config=self.config)
Contributor commented:

What is this self.config? It looks like the P2P communicator needs a ModelParallelConfig, not a TransformerConfig.
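For context, a quick hedged check of the relevant class hierarchy (this assumes the public megatron.core layout, where `TransformerConfig` subclasses `ModelParallelConfig`; the PR's own config plumbing may differ):

```python
from megatron.core.model_parallel_config import ModelParallelConfig
from megatron.core.transformer.transformer_config import TransformerConfig

# TransformerConfig inherits from ModelParallelConfig, so an instance of it
# satisfies a parameter annotated as ModelParallelConfig; the review comment
# is about which of the two types self.config is actually meant to carry.
assert issubclass(TransformerConfig, ModelParallelConfig)
```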

@jiemingz self-requested a review January 29, 2026 19:06
@shifangx force-pushed the shifang/cuda_graph_m4 branch from ecc7a1a to 14c15b0, January 30, 2026 13:27
@shifangx force-pushed the shifang/cuda_graph_m4 branch from 14c15b0 to 844848b, January 30, 2026 13:30

Labels

complexity: low, Expert Review

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants