
Expand LoRA test coverage with dtype, gradient, and edge case tests#5280

Open
us wants to merge 1 commit into google:main from us:worktree-lora-test

Conversation

Contributor

@us commented Feb 26, 2026

What does this PR do?

Expands test coverage for flax/nnx/nn/lora.py (LoRA and LoRALinear), addressing the gap identified in the contributions list (Level 2, item 5).

New tests added:

  • test_rank_one — rank=1 edge case with shape and output verification
  • test_dtypes — parameterized dtype/param_dtype combinations (float32/float16) with proper tolerance
  • test_initial_output_zero — verifies zero-initialized lora_b produces exact zero output
  • test_gradient_flow — confirms gradients flow through both lora_a and lora_b
  • test_gradient_flow_with_frozen_base — verifies LoRA params get gradients while base params are excluded via DiffState
  • test_lora_linear_dtypes — parameterized lora_dtype/lora_param_dtype combinations for LoRALinear
  • test_noncallable_base_module_raises — error path when base_module is not callable
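The core identities the shape and zero-output tests exercise can be sketched in plain NumPy (an illustrative sketch of the math only, not the Flax NNX test code; the feature sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
in_features, rank, out_features = 3, 1, 4  # rank=1 edge case

x = rng.normal(size=(1, in_features)).astype(np.float32)
lora_a = rng.normal(size=(in_features, rank)).astype(np.float32)
lora_b = np.zeros((rank, out_features), dtype=np.float32)  # zero-initialized

# A standalone LoRA layer computes x @ lora_a @ lora_b.
y = x @ lora_a @ lora_b

# Shapes compose even at rank=1, and the zero-initialized lora_b
# makes the initial LoRA output exactly zero.
assert y.shape == (1, out_features)
np.testing.assert_array_equal(y, np.zeros((1, out_features), dtype=np.float32))
```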

Existing test improvements:

  • Upgraded test class to parameterized.TestCase
  • Replaced bare assert with self.assertEqual/self.assertIs/self.assertIsNone
  • Used Variable[...] access instead of direct Variable in matmul
  • Added explicit rtol to all assert_allclose calls
  • Fixed import order to match sibling test files
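As a small illustration of the assertion style described above (a hypothetical test body; the real file uses absl's `parameterized.TestCase`, shown here with stdlib `unittest` for self-containment):

```python
import unittest

import numpy as np

class TestAssertionStyle(unittest.TestCase):
  """Hypothetical example mirroring the assertion style of the PR."""

  def test_explicit_assertions(self):
    y = np.float32(0.1) + np.float32(0.2)
    # Explicit unittest assertions produce informative failure messages,
    # unlike a bare `assert`.
    self.assertEqual(y.dtype, np.float32)
    # An explicit rtol documents the tolerance the comparison relies on.
    np.testing.assert_allclose(y, 0.3, rtol=1e-6)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAssertionStyle))
```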

Checklist

  • This PR fixes a minor issue (e.g.: typo or small bug) or improves the docs (you can dismiss the other checks if that's the case).
  • This change is discussed in a Github issue/discussion (please add a link).
  • The documentation and docstrings adhere to the documentation guidelines.
  • This change includes necessary high-coverage tests. (No quality testing = no merge!)

@gemini-code-assist
Contributor

Summary of Changes

Hello @us, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the test suite for the LoRA (Low-Rank Adaptation) implementation within Flax NNX. The changes focus on improving the robustness and reliability of the LoRA module by adding comprehensive tests for data types, gradient propagation, and various operational edge cases. The overall goal is to ensure the stability and correctness of LoRA functionality across different configurations and usage patterns.

Highlights

  • Expanded Test Coverage for LoRA: Introduced new test cases to cover various aspects of LoRA functionality, including rank-one edge cases, dtype combinations, initial output verification, and gradient flow behavior.
  • Dtype and Gradient Flow Tests: Added specific tests to validate LoRA's behavior with different dtype and param_dtype settings, ensuring correct precision handling. Gradient flow tests confirm that gradients propagate through LoRA parameters, even when base module parameters are frozen.
  • Improved Test Assertions and Structure: Upgraded the test class to parameterized.TestCase and replaced bare assert statements with more robust self.assertEqual, self.assertIs, and self.assertIsNone calls. Explicit rtol values were added to assert_allclose for better numerical stability checks.
  • Edge Case and Error Handling Tests: Included tests for edge cases like rank=1 and lora_b zero-initialization, as well as an error path test for non-callable base_module inputs.
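The dtype/param_dtype split those tests sweep over can be sketched as follows (an illustrative NumPy sketch only; `nnx.LoRA`'s actual promotion rules live in its implementation):

```python
import numpy as np

# Params stored in float32 (param_dtype); computation requested in float16 (dtype).
x = np.ones((1, 3), dtype=np.float32)
lora_a = np.full((3, 2), 0.5, dtype=np.float32)
lora_b = np.full((2, 4), 0.25, dtype=np.float32)

compute_dtype = np.float16
# Casting to the compute dtype before the matmuls mirrors the
# dtype/param_dtype combinations the parameterized tests cover.
y = (x.astype(compute_dtype)
     @ lora_a.astype(compute_dtype)
     @ lora_b.astype(compute_dtype))

assert y.dtype == np.float16
# Lower-precision compute calls for a looser rtol in assert_allclose.
np.testing.assert_allclose(y, 0.75, rtol=1e-3)
```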


Changelog
  • tests/nnx/nn/lora_test.py
    • Updated the test class name from TestLora to TestLoRA and changed its base class to parameterized.TestCase for enhanced test parameterization.
    • Replaced direct assert statements with self.assertEqual, self.assertIs, and self.assertIsNone for more explicit and informative test failures.
    • Added rtol=1e-6 to np.testing.assert_allclose calls to specify relative tolerance for floating-point comparisons.
    • Modified tensor access within matrix multiplication operations to use [...] for Variable instances.
    • Introduced test_rank_one to verify LoRA behavior with a rank of one.
    • Added test_dtypes using parameterized.product to test various combinations of dtype and param_dtype for LoRA.
    • Implemented test_initial_output_zero to confirm that LoRA produces zero output when lora_b is zero-initialized.
    • Created test_gradient_flow to ensure gradients correctly propagate through lora_a and lora_b parameters.
    • Added test_gradient_flow_with_frozen_base to verify gradient flow for LoRA parameters while base module parameters are excluded via DiffState.
    • Included test_lora_linear_dtypes using parameterized.product to test dtype combinations for LoRALinear.
    • Added test_noncallable_base_module_raises to check for proper error handling when base_module is not callable.
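The gradient-flow behaviour checked by `test_gradient_flow` and `test_gradient_flow_with_frozen_base` can be illustrated with a hand-derived backward pass in NumPy (a sketch of the underlying math; the actual tests differentiate with Flax NNX and exclude base params via DiffState):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3)).astype(np.float32)
w_base = rng.normal(size=(3, 4)).astype(np.float32)   # frozen base weight
lora_a = rng.normal(size=(3, 2)).astype(np.float32)
lora_b = rng.normal(size=(2, 4)).astype(np.float32)

# Forward pass: base output plus the low-rank update.
y = x @ w_base + x @ lora_a @ lora_b
loss = y.sum()

# Backward pass for loss = sum(y): dL/dy is all ones.
g = np.ones_like(y)
grad_lora_b = (x @ lora_a).T @ g        # dL/d(lora_b)
grad_lora_a = x.T @ (g @ lora_b.T)      # dL/d(lora_a)

# Both LoRA factors receive nonzero gradients; a frozen base weight is
# simply excluded from the set of differentiated parameters.
assert np.abs(grad_lora_a).sum() > 0 and np.abs(grad_lora_b).sum() > 0
```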


@gemini-code-assist (bot) left a review comment


Code Review

This pull request significantly expands the test coverage for the LoRA implementation in flax.nnx. The new tests cover dtypes, gradient flow, and various edge cases, which is a great improvement. The existing tests are also improved by using proper test assertions and being more explicit. I've found a couple of minor opportunities to make one of the tests even more complete. Overall, this is a high-quality contribution that improves the robustness of the LoRA module.

@us force-pushed the worktree-lora-test branch from 5369b6d to c31157b on February 26, 2026 20:31
