
Support new Open AI model configuration#717

Open
Ayush8923 wants to merge 7 commits into main from feat/support-new-models-configuration

Conversation

Collaborator

@Ayush8923 Ayush8923 commented Mar 25, 2026

Summary

Target issue is: ProjectTech4DevAI/kaapi-frontend#88
Explain the motivation for making this change. What existing problem does the pull request solve?

Checklist

Before submitting a pull request, please ensure that you complete these tasks.

  • Ran fastapi run --reload app/main.py or docker compose up in the repository root and tested.
  • If you've fixed a bug or added code, ensure it is covered by test cases.

Notes

Please add here if any other information is required for the reviewer.

Summary by CodeRabbit

  • New Features

    • Expanded OpenAI model support with seven additional models: GPT‑5.4 Pro, GPT‑5.4 Mini, GPT‑5.4 Nano, GPT‑5, GPT‑4 Turbo, GPT‑4, and GPT‑3.5 Turbo.
  • Style

    • Readability-focused formatting refinements for background task definitions and job-start helpers.
  • Chores

    • Minor whitespace/trailing-line cleanup.
    • LLM temperature is no longer forwarded by default — omitted unless explicitly provided.


coderabbitai bot commented Mar 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Added new OpenAI text model identifiers to LLM constants, made KaapiCompletionConfig avoid persisting an implied temperature unless explicitly provided, and applied formatting-only edits across Celery modules (celery_app.py, tasks/job_execution.py, utils.py).

Changes

  • OpenAI Model Support (backend/app/models/llm/constants.py): added gpt-5.4-pro, gpt-5.4-mini, gpt-5.4-nano, gpt-5, gpt-4-turbo, gpt-4, and gpt-3.5-turbo to SUPPORTED_MODELS for ("openai", "text").
  • Kaapi completion param handling (backend/app/models/llm/request.py): KaapiCompletionConfig.validate_params() now detects whether "temperature" was explicitly provided and removes an implied/default temperature from self.params when the user did not set it.
  • Celery app config, trivial (backend/app/celery/celery_app.py): removed trailing blank line(s); no behavioral changes.
  • Celery task definitions, formatting (backend/app/celery/tasks/job_execution.py): reformatted several Celery task function definitions to multi-line signatures and adjusted import formatting; no signature or behavior changes.
  • Celery helpers & starters, formatting + logging (backend/app/celery/utils.py): reformatted function signature line breaks and .delay(...) calls to multi-line keyword args; wrapped log messages; no change to dispatch or the returned task.id.
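The registry change summarized above might look roughly like the sketch below. The exact shape of SUPPORTED_MODELS and the is_supported helper are assumptions for illustration, not the repository's actual code.

```python
# Hypothetical sketch of the model registry; the actual structure of
# SUPPORTED_MODELS in backend/app/models/llm/constants.py may differ.
SUPPORTED_MODELS = {
    ("openai", "text"): [
        "gpt-5.4-pro",
        "gpt-5.4-mini",
        "gpt-5.4-nano",
        "gpt-5",
        "gpt-4-turbo",
        "gpt-4",
        "gpt-3.5-turbo",
    ],
}


def is_supported(provider: str, modality: str, model: str) -> bool:
    """Return True when the model id is registered for (provider, modality)."""
    return model in SUPPORTED_MODELS.get((provider, modality), [])
```

Keying the registry by a (provider, modality) tuple keeps lookups O(1) per pair and makes adding a new provider or modality a one-line change.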

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • Feature/celery new execute job #714 — modifies the same Celery task and helper files (backend/app/celery/tasks/job_execution.py, backend/app/celery/utils.py) and likely overlaps with formatting/signature edits.

Suggested labels

enhancement

Suggested reviewers

  • vprashrex
  • Prajna1999
  • kartpop

Poem

🐰 I hopped through constants, adding names anew,
Kept a quiet temperature if you never knew.
I straightened task lines, trimmed a trailing dot,
Nibbled code crumbs, then danced — a tidy little plot! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 5.56%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title 'Support new Open AI model configuration' aligns with the main change, adding new OpenAI model identifiers to the SUPPORTED_MODELS constant.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Ayush8923 Ayush8923 assigned Ayush8923 and unassigned Ayush8923 Mar 25, 2026

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
backend/app/models/llm/request.py (1)

33-34: Add a regression test for “unset temperature” serialization behavior.

Please add a test that verifies unset temperature is omitted (not null) after validation/serialization, so this new behavior stays protected against future regressions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In backend/app/models/llm/request.py around lines 33-34, add a unit test that constructs the request model (class name: LLMRequest, or the request model defined in request.py) without setting temperature, then validate/serialize it using the model's .dict(exclude_none=True) or .json(exclude_none=True) and assert that the serialized output does not include a "temperature" key (i.e., not present and not set to null). The test should also cover the reverse case (set temperature to a float and assert the key is present with the correct value) so both behaviors are guarded against regressions.
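The requested test could be sketched as below. Since the real Pydantic model is not shown in this thread, FakeRequest and its serialize() helper (standing in for .dict(exclude_none=True)) are hypothetical stand-ins; adapt the names to the actual model in backend/app/models/llm/request.py.

```python
_UNSET = object()  # sentinel: distinguishes "never provided" from an explicit None


class FakeRequest:
    """Illustrative stand-in for the Pydantic request model in request.py."""

    def __init__(self, temperature=_UNSET):
        self._explicit = temperature is not _UNSET
        self.temperature = None if temperature is _UNSET else temperature

    def serialize(self) -> dict:
        # Mimics .dict(exclude_none=True): temperature is omitted when unset.
        out = {}
        if self._explicit and self.temperature is not None:
            out["temperature"] = self.temperature
        return out


def test_unset_temperature_is_omitted():
    # The key must be absent entirely, not present with a null value.
    assert "temperature" not in FakeRequest().serialize()


def test_explicit_temperature_is_present():
    assert FakeRequest(temperature=0.7).serialize() == {"temperature": 0.7}


test_unset_temperature_is_omitted()
test_explicit_temperature_is_present()
```

Both directions are asserted so a future refactor cannot silently reintroduce a default temperature or drop an explicit one.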

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 19a4f0a4-d0df-4713-82d0-5481af2e620a

📥 Commits

Reviewing files that changed from the base of the PR and between 6b3abab and 17374e4.

📒 Files selected for processing (1)
  • backend/app/models/llm/request.py

Copy link
Copy Markdown

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
backend/app/models/llm/request.py (1)

254-254: Add regression coverage for omitted vs explicit temperature.

This path now depends on whether the caller included the temperature key, which is subtle and easy to regress. Please add at least one case with no temperature, one with an explicit numeric temperature, and—if null is intended to be supported—a case for that too.

Also applies to: 292-293
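The omitted-vs-explicit distinction this comment describes can be sketched in plain Python. The real KaapiCompletionConfig is presumably Pydantic-based (where explicitness can be read from model_fields_set), so the sentinel-based constructor below is an illustrative assumption, not the project's actual code.

```python
from typing import Any

_UNSET = object()  # sentinel: caller never passed a temperature at all


class KaapiCompletionConfig:
    """Illustrative stand-in for the Pydantic model in request.py."""

    def __init__(self, model: str, temperature: Any = _UNSET, **params: Any) -> None:
        self.model = model
        self.params: dict[str, Any] = dict(params)
        self._temperature_explicit = temperature is not _UNSET
        if self._temperature_explicit:
            self.params["temperature"] = temperature
        self.validate_params()

    def validate_params(self) -> None:
        # Drop an implied/default temperature unless the caller set one,
        # so the provider's own default applies downstream.
        if not self._temperature_explicit:
            self.params.pop("temperature", None)
```

Because the branch hinges on whether the key was supplied rather than on its value, the three cases the reviewer lists (absent, explicit float, explicit null) each exercise a distinct path.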


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 68e21f9d-d44d-4bdc-9ebe-ce9a19f08b4d

📥 Commits

Reviewing files that changed from the base of the PR and between 17374e4 and c15fbde.

📒 Files selected for processing (1)
  • backend/app/models/llm/request.py


codecov bot commented Mar 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

