
pre-import llm jobs module on worker starutp#715

Merged
Prajna1999 merged 1 commit into main from hotfix/celery-warm-start-llm-jobs on Mar 24, 2026

Conversation

@kartpop (Collaborator) commented Mar 24, 2026

Summary

Target issue is #PLEASE_TYPE_ISSUE_NUMBER
Explain the motivation for making this change. What existing problem does the pull request solve?

Checklist

Before submitting a pull request, please ensure that you mark these tasks.

  • Ran fastapi run --reload app/main.py or docker compose up in the repository root and tested.
  • If you've fixed a bug or added code, ensure it is tested and has test cases.

Notes

Please add here if any other information is required for the reviewer.

Summary by CodeRabbit

  • Performance Improvements
    • Enhanced backend worker initialization to speed up task processing.
    • Increased worker resource limits to improve system stability and throughput capacity.

@kartpop kartpop requested a review from Prajna1999 March 24, 2026 02:39

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

This PR updates Celery worker initialization and configuration. It adds a signal handler to preload LLM modules on worker startup and adjusts worker resource limits by increasing max tasks per child worker from 1 to 150 and max memory per child worker from 200000 to 300000.
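The preloading mechanism described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the handler name warm_llm_modules, the **_: Any signature, and the preloaded module path app.services.llm.jobs come from this PR, while everything else is an assumption, and a stdlib module stands in for the real import so the sketch runs without the project or Celery installed.

```python
import importlib
import logging
import sys
from typing import Any

logger = logging.getLogger(__name__)

# Sketch of the preloading handler. In the actual
# backend/app/celery/celery_app.py the function is decorated with
# @worker_process_init.connect (from celery.signals), so Celery calls it
# once in every freshly forked worker process.
PRELOAD_MODULE = "email.mime.text"  # stand-in for "app.services.llm.jobs"


def warm_llm_modules(**_: Any) -> None:
    """Eagerly import heavy LLM job modules so the first task skips the cost."""
    importlib.import_module(PRELOAD_MODULE)
    logger.info("Pre-imported %s on worker startup", PRELOAD_MODULE)
```

Hooking the import into worker_process_init means the cost is paid once at fork time rather than on the first task each child processes, which matters when children are recycled frequently.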

Changes

LLM Module Preloading (backend/app/celery/celery_app.py)
Added a warm_llm_modules signal handler connected to worker_process_init to import app.services.llm.jobs when each worker process starts, with info-level logging.

Worker Resource Configuration (backend/app/core/config.py)
Increased Celery worker resource limits: CELERY_WORKER_MAX_TASKS_PER_CHILD from 1 to 150 and CELERY_WORKER_MAX_MEMORY_PER_CHILD from 200000 to 300000.
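The two limit changes can be pictured as a settings fragment. This is a sketch: the real backend/app/core/config.py may declare these fields differently (for example via pydantic settings); only the field names and values come from the PR, and the use of a dataclass is an assumption.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CeleryWorkerSettings:
    # Recycle a worker child after 150 tasks. The old value of 1 forked a
    # fresh process for every task, forcing a cold re-import of the LLM
    # modules each time.
    CELERY_WORKER_MAX_TASKS_PER_CHILD: int = 150
    # Restart a child once it exceeds this resident-memory limit. Celery's
    # worker_max_memory_per_child setting is measured in kilobytes, so
    # 300000 is roughly 300 MB (raised from 200000, ~200 MB).
    CELERY_WORKER_MAX_MEMORY_PER_CHILD: int = 300000


settings = CeleryWorkerSettings()
```

Raising max-tasks-per-child trades isolation for throughput: with preloaded modules surviving across 150 tasks, the per-task startup cost largely disappears, while the memory cap still bounds any leak.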

Possibly related PRs

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 Carrots for the workers, pre-loaded and warm,
LLM modules spring to life in every swarm,
More tasks, more memory, the limits expand,
Celery hops faster across the backend land! 🌱

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Title check (⚠️ Warning): The title contains a typo ('starutp' instead of 'startup') and is only partially related to the changeset; the main objective of increasing worker task/memory limits is not reflected in the title.
Resolution: Correct the typo and ensure the title reflects the primary changes: pre-importing LLM modules and adjusting Celery worker configuration limits.
✅ Passed checks (2 passed)
Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which meets the required threshold of 80.00%.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/celery/celery_app.py`:
- Line 13: The function warm_llm_modules currently uses an untyped variadic
kwargs parameter (**_), which breaks the repository typing rule; update the
signature to use an explicit type hint for ignored signal kwargs (change **_ to
**_: Any) and ensure Any is imported from typing if not already present, keeping
the existing return annotation -> None and the function name warm_llm_modules to
locate the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6eb5d39c-cf89-4966-8b96-54b404640cc9

📥 Commits

Reviewing files that changed from the base of the PR, between commits 3428cb7 and 5e84451.

📒 Files selected for processing (2)
  • backend/app/celery/celery_app.py
  • backend/app/core/config.py



@worker_process_init.connect
def warm_llm_modules(**_) -> None:

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n backend/app/celery/celery_app.py | head -30

Repository: ProjectTech4DevAI/kaapi-backend

Length of output: 1110


🏁 Script executed:

rg "def.*\*\*" backend/app/celery/celery_app.py -A 2 -B 2


Length of output: 215


🏁 Script executed:

rg "def.*\*\*" --type py -A 1 | head -50


Length of output: 3467


🏁 Script executed:

head -20 backend/app/celery/celery_app.py


Length of output: 673


🏁 Script executed:

rg "from typing import|import typing" backend/app/celery/ --type py


Length of output: 131


🏁 Script executed:

wc -l backend/app/celery/celery_app.py


Length of output: 111


🏁 Script executed:

cat backend/app/celery/celery_app.py | grep -E "def |import"


Length of output: 386


🏁 Script executed:

rg "worker_process_init" --type py -B 2 -A 5


Length of output: 1067


🏁 Script executed:

python3 << 'EOF'
# Check if object is the right type or if Any is better for **kwargs
# Standard Python typing conventions
import inspect
from typing import get_type_hints

# According to PEP 484 and Python typing best practices:
# When accepting **kwargs with a type hint:
# - **kwargs: Any is the standard approach (requires: from typing import Any)
# - **kwargs: object is valid but less idiomatic
# - No type hint is common but violates strict typing requirements

print("Python typing best practice for **kwargs:")
print("1. **kwargs: Any (most common and idiomatic)")
print("2. **kwargs: object (valid but less common)")
print("\nFor Celery signal handlers specifically:")
print("Signal handlers receive kwargs but often don't use them")
print("Using **_: Any or **_: object both indicate the handler ignores kwargs")
EOF


Length of output: 374


Add a type hint for the variadic signal kwargs.

Line 13 leaves **_ untyped, which violates the repo typing rule for Python functions.

✅ Suggested fix
+from typing import Any
+
 import logging

 from celery import Celery
 from celery.signals import worker_process_init
 from kombu import Exchange, Queue

 from app.core.config import settings

 logger = logging.getLogger(__name__)


 @worker_process_init.connect
-def warm_llm_modules(**_) -> None:
+def warm_llm_modules(**_: Any) -> None:

As per coding guidelines, **/*.py: Always add type hints to all function parameters and return values in Python code. Use **_: Any to match the codebase pattern for signal handlers that intentionally ignore kwargs.


@Prajna1999 Prajna1999 merged commit 23b0306 into main Mar 24, 2026
1 of 2 checks passed
@Prajna1999 Prajna1999 deleted the hotfix/celery-warm-start-llm-jobs branch March 24, 2026 06:18