Shepherd is a powerful and elegant web application designed to be the ultimate companion for your Ollama server. Built with a focus on usability and aesthetics, it provides a comprehensive suite of tools to manage your local LLM infrastructure. Whether you are a developer testing new models, a researcher organizing a library, or an enthusiast exploring local AI, Shepherd gives you full control through a sleek, dark-mode interface.
- Real-time Monitoring: Visualize server status and key metrics at a glance.
- Library Overview: Track your total installed models and their aggregate size.
- Quick Actions: Access your most frequently used tools directly from the home screen.
> **Tip:** View the Full Screenshot Gallery for a visual tour of all features, including Settings, Model Factory, and more.
- One-Click Installation: Search and pull models directly from the official Ollama library.
- Detailed Metadata: Inspect technical details including parameter size (7B, 70B), quantization level (Q4_K_M, FP16), and model family.
- Smart Filters: Easily sort and filter your library to find the right model for the job.
- Deep Inspection: View the raw `Modelfile`, prompt templates, system parameters, and license information with syntax highlighting.
- Copy & Backup: Duplicate models with a single click to create experimental branches or backups (`my-model-v1` -> `my-model-v2`).
- Custom Model Creation: Build new models on top of existing ones without touching a terminal.
- Prompt Engineering: Define custom system prompts and prompt templates.
- Fine-Grained Control: Adjust inference parameters such as:
- Temperature: Control creativity vs. determinism.
- Context Window: Set custom context lengths (e.g., 8192, 32k).
- Stop Sequences: Define custom stop tokens.
- Seeds & Sampling: Set specific seeds for reproducibility.
- Embeddings Factory: Create specialized embedding models tailored to your RAG pipelines.
- File-to-Vector: Upload text files (`.txt`, `.md`, `.json`) and instantly generate vector embeddings. The result is a downloadable JSON file ready for your vector database.
- Integrated Chat Playground: Test any installed model instantly in a streaming chat interface to verify behavior and prompt effectiveness.
- Multi-Server Support: Manage multiple Ollama instances (local, remote, or cloud-hosted) from a single interface.
- Responsive Design: Fully responsive UI that works on desktop, tablet, and mobile.
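The inference parameters listed above (temperature, context window, stop sequences, seed) correspond to Ollama's generation `options`. As a rough illustration of how such a request body fits together — not Shepherd's actual code, and the helper name `build_generate_payload` is invented for this sketch — a payload for Ollama's `/api/generate` endpoint could be assembled like this:

```python
import json

def build_generate_payload(model, prompt, *, temperature=0.8, num_ctx=8192,
                           stop=None, seed=None):
    """Assemble a request body for Ollama's /api/generate endpoint.

    The keys under "options" mirror the Model Factory settings:
    temperature, context window (num_ctx), stop sequences, and seed.
    """
    options = {"temperature": temperature, "num_ctx": num_ctx}
    if stop:
        options["stop"] = list(stop)  # custom stop tokens
    if seed is not None:
        options["seed"] = seed  # fixed seed -> reproducible sampling
    return {"model": model, "prompt": prompt, "options": options}

payload = build_generate_payload("llama3", "Hello!", temperature=0.2,
                                 num_ctx=32768, stop=["</s>"], seed=42)
print(json.dumps(payload, indent=2))
```

Tools like the Model Factory save these choices into a Modelfile so you don't have to repeat them per request.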
- Ollama: You must have an Ollama server running.
  - Download Ollama
  - By default, it runs on `http://localhost:11434`.
- Python 3.10+: Ensure you have a compatible Python version installed.
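Before installing, you can confirm the Ollama server is actually reachable. A minimal, illustrative check using only the Python standard library (`/api/version` is Ollama's version endpoint):

```python
from urllib.request import urlopen
from urllib.error import URLError

def ollama_is_up(base_url="http://localhost:11434", timeout=2):
    """Return True if an Ollama server answers at base_url."""
    try:
        with urlopen(f"{base_url}/api/version", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused / timeout -> no server listening there
        return False

print(ollama_is_up())
```

Equivalently, `curl http://localhost:11434/api/version` from a shell should return a JSON version string.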
1. Clone the Repository

   ```bash
   # Clone to your preferred location (e.g., /opt/shepherd or ~/shepherd)
   git clone https://git.mamber.in/Personal/Shepherd.git /opt/shepherd
   cd /opt/shepherd
   ```

2. Create a Virtual Environment

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install Dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Launch the Application

   ```bash
   uvicorn backend.main:app --host 0.0.0.0 --port 8000
   ```

5. Shepherd is Live! Open your browser and navigate to: http://localhost:8000
For production or always-on deployments, run Shepherd as a systemd service.
```bash
# Create a dedicated system user and give it ownership of the install directory
sudo useradd -r -s /bin/false shepherd
sudo chown -R shepherd:shepherd /opt/shepherd
```

A sample service file is provided at `shepherd.service`. You must edit it to match your setup:
| Setting | Default Value | What to Change |
|---|---|---|
| `User` | `shepherd` | The Linux user that will run the service |
| `Group` | `shepherd` | The group for the service user |
| `WorkingDirectory` | `/opt/shepherd` | The path where you cloned the repository |
| `ExecStart` | `/opt/shepherd/venv/bin/uvicorn ...` | Update the path if you cloned elsewhere |
Example: If you cloned to `/home/myuser/apps/shepherd` and want to run as `myuser`:

```ini
User=myuser
Group=myuser
WorkingDirectory=/home/myuser/apps/shepherd
ExecStart=/home/myuser/apps/shepherd/venv/bin/uvicorn backend.main:app --host 0.0.0.0 --port 8000
```

```bash
# Copy the service file to systemd
sudo cp shepherd.service /etc/systemd/system/

# Reload systemd to recognize the new service
sudo systemctl daemon-reload

# Enable the service to start on boot
sudo systemctl enable shepherd

# Start the service now
sudo systemctl start shepherd
```

```bash
# Check the service status
sudo systemctl status shepherd

# View live logs
sudo journalctl -u shepherd -f
```

```bash
sudo systemctl stop shepherd      # Stop the service
sudo systemctl restart shepherd   # Restart after config changes
sudo systemctl disable shepherd   # Prevent auto-start on boot
```

Shepherd is built to empower your experience with Ollama. Here are essential resources for the ecosystem:
| Resource | Description |
|---|---|
| Ollama Website | The official home of the project. Download the runner here. |
| Model Library | Browse thousands of community models (Llama 3, Mistral, Gemma, Phi-3). |
| Documentation | Deep dive into installation, API, and customization. |
| Modelfile Guide | Learn how to craft custom Modelfiles for your specific needs. |
| API Reference | Technical details for the REST API that powers Shepherd. |
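As a taste of that REST API: the library overview described earlier (total model count and aggregate size) can be derived from `GET /api/tags`, which returns a `models` array whose entries include a `size` in bytes. A hedged sketch — the helper and the sample response below are illustrative, not Shepherd's internals:

```python
def summarize_library(tags_response):
    """Summarize an Ollama /api/tags response: (model count, total size in GB)."""
    models = tags_response.get("models", [])
    total_bytes = sum(m.get("size", 0) for m in models)
    return len(models), total_bytes / 1e9

# Hypothetical response in the shape returned by GET /api/tags
sample = {"models": [
    {"name": "llama3:latest", "size": 4_700_000_000},
    {"name": "mistral:7b-instruct-q4_K_M", "size": 4_100_000_000},
]}
count, gb = summarize_library(sample)
print(f"{count} models, {gb:.1f} GB")  # -> 2 models, 8.8 GB
```

In a live setup you would fetch the JSON from `http://localhost:11434/api/tags` instead of using a literal.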
We welcome contributions from the community! Whether it's adding a new feature, fixing a bug, or improving documentation, your help is appreciated.
1. Fork the repository.
2. Create your feature branch (`git checkout -b feature/AmazingFeature`).
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`).
4. Push to the branch (`git push origin feature/AmazingFeature`).
5. Open a Pull Request.
Shepherd is open-source software licensed under the GNU General Public License v3.0.