Ollama-run models not appearing in “Prompts” or “Review” dropdowns in Qt AI Assistant
What happens
- I start the models with Ollama:
ollama run codellama:7b-code
ollama run phi3
Verifying with ollama ps shows both models as running.
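For completeness, the server's model list can also be queried over the REST API (default port 11434), which I assume is what Qt Creator uses for model discovery:
curl http://localhost:11434/api/tags
If a model were missing from that response, Qt Creator presumably couldn't discover it either.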
- In Qt Creator → Preferences → AI Assistant → General, I see the “through Ollama” entries under Code completion, and I can successfully select and use them for code completion.
- However, under the Prompts and Review dropdowns, no “through Ollama” options appear at all.
What I’ve checked
- Model names exactly match those listed in ollama ps (including the colon in codellama:7b-code).
- Qt Creator was fully restarted after each change.
- The Advanced tab does not show a “+” button, so I manually edited aiassistant.json in my Qt settings folder:
{
"customProviders": [
{
"name": "phi3 via Ollama",
"apiType": "chat",
"serverUrl": "http://localhost:11434",
"model": "phi3"
}
]
}
But “phi3 via Ollama” still does not appear under Prompts or Review.
I have confirmed that Prompts/Review require chat-compatible models, and phi3 should qualify.
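For what it's worth, chat capability can be exercised directly against Ollama's chat endpoint (stream disabled to get a single JSON response):
curl http://localhost:11434/api/chat -d '{
  "model": "phi3",
  "messages": [{ "role": "user", "content": "Say hello" }],
  "stream": false
}'
A normal assistant reply here would confirm that the model itself handles chat requests, which would make the missing dropdown entries look like a discovery issue on the Qt Creator side.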
Questions
- Is this a current limitation or bug in the Qt AI Assistant plugin?
- Has anyone successfully used Ollama-run models (especially non-CodeLlama ones) in the Prompts or Review features? If so, what extra steps are needed?
- Are there any hidden settings or log files I should inspect to troubleshoot model discovery for chat features?
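In case it helps with that last question: Qt Creator uses Qt's categorized logging, so I would expect something along these lines to surface plugin output (the exact category pattern for the AI Assistant plugin is a guess on my part):
QT_LOGGING_RULES="qtc.*=true" qtcreator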
Thank you in advance for any insights or suggestions!