LLM VSCode Extension with Ollama providing bizarre inference results for Rust

I have a MacBook Pro with an M2 Pro and 16 GB of RAM. I installed Ollama and pulled the StarCoder model:

ollama pull starcoder:latest
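
To double-check that the model is actually available locally, I can list it and run a quick one-off prompt from the terminal. This is just a sanity check against a default Ollama install; the prompt is a throwaway example.

# assumes a default local Ollama install; the prompt is only an example
ollama list
ollama run starcoder:latest "fn add(a: i32, b: i32) -> i32 {"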

I installed the Hugging Face LLM extension for Visual Studio Code.
I created an API token and logged into the extension.
I opened the User Settings (JSON) in VSCode and added the basic configuration for the extension:

{
    "llm.backend": "huggingface",
    "llm.enableAutoSuggest": true,
    "llm.configTemplate": "hf/bigcode/starcoder",
}
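
From what I can tell from the extension's README, pointing it at a local Ollama server means switching the backend to ollama and setting the endpoint explicitly. This is roughly what I believe the configuration should look like; the llm.url, llm.modelId, and llm.requestBody values are my own guesses for a default local install (and so are the StarCoder fill-in-the-middle tokens), so any of them may be wrong.

{
    // setting names below are my best reading of the llm-vscode README; values are guesses
    "llm.backend": "ollama",
    "llm.url": "http://localhost:11434/api/generate",
    "llm.modelId": "starcoder:latest",
    "llm.requestBody": {
        "options": { "num_predict": 60, "temperature": 0.2 }
    },
    "llm.fillInTheMiddle.enabled": true,
    "llm.fillInTheMiddle.prefix": "<fim_prefix>",
    "llm.fillInTheMiddle.suffix": "<fim_suffix>",
    "llm.fillInTheMiddle.middle": "<fim_middle>",
    "llm.tokensToClear": ["<|endoftext|>"],
    "llm.enableAutoSuggest": true
}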

I created a new Rust source file and let inference run.
The results were very bizarre, as if StarCoder were trying to generate Python code for my Rust file.
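
To separate an extension problem from a model problem, I can also hit the Ollama HTTP API directly with a StarCoder-style fill-in-the-middle prompt and look at the raw completion. This is only a sketch: it assumes Ollama is serving on its default port 11434, and the prompt is just an example.

# assumes the default Ollama endpoint; the FIM-style prompt is only an example
curl -s http://localhost:11434/api/generate -d '{
  "model": "starcoder:latest",
  "prompt": "<fim_prefix>fn main() {\n    let nums = vec![1, 2, 3];\n    <fim_suffix>\n}\n<fim_middle>",
  "stream": false,
  "options": { "num_predict": 64 }
}'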

What’s going on here, and how can I make sure I’m getting accurate inference results for Rust instead of Python, whether with StarCoder or some other model?