
Phi‑3‑mini‑4k‑instruct (Microsoft)

Instruction‑tuned small language model with strong reasoning and a 4K context window.

  • Compact but modern. A ~3.8B‑parameter model from the Phi‑3 family, trained on high‑quality synthetic and curated data, then instruction‑tuned for chat and task following.

  • Good reasoning, clearer answers. Performs well on logic, math, coding, and structured tasks compared to older small models, with more consistent instruction adherence.

  • 4K context. Supports a 4,096‑token context window, enough for longer prompts, documents, and multi‑turn conversations.

  • Easy to deploy. Small enough for single‑GPU or optimized CPU deployments.

  • Straightforward setup. Single model, standard tokenizer, no external tools or pipelines required (see the sketch after this list).
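
Setup really is minimal: the model ships with its own chat template, so a few lines of Hugging Face transformers are enough to run it locally. Below is a minimal sketch; the Hugging Face model id, dtype, and generation settings are illustrative assumptions, not Norman‑specific values.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # uses a GPU if one is available, otherwise CPU
    # Older transformers releases may also need trust_remote_code=True.
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Summarize the benefits of small language models."},
]

# The tokenizer carries the chat template, so no manual prompt formatting is needed.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))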

Why pick it for Norman AI?

Phi‑3‑mini‑4k‑instruct is a strong default for instruction‑following workloads when size and cost matter. It fits assistants, reasoning-heavy tasks, and evaluations where you want modern behavior without large‑model overhead.

# Chat history in OpenAI-style role/content format.
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant",
     "content": "Sure! Here are some ways to eat bananas and dragonfruits together."},
    {"role": "user", "content": "What about solving the equation 2x + 3 = 7?"},
]

# Assumes an async caller and an already-initialized `norman` client.
response = await norman.invoke(
    {
        # Model identifier assumed to match this page's model; replace with
        # the exact id used in your deployment.
        "model_name": "phi-3-mini-4k-instruct",
        "inputs": [
            {
                "display_title": "Prompt",
                "data": messages
            }
        ]
    }
)
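
Because the prompt is just an OpenAI‑style list of role/content messages, the same `messages` object can be checked locally before a call. Below is a minimal pre‑flight sketch that counts prompt tokens against the 4,096‑token window using the model's own tokenizer; the Hugging Face model id is an assumption about where the public weights live.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# apply_chat_template tokenizes by default, so the length is the prompt's token count.
prompt_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(f"Prompt uses {len(prompt_ids)} of 4096 context tokens")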