System Prompts

System prompts provide consistent, task-specific instructions to the model, keeping it oriented toward the task at hand.

This is useful in batch inference jobs, where the task is typically uniform across all inputs.

Basic System Prompt Example

For example, let’s say you are reviewing customer support dialogues to check whether each one was handled properly.

You can provide a consistent system prompt to the model when you run the batch inference job, like so:

import materialized_intelligence as mi

dialogues = ... # (code to retrieve dialogues from a database)

json_schema = {
    "type": "object",
    "properties": {
        "was_handled_properly": {
            "type": "boolean"
        }
    }
}

system_prompt = """
You are overseeing customer support agents.
Your job is to review dialogues between customers and customer support agents, and ensure that they were handled properly.
You will be provided a dialogue, and you should respond with True if the dialogue was handled properly, and False otherwise.
"""

results = mi.infer(
    inputs=dialogues,
    system_prompt=system_prompt,
    json_schema=json_schema
)

print(results)
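
Assuming the results come back as a list of parsed JSON objects matching the schema above, one entry per input dialogue (an assumption to verify against the library's documentation for the exact return shape), you could then flag the dialogues that were not handled properly:

# Hypothetical post-processing sketch: assumes `results` is a list of dicts
# shaped like the JSON schema above, e.g. {"was_handled_properly": True}.
mishandled = [
    dialogue
    for dialogue, result in zip(dialogues, results)
    if not result["was_handled_properly"]
]

print(f"{len(mishandled)} of {len(results)} dialogues were flagged for review")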