Prompt Design

Introduction

A conversation with Alice, much like one in ChatGPT, takes place in a chat window consisting of messages assigned to one of three roles:

  • System: An instruction that is not shown in the chat window. It defines the assistant's behavior and/or the active snippet, and we can change it in the settings.
  • User: These are messages that we send as users.
  • Assistant: These are messages generated by our assistant.
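The three roles above map directly onto the message list sent to chat-style model APIs. The sketch below is illustrative only (the message contents are made up); it shows the structure, not Alice's actual requests.

```python
# Minimal sketch of the three-role message structure used by
# chat-style APIs. Contents are illustrative examples.
messages = [
    # System: invisible instruction defining the assistant's behavior
    {"role": "system", "content": "You are Alice, a concise, helpful assistant."},
    # User: a message we send
    {"role": "user", "content": "Summarize this article in one sentence."},
    # Assistant: a previously generated reply, kept as conversation history
    {"role": "assistant", "content": "Sure - please paste the article."},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant']
```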

This structure directly shapes how you converse with Alice:

  • The content of the generated response is influenced by the entire previous conversation, which includes not only the model's statements but also the system instruction and our messages. This means we can steer the model's behavior in the desired direction by properly managing the entire conversation.
  • Content generation is billed based on the number of tokens processed and generated, so both the content we send to the model and the content we receive from it count. In longer conversations the costs become noticeable, because each subsequent message requires reprocessing the entire conversation so far.
  • The system message defines the model's behavior, but its content can eventually be "overwritten" by further messages that are part of the conversation. Thus, the longer the conversation, the greater the risk that our assistant will stop adhering to the initial instructions.
  • Activating a snippet expands the system message with an additional instruction. For OpenAI and Groq models, the snippet instruction is added above the last user message, and for Anthropic models, it is attached to the current system message.
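The snippet placement described in the last point can be sketched as follows. This is a hypothetical illustration of the described behavior, not Alice's actual implementation; the function and variable names are made up.

```python
# Hypothetical sketch: merging a snippet instruction into a request.
# Per the description above: for Anthropic the snippet is attached to
# the system message; for OpenAI/Groq it is inserted as an extra
# message above the last user message.
def apply_snippet(provider, system, history, snippet):
    if provider == "anthropic":
        # Snippet extends the system instruction itself
        return system + "\n\n" + snippet, list(history)
    # OpenAI / Groq: insert the snippet just above the last user message
    messages = list(history)
    last_user = max(i for i, m in enumerate(messages) if m["role"] == "user")
    messages.insert(last_user, {"role": "system", "content": snippet})
    return system, messages

history = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Translate this."},
]
sys_a, msgs_a = apply_snippet("anthropic", "You are Alice.", history, "Reply in French.")
sys_o, msgs_o = apply_snippet("openai", "You are Alice.", history, "Reply in French.")
```

Either way, the snippet ends up inside the context the model processes, which is why activating it changes the behavior of subsequent responses.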