) next to any chat-based prompt to get code snippets in TypeScript, Python, or cURL.
The generated code includes all the prompt configuration, including the model, messages, and any additional parameters you've set.
## Custom models
To configure custom models, see the [Custom models](/docs/guides/proxy#custom-models) section of the proxy docs.
Endpoint configurations, like custom models, are automatically picked up by the playground.
---
file: ./content/docs/guides/projects.mdx
meta: {
"title": "Projects",
"description": "Create and configure projects"
}
# Projects
A project is analogous to an AI feature in your application. Some customers create separate projects for development and production to help track workflows. Projects contain all [experiments](/docs/guides/evals), [logs](/docs/guides/logging), [datasets](/docs/guides/datasets) and [playgrounds](/docs/guides/playground) for the feature.
For example, a project might contain:
* An experiment that tests the performance of a new version of a chatbot
* A dataset of customer support conversations
* A prompt that guides the chatbot's responses
* A tool that helps the chatbot answer customer questions
* A scorer that evaluates the chatbot's responses
* Logs that capture the chatbot's interactions with customers
## Project configuration
Projects can also house configuration settings that are shared across the project.
### Tags
Braintrust supports tags that you can use throughout your project to curate logs, datasets, and even experiments. You can filter based on tags in the UI to track various kinds of data across your application and how they change over time. Tags can be created in the **Configuration** tab by selecting **Add tag**, entering a tag name, selecting a color, and adding an optional description.
Any headers you add to the configuration will be passed through in the request to the custom endpoint.
The values of the headers can also be templated using Mustache syntax.
Currently, the supported template variables are `{{email}}` and `{{model}}`, which are replaced with the email of the user that the Braintrust API key belongs to and the model name, respectively.
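As a rough illustration of how this substitution behaves, the sketch below renders the two supported variables into configured header values. The `renderHeaders` function and sample values are hypothetical, for illustration only; they are not part of the Braintrust SDK or proxy:

```typescript
// Substitute the supported template variables ({{email}} and {{model}})
// into configured header values. Illustrative sketch only — not the
// proxy's actual implementation.
function renderHeaders(
  headers: Record<string, string>,
  vars: { email: string; model: string },
): Record<string, string> {
  const rendered: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    rendered[name] = value
      .replace(/\{\{email\}\}/g, vars.email)
      .replace(/\{\{model\}\}/g, vars.model);
  }
  return rendered;
}

// Example: headers (hypothetical names) that tag requests with the
// caller's email and the requested model.
const configured = {
  "x-caller-email": "{{email}}",
  "x-target-model": "{{model}}",
};
const result = renderHeaders(configured, {
  email: "dev@example.com",
  model: "gpt-4o",
});
// result["x-caller-email"] === "dev@example.com"
// result["x-target-model"] === "gpt-4o"
```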
If the endpoint is non-streaming, set the `Endpoint supports streaming` flag to false. The proxy will convert the response to streaming format, allowing the model to work in the playground.
Each custom model must have a flavor (`chat` or `completion`) and a format (`openai`, `anthropic`, `google`, `window`, or `js`). Optionally, a model can have a boolean flag indicating whether it is multimodal, as well as an input cost and an output cost, which are used only to calculate and display estimated prices for experiment runs.
#### Specifying an org
If you are part of multiple organizations, you can specify which organization to use by passing the `x-bt-org-name`
header in the SDK:
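For example, a minimal sketch of attaching this header to a direct API request. The `x-bt-org-name` header name comes from the docs above; the helper function and endpoint URL are illustrative assumptions, so consult the API reference for the exact routes your SDK version uses:

```typescript
// Build request headers that pin the request to a specific organization.
// buildHeaders is a hypothetical helper, not part of the Braintrust SDK.
function buildHeaders(apiKey: string, orgName: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "x-bt-org-name": orgName, // selects which organization handles the request
    "Content-Type": "application/json",
  };
}

// Usage (endpoint URL is an assumption; not executed here):
// await fetch("https://api.braintrust.dev/v1/project", {
//   headers: buildHeaders(process.env.BRAINTRUST_API_KEY!, "my-org"),
// });
```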