# openai/gpt-4o-mini

A low-latency, low-cost version of OpenAI's GPT-4o model.

## Cost

Community model (pricing estimated from hardware time).

## Input Parameters
| Name | Type | Description | Default | Constraints |
|---|---|---|---|---|
| frequency_penalty | number | Frequency penalty parameter; positive values penalize repeated tokens. | 0 | min: -2, max: 2 |
| image_input | array | List of images to send to the model. | — | — |
| max_completion_tokens | integer | Maximum number of completion tokens to generate. | 4096 | — |
| messages | array | A JSON string representing a list of messages, e.g. `[{"role": "user", "content": "Hello, how are you?"}]`. If provided, `prompt` and `system_prompt` are ignored. | — | — |
| presence_penalty | number | Presence penalty parameter; positive values penalize tokens that have already appeared, increasing the model's likelihood to discuss new topics. | 0 | min: -2, max: 2 |
| prompt | string | The prompt to send to the model. Do not use together with `messages`. | — | — |
| system_prompt | string | System prompt that sets the assistant's behavior. | — | — |
| temperature | number | Sampling temperature. | 1 | min: 0, max: 2 |
| top_p | number | Nucleus sampling parameter; the model considers only tokens within the top_p probability mass (0.1 means only tokens in the top 10% probability mass are considered). | 1 | min: 0, max: 1 |
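As a minimal sketch of how these parameters fit together, the following builds an input payload using the names, defaults, and constraints from the table above. Note that `messages` is a JSON *string*, not a raw list, and supplying it causes `prompt` and `system_prompt` to be ignored. The `replicate.run` call at the end assumes the official Replicate Python client and is shown commented out since it requires an API token.

```python
import json

# Chat history to send; when `messages` is provided, `prompt` and
# `system_prompt` are ignored by the model.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Hello, how are you?"},
]

# Input payload mirroring the parameter table above.
model_input = {
    "messages": json.dumps(messages),  # must be a JSON string, not a list
    "max_completion_tokens": 4096,     # default
    "temperature": 1,                  # min: 0, max: 2
    "top_p": 1,                        # min: 0, max: 1
    "frequency_penalty": 0,            # min: -2, max: 2
    "presence_penalty": 0,             # min: -2, max: 2
}

# With the Replicate Python client (requires REPLICATE_API_TOKEN):
#   import replicate
#   output = replicate.run("openai/gpt-4o-mini", input=model_input)
```

The numeric values shown are simply the table defaults; any of them can be adjusted within the listed constraints.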
86d7f12d34e3 · Updated: 2/26/2026 · 30.5M runs