external help file | Module Name | online version | schema
---|---|---|---
PSOpenAI-help.xml | PSOpenAI | | 2.0.0
Creates a model response.
Request-Response
[[-Message] <String>]
[-Role <String>]
[-Model <String>]
[-SystemMessage <String[]>]
[-DeveloperMessage <String[]>]
[-Instructions <String>]
[-PreviousResponseId <String>]
[-Images <String[]>]
[-ImageDetail <String>]
[-Files <String[]>]
[-ToolChoice <Object>]
[-ParallelToolCalls <Boolean>]
[-Functions <IDictionary[]>]
[-UseFileSearch]
[-FileSearchVectorStoreIds <String[]>]
[-FileSearchMaxNumberOfResults <Int32>]
[-FileSearchRanker <String>]
[-FileSearchScoreThreshold <Double>]
[-UseWebSearch]
[-WebSearchType <String>]
[-WebSearchContextSize <String>]
[-WebSearchUserLocationCity <String>]
[-WebSearchUserLocationCountry <String>]
[-WebSearchUserLocationRegion <String>]
[-WebSearchUserLocationTimeZone <String>]
[-UseComputerUse]
[-ComputerUseEnvironment <String>]
[-ComputerUseDisplayHeight <Int32>]
[-ComputerUseDisplayWidth <Int32>]
[-Include <String[]>]
[-Truncation <String>]
[-Temperature <Double>]
[-TopP <Double>]
[-Store <Boolean>]
[-Stream]
[-StreamOutputType <String>]
[-ReasoningEffort <String>]
[-ReasoningSummary <String>]
[-MetaData <IDictionary>]
[-MaxOutputTokens <Int32>]
[-OutputType <Object>]
[-OutputRawResponse]
[-JsonSchema <String>]
[-JsonSchemaName <String>]
[-JsonSchemaDescription <String>]
[-JsonSchemaStrict <Boolean>]
[-ServiceTier <String>]
[-User <String>]
[-Organization <String>]
[-AsBatch]
[-CustomBatchId <String>]
[-TimeoutSec <Int32>]
[-MaxRetryCount <Int32>]
[-ApiBase <Uri>]
[-ApiKey <SecureString>]
[-History <Object[]>]
[<CommonParameters>]
Creates a model response. Provide text or image inputs to generate text or JSON outputs.
PS C:\> Request-Response "How do I make sauerkraut?" -Model 'gpt-4o' | select output_text
Making sauerkraut is a simple process that involves fermenting cabbage. ...
PS C:\> Request-Response "What is this?" -Images 'C:\donut.png' -Model 'gpt-4o'
PS C:\> Request-Response "Summarize this document" -Images 'C:\recipient.pdf' -Model 'o3-mini'
PS C:\> Request-Response "Tell me a recent top 3 tech news." -UseWebSearch -Model 'gpt-4o'
PS C:\> Request-Response "Implement Zeller's congruence in PowerShell." -Stream | Write-Host -NoNewline
A text input to the model.
Type: String
Aliases: UserMessage, input
Required: False
Position: 0
The role of the message input. One of `user`, `system`, or `developer`. The default is `user`.
Type: String
Required: False
Position: Named
The name of the model to use.
The default value is `gpt-4o-mini`.
Type: String
Required: False
Position: Named
Accept pipeline input: True (ByPropertyName)
Default value: gpt-4o-mini
(Instead of this parameter, the use of the `-Instructions` parameter is recommended.)
Instructions that the model should follow.
Type: String[]
Required: False
Position: Named
(Instead of this parameter, the use of the `-Instructions` parameter is recommended.)
Instructions that the model should follow.
Type: String[]
Required: False
Position: Named
Inserts a system (or developer) message as the first item in the model's context.
Type: String
Required: False
Position: Named
The unique ID of the previous response to the model. Use this to create multi-turn conversations.
Type: String
Aliases: previous_response_id
Required: False
Position: Named
Accept pipeline input: True (ByPropertyName)
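For example, a multi-turn conversation can be chained by passing the previous response's ID back in. A minimal sketch, assuming the returned response object exposes its ID as an `id` property and that the first response was stored:

PS C:\> $First = Request-Response 'My name is Haru.' -Model 'gpt-4o-mini' -Store $true  # assumes $First.id holds the response ID
PS C:\> Request-Response 'What is my name?' -PreviousResponseId $First.id -Model 'gpt-4o-mini' | select output_text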
A list of images to pass to the model. You can specify local image files or remote URLs.
Type: String[]
Required: False
Position: Named
Controls how the model processes the image and generates its textual understanding. You can select from `Low` or `High`.
Type: String
Accepted values: auto, low, high
Required: False
Position: Named
Default value: auto
A file input to the model.
You can specify a list of local file paths or IDs of files to be uploaded.
Type: String[]
Required: False
Position: Named
How the model should select which tool (or tools) to use when generating a response.
Type: String
Aliases: tool_choice
Required: False
Position: Named
Whether to allow the model to run tool calls in parallel.
Type: Boolean
Aliases: parallel_tool_calls
Required: False
Position: Named
A list of functions the model may call.
Type: IDictionary[]
Required: False
Position: Named
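A hedged sketch of passing a single function definition as a hashtable. It assumes the dictionary follows the OpenAI function-definition shape (`name`, `description`, and `parameters` as a JSON Schema); the exact keys this module expects may differ.

# Assumed function-definition shape (name/description/parameters); actual keys may differ
$GetWeather = @{
    name        = 'get_weather'
    description = 'Gets the current weather for a given city.'
    parameters  = @{
        type       = 'object'
        properties = @{ city = @{ type = 'string' } }
        required   = @('city')
    }
}
Request-Response "What's the weather in Tokyo?" -Model 'gpt-4o' -Functions $GetWeather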
Specify this switch to enable the File search built-in tool.
Type: SwitchParameter
Required: False
Position: Named
The IDs of the vector stores to search.
Type: String[]
Required: False
Position: Named
The maximum number of results to return.
Type: Int32
Required: False
Position: Named
The ranker to use for the file search.
Type: String
Required: False
Position: Named
The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
Type: Double
Required: False
Position: Named
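For example, the File search tool can be pointed at an existing vector store. A minimal sketch, where `vs_abc123` is a placeholder vector store ID:

PS C:\> Request-Response 'Find the refund policy in our product docs.' -Model 'gpt-4o' -UseFileSearch -FileSearchVectorStoreIds 'vs_abc123' -FileSearchMaxNumberOfResults 5 | select output_text  # 'vs_abc123' is a placeholder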
Specify this switch to enable the Web search built-in tool.
Type: SwitchParameter
Required: False
Position: Named
The type of the web search tool.
Type: String
Required: False
Position: Named
High level guidance for the amount of context window space to use for the search. One of `low`, `medium`, or `high`. `medium` is the default.
Type: String
Accepted values: low, medium, high
Required: False
Position: Named
Default value: medium
Free text input for the city of the user, e.g. `San Francisco`.
Type: String
Required: False
Position: Named
The two-letter ISO country code of the user, e.g. `US`.
Type: String
Required: False
Position: Named
Free text input for the region of the user, e.g. `California`.
Type: String
Required: False
Position: Named
The IANA timezone of the user, e.g. `America/Los_Angeles`.
Type: String
Required: False
Position: Named
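For example, the Web search tool can be combined with an approximate user location to bias results. A minimal sketch using the placeholder location values from the descriptions above:

PS C:\> Request-Response 'What are some well-reviewed coffee shops near me?' -Model 'gpt-4o' -UseWebSearch -WebSearchContextSize 'medium' -WebSearchUserLocationCity 'San Francisco' -WebSearchUserLocationRegion 'California' -WebSearchUserLocationCountry 'US' -WebSearchUserLocationTimeZone 'America/Los_Angeles'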
Specify this switch to enable the Computer-use built-in tool.
Type: SwitchParameter
Required: False
Position: Named
The type of computer environment to control.
Possible values: `browser`, `mac`, `windows`, `ubuntu`.
Type: String
Required: False
Position: Named
The height of the computer display.
Type: Int32
Required: False
Position: Named
The width of the computer display.
Type: Int32
Required: False
Position: Named
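A hedged sketch of enabling the Computer-use tool. It assumes a computer-use capable model such as `computer-use-preview` and that the computer actions returned in the response are handled by the caller:

PS C:\> Request-Response 'Open the browser and search for the weather in Osaka.' -Model 'computer-use-preview' -UseComputerUse -ComputerUseEnvironment 'browser' -ComputerUseDisplayWidth 1024 -ComputerUseDisplayHeight 768 -Truncation 'auto'  # model name and truncation setting are assumptions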
Specify additional output data to include in the model response.
Type: String[]
Required: False
Position: Named
The truncation strategy to use for the model response. `disabled` (default) or `auto`.
Type: String
Required: False
Position: Named
Default value: disabled
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random.
Type: Double
Required: False
Position: Named
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Type: Double
Aliases: top_p
Required: False
Position: Named
Whether to store the generated model response for later retrieval via API. The default is `$true`.
Type: Boolean
Required: False
Position: Named
Default value: True
If set, the model response data will be streamed to the client.
Type: SwitchParameter
Required: False
Position: Named
Specifies the format of this function's output. This parameter is only valid for stream output.
- `text`: Outputs only the text deltas that the model generates. (Default)
- `object`: Outputs all events that the API responds with.
Type: String
Accepted values: text, object
Required: False
Position: Named
Default value: text
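For example, the raw streaming events can be inspected by switching the output type to `object`. A minimal sketch, assuming each streamed event object exposes a `type` property:

PS C:\> Request-Response 'Count from 1 to 5.' -Model 'gpt-4o-mini' -Stream -StreamOutputType 'object' | ForEach-Object { $_.type }  # assumes events expose a .type property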
(o-series models only) Constrains effort on reasoning for reasoning models. Currently supported values are `low`, `medium`, and `high`.
Type: String
Required: False
Position: Named
A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of `auto`, `concise`, or `detailed`.
Type: String
Required: False
Position: Named
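For example, reasoning effort and summary settings can be combined on a reasoning model. A minimal sketch, assuming an o-series model such as `o3-mini` and that the chosen model supports reasoning summaries:

PS C:\> Request-Response "Prove that the square root of 2 is irrational." -Model 'o3-mini' -ReasoningEffort 'high' -ReasoningSummary 'auto' | select output_text  # summary support depends on the model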
Developer-defined tags and values used for filtering completions in the dashboard.
Type: IDictionary
Required: False
Position: Named
An upper bound for the number of tokens that can be generated for a response.
Type: Int32
Aliases: max_output_tokens
Required: False
Position: Named
An object specifying the format that the model must output.
- `text`: Default response format. Used to generate text responses.
- `json_schema`: Enables Structured Outputs.
- `json_object`: Enables the older JSON mode. (Not recommended)
Type: Object
Required: False
Position: Named
If this switch is specified, the output of this function will be the raw response value from the API. (Normally a JSON formatted string.)
Type: SwitchParameter
Required: False
Position: Named
The schema for the response format, described as a JSON Schema object.
Type: String
Required: False
Position: Named
The name of the response format.
Type: String
Required: False
Position: Named
A description of what the response format is for, used by the model to determine how to respond in the format.
Type: String
Required: False
Position: Named
Whether to enable strict schema adherence when generating the output.
Type: Boolean
Required: False
Position: Named
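For example, Structured Outputs can be requested by supplying a JSON Schema together with `-OutputType 'json_schema'`. A minimal sketch; the schema, its name, and the prompt are illustrative only:

# Illustrative schema for the expected structured output
$Schema = @'
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "year": { "type": "integer" }
  },
  "required": ["name", "year"],
  "additionalProperties": false
}
'@
Request-Response 'When was PowerShell first released?' -Model 'gpt-4o' -OutputType 'json_schema' -JsonSchema $Schema -JsonSchemaName 'release_info' -JsonSchemaStrict $true | select output_text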
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service.
Type: String
Aliases: service_tier
Required: False
Position: Named
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Type: String
Required: False
Position: Named
Specifies the Organization ID used for an API request.
If not specified, it will try to use `$global:OPENAI_ORGANIZATION` or `$env:OPENAI_ORGANIZATION`.
Type: String
Aliases: OrgId
Required: False
Position: Named
If this switch is specified, this cmdlet returns an object for batch input instead of performing an API request to OpenAI. It is useful with the `Start-Batch` cmdlet.
Type: SwitchParameter
Required: False
Position: Named
Default value: False
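For example, several prompts can be converted to batch input objects and then submitted together. A hedged sketch, assuming the `Start-Batch` cmdlet accepts these batch input objects from the pipeline:

# Collect batch inputs without calling the API, then submit them as one batch
$BatchInputs = 'What is 2+2?', 'What is 3+3?' | ForEach-Object { Request-Response $_ -Model 'gpt-4o-mini' -AsBatch }
$BatchInputs | Start-Batch  # assumes Start-Batch takes these objects from the pipeline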
A unique id that will be used to match outputs to inputs of a batch. Must be unique for each request in a batch.
This parameter is valid only when the `-AsBatch` switch is used.
Type: String
Required: False
Position: Named
Specifies how long the request can be pending before it times out.
The default value is `0` (infinite).
Type: Int32
Required: False
Position: Named
Default value: 0
A number between `0` and `100`.
Specifies the maximum number of retries if the request fails.
The default value is `0` (no retry).
Note: Retries will only be performed if the request fails with a `429 (Rate limit reached)` or `5xx (Server side errors)` error. Retries are not performed for other errors (e.g., authentication failure).
Type: Int32
Required: False
Position: Named
Default value: 0
Specifies an API endpoint URL such as: `https://your-api-endpoint.test/v1`.
If not specified, it will use `https://api.openai.com/v1`.
Type: System.Uri
Required: False
Position: Named
Default value: https://api.openai.com/v1
Specifies the API key for authentication.
The type of data should be `[string]` or `[securestring]`.
If not specified, it will try to use `$global:OPENAI_API_KEY` or `$env:OPENAI_API_KEY`.
Type: Object
Required: False
Position: Named
An object for keeping the conversation history.
Type: Object[]
Required: False
Position: Named
Accept pipeline input: True (ByPropertyName)
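A hedged sketch of continuing a conversation by piping the previous result back in. It assumes the output object carries the conversation context in a `History` property, which this parameter binds by property name:

PS C:\> $First = Request-Response 'Hello, I am planning a trip to Kyoto in autumn.' -Model 'gpt-4o-mini'
PS C:\> $First | Request-Response 'Which city did I say I was visiting?' -Model 'gpt-4o-mini'  # assumes $First exposes a History property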