AI Service Configuration Preferences
Various service-related connection settings can be configured in this preferences page. It is divided into two sections: the top Connections section is for those who want to use an AI service provider of their choice, while the bottom AI Positron Service section is for those who want to use the built-in default service.
Connections
The connections table allows you to configure connections to various AI service providers. Click the New button under the table to configure a new connection or the Edit button to edit an existing connection. You can also use Delete to remove a connection or Move Up/Move Down to reorder a selected connection.
When adding or editing a connection, the resulting configuration dialog box contains the following options:
- Name
- Specifies the name of the connection.
- Connection ID
- Displays the automatically generated ID for the connection.
- Connector type
- Specifies the connector type. The default options are: OpenAI, Microsoft Azure OpenAI, Claude, AWS Bedrock, Google Gemini, or xAI Grok. Additional add-ons are also available to enable connections to the Google Cloud's Vertex AI platform or a custom AI service.
- Connection details
- The remaining options depend on the chosen Connector type. See the details for each type below.
Tip: You can use editor variables of the form ${env(ENV_NAME)} in all configuration and header parameter values.
Connector Types:
- OpenAI
- If OpenAI is chosen as the connector type, the Connection
details section contains the following settings:
- Base URL
- The web address of the OpenAI service. By default: https://api.openai.com.
- API key
- The OpenAI API key necessary to work with the connector. Note: This option does not get saved in the Project-level options.
- Organization ID
- If you belong to multiple organizations, you can specify which organization is used for an API request. Usage from these API requests will count as usage for the specified organization.
- Default model
- The default model to be used for the chat view and for actions that do not explicitly specify a model.
- Enable text moderation
- This setting applies moderation (checks whether content complies with OpenAI's usage
policies) to both the input text sent to the AI service and the response received from
the AI service. It is enabled by default.
Tip: By default, when executing an action using the OpenAI connector, three requests are made:
- A moderation on input content request to configured_web_address/v1/moderations.
- A completion request to configured_web_address/v1/chat/completions.
- A moderation on content returned by AI to configured_web_address/v1/moderations.
If your AI service does not require moderation (for example, moderation is already performed by the chat/completions endpoint), you can disable it by unchecking this checkbox.
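The three-request sequence described in the tip above can be sketched as follows. This is an illustrative outline only: the base URL, model name, and helper function are placeholders for the example, not the add-on's actual code, and real calls would also carry the API key in an Authorization header.

```python
# Illustrative sketch of the three requests made per action.
BASE_URL = "https://api.openai.com"
MODEL = "gpt-4o-mini"  # hypothetical model name, for the example only

def plan_requests(user_input):
    """Return the ordered (endpoint, payload) pairs for one action."""
    return [
        # 1. Moderation on the input content.
        (f"{BASE_URL}/v1/moderations", {"input": user_input}),
        # 2. The completion request itself.
        (f"{BASE_URL}/v1/chat/completions",
         {"model": MODEL,
          "messages": [{"role": "user", "content": user_input}]}),
        # 3. Moderation on the content returned by the AI
        #    (the real input here is the response from step 2).
        (f"{BASE_URL}/v1/moderations", {"input": "<AI response goes here>"}),
    ]

for url, _payload in plan_requests("Rewrite this paragraph."):
    print(url)
```

Disabling text moderation removes requests 1 and 3, leaving only the chat/completions call.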
- Enable speech-to-text
- This setting allows for speech-to-prompt transcribing using the AI engine.
If enabled, the
Start recording
prompt using the microphone button should be displayed under the
chat box (in the AI Positron view). - Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers that are specific for the AI requests.
Tip: If the service uses Bearer Authentication, you can specify the key in the Key text field. If another authentication method is used, the Key field can be left empty, and the Extra Headers table can be used to set the authentication info on the request header. Note that editor variables can be used in this field, so you can store your key in an environment variable and reference it in this table with a value like ${env(AI_SERVICE_KEY)} to access the pre-set value of that environment variable.
Notes:
- You can use your own fine-tuned OpenAI models.
- The OpenAI connector might work with other AI engines that use the OpenAI APIs (like Grok or DeepSeek).
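As a sketch of how a ${env(AI_SERVICE_KEY)} value in the Extra Headers table ends up on the request, the snippet below expands the editor-variable syntax from an environment mapping and builds the Bearer header. This is an illustration of the mechanism, not the add-on's actual implementation, and AI_SERVICE_KEY is just an example variable name.

```python
import os
import re

# Matches the ${env(NAME)} editor-variable syntax described above.
_ENV_REF = re.compile(r"\$\{env\(([A-Za-z_][A-Za-z0-9_]*)\)\}")

def resolve_editor_variables(value, env=os.environ):
    """Expand each ${env(NAME)} reference; missing variables become ""."""
    return _ENV_REF.sub(lambda m: env.get(m.group(1), ""), value)

def build_extra_headers(env=os.environ):
    """Mirror an Extra Headers row whose value uses ${env(AI_SERVICE_KEY)}."""
    return {"Authorization":
            resolve_editor_variables("Bearer ${env(AI_SERVICE_KEY)}", env)}

print(build_extra_headers({"AI_SERVICE_KEY": "sk-demo"}))
```

The same substitution applies to any configuration or header parameter value that references ${env(...)}.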
- Microsoft Azure OpenAI
- If Microsoft Azure OpenAI is chosen as the connector type, the
Connection details section contains the following settings:
- Base URL
- The web address where the connector service is located. This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. For example: https://your-company-name.openai.azure.com/.
- Deployment
- The deployment name that was chosen when the model was deployed in Microsoft Azure.
- API key
- The Microsoft Azure OpenAI Service key necessary to work with the connector. If an API key is not provided, the application will attempt to authenticate with Microsoft Entra ID by using one of the supported identity-based methods.
- Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers that are specific for the AI requests.
- Speech service region
- If a speech service is configured in the MS Azure account, you can use this setting to set the region of the server (for example: eastus). As an alternative, you can set a SPEECH_REGION environment variable. Once the speech service is configured, the Start recording prompt using the microphone button should be displayed under the chat box (in the AI Positron view). This button allows for speech-to-prompt transcribing using the AI engine.
- Speech service key
- If a speech service is configured in the MS Azure account, you can use this setting to set the key to be used with the service. As an alternative, you can set a SPEECH_KEY environment variable.
Note: You can use your own fine-tuned Microsoft Azure OpenAI models.
Identity-based Authentication for Microsoft Azure:
The Microsoft Azure OpenAI connector supports the following identity-based authentication flows:
- OAuth 2.0 Authorization Code Flow (User Sign-in)
-
This method enables interactive user sign-in via a browser. It is recommended for desktop applications used by individual users with personal or work accounts.
When a user triggers an AI action and is not authenticated, the application displays a dialog box prompting them to sign in using their Microsoft account. Upon successful login, the application retrieves an access token and continues operation.
Required Environment Variables to configure authentication flow:
- AZURE_OAUTH_TENANT_ID
- Microsoft Entra (AAD) Tenant ID.
- AZURE_OAUTH_CLIENT_ID
- Application (client) ID.
- AZURE_OAUTH_REDIRECT_URI
- Redirect URI configured in the Entra app. This must be a localhost URL (for example: http://localhost:8085/callback).
Note: You must register an application in Microsoft Entra ID that includes the specified redirect URI. The signed-in user must also be assigned a role that allows access to the Azure OpenAI service (e.g. Cognitive Services OpenAI User).
- Service Principal Authentication
-
Service principal authentication is ideal for non-interactive scenarios, such as background processes or automation scripts.
Choose one of the options below if you need to authenticate without prompting the user to sign in.
Before you begin, create a service principal and assign a role to it that allows access to the Azure OpenAI service (e.g. the Cognitive Services OpenAI User role).
- Client Secret
Use this method for simpler setups in trusted environments. Be aware that on desktop systems, secrets stored in environment variables or config files are not encrypted by default and may be exposed to other local users or processes.
Required Environment Variables:
- AZURE_CLIENT_ID
- ID of a Microsoft Entra application.
- AZURE_TENANT_ID
- ID of the application's Microsoft Entra tenant.
- AZURE_CLIENT_SECRET
- One of the application's client secrets.
- Client Certificate
Use this method in high-security environments where certificate-based authentication is preferred over storing client secrets.
Be aware that certificate files, like other secrets, must be protected properly—especially on desktop systems where local users may gain access to file contents if not secured appropriately.
Required Environment Variables:
- AZURE_CLIENT_ID
- ID of a Microsoft Entra application.
- AZURE_TENANT_ID
- ID of the application's Microsoft Entra tenant.
- AZURE_CLIENT_CERTIFICATE_PATH
- Path to the PEM or PFX certificate file.
- AZURE_CLIENT_CERTIFICATE_PASSWORD
- Password for the certificate file (if any).
-
The connector will automatically detect and use the correct authentication flow based on which environment variables you have configured.
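As a rough illustration of that detection logic, the selection could look like the sketch below. The precedence order here is an assumption for the example; the connector's actual ordering may differ.

```python
import os

def detect_azure_auth_flow(env=os.environ):
    """Pick an identity-based flow from the configured environment variables.

    Assumed precedence (illustrative only): service-principal flows first,
    then the interactive OAuth authorization-code flow.
    """
    if env.get("AZURE_CLIENT_ID") and env.get("AZURE_TENANT_ID"):
        if env.get("AZURE_CLIENT_SECRET"):
            return "service-principal/client-secret"
        if env.get("AZURE_CLIENT_CERTIFICATE_PATH"):
            return "service-principal/client-certificate"
    if env.get("AZURE_OAUTH_TENANT_ID") and env.get("AZURE_OAUTH_CLIENT_ID"):
        return "oauth-authorization-code"
    return "none-configured"

print(detect_azure_auth_flow({"AZURE_OAUTH_TENANT_ID": "t",
                              "AZURE_OAUTH_CLIENT_ID": "c"}))
```
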
Note: The application should be restarted after each environment-variable change for the changes to take effect.
- Claude
-
If Claude is chosen as the connector type, the Connection details section contains the following settings:
- Base URL
- The web address where the connector service is located. By default, it is https://api.anthropic.com/.
- API key
- The Claude API key necessary to work with the connector.
- Default model
- Use this drop-down to select the Claude default model to be used for the chat view and for actions that do not explicitly specify a model.
- Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers that are specific for the AI requests.
- AWS Bedrock
- If AWS Bedrock is chosen as the connector type, the
Connection details section contains the following settings:
- AWS Region
- The AWS region where the server is hosted (e.g., us-east-1). When you specify a region, the Base URL field is automatically generated based on that selection.
- Base URL
- The web address where the connector service is located. When specifying the AWS Region, this field is automatically generated. When both the AWS Region and Base URL are specified, AI Positron includes the region in the URL.
- AWS Bedrock API key
- The API key required for authenticating. For information about generating the API key, see Amazon Bedrock Documentation: Generate API Keys.
- Default model
- The default model to be used for the chat view and for actions that do not explicitly specify a model. For a list of supported models in AWS Bedrock, see Amazon Bedrock Documentation: Supported Foundation Models.
- Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers that are specific for the AI requests.
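The region-to-URL generation described for the Base URL field above can be sketched like this. The bedrock-runtime endpoint pattern follows the usual AWS convention, but treat the exact format as an assumption and verify it against the value the add-on generates for your region.

```python
def bedrock_base_url(region):
    """Derive a Bedrock runtime base URL from an AWS region name.

    Assumes the conventional AWS endpoint pattern
    https://bedrock-runtime.<region>.amazonaws.com.
    """
    return f"https://bedrock-runtime.{region}.amazonaws.com"

# Example: the region from the AWS Region field (e.g., us-east-1).
print(bedrock_base_url("us-east-1"))
```
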
- Google Gemini
- If Google Gemini is chosen as the connector type, the
Connection details section contains the following settings:
- Base URL
- The web address where the connector service is located. By default, it is https://generativelanguage.googleapis.com/.
- API Key
- The API key required for authenticating. It can be generated in Google AI Studio.
- Default model
- Use this drop-down to select the Google Gemini default model to be used for the chat view and for actions that do not explicitly specify a model.
- xAI Grok
- If xAI Grok is chosen as the connector type, the Connection
details section contains the following settings:
- Base URL
- The web address where the service is located. By default, it is https://api.x.ai.
- API Key
- The API key required to authenticate with the xAI Grok service.
- Default model
- Use this drop-down to select the xAI Grok default model to be used for the chat view and for actions that do not explicitly specify a model.
- Custom AI Service
- This connector is available by installing the Oxygen AI Positron Custom Connector Add-on. It allows a connection to a custom AI service that exposes a REST API, similar to OpenAI's chat-completion API. Unlike the built-in OpenAI connector, this add-on supports the OAuth Client Credentials Flow for authentication and offers more flexibility by letting you set query parameters.
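In the OAuth 2.0 Client Credentials Flow mentioned above, a client obtains an access token by POSTing its credentials to the provider's token endpoint. The sketch below builds the standard form body for that grant; the token endpoint URL is hypothetical, and the field names come from the OAuth 2.0 specification rather than this add-on's internals.

```python
from urllib.parse import urlencode

def build_client_credentials_request(token_url, client_id, client_secret,
                                     scope=None):
    """Build the standard OAuth 2.0 client-credentials token request.

    Returns the URL and the form-encoded body to POST; actually sending it
    (and reading access_token from the JSON response) is left out here.
    """
    form = {"grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret}
    if scope:
        form["scope"] = scope
    return token_url, urlencode(form)

url, body = build_client_credentials_request(
    "https://auth.example.com/oauth2/token",  # hypothetical token endpoint
    "my-client-id", "my-client-secret")
print(body)
```

The returned access token would then be attached to each AI request, typically as a Bearer Authorization header.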
- Vertex AI Service
- This connector is available by installing the Oxygen AI Positron Vertex AI Connector Add-on. It enables integration with Google Cloud's Vertex AI platform.
AI Positron Service
This section contains the following options for using the default AI Positron Service:
- Enable connections to the AI Positron Service
- If selected, the AI Positron add-on is allowed to access the AI Positron Service platform. If not selected, the AI Positron add-on is prohibited from connecting to and using models from the AI Positron Service platform.
- Address
- Displays the address of the service connection. Currently, there is only one public platform that provides this service.
- Default model
- The default model is used for the chat pane and for actions that do not explicitly specify a fixed model. Each chosen model consumes a certain number of credits per token. The gpt-4.1 model is used by default if no other model is chosen.