SmartRealtimeAdapter

Wisej.AI.Adapters.SmartRealtimeAdapter

Namespace: Wisej.AI.Adapters

Assembly: Wisej.AI (3.5.0.0)

Represents a smart adapter that provides real-time data processing capabilities for use with the OpenAIEndpointRealtime endpoint.

public class SmartRealtimeAdapter : SmartAdapter

The SmartRealtimeAdapter class extends the SmartAdapter base class to enable real-time data handling and integration with OpenAI's real-time endpoint. This adapter is designed to facilitate seamless communication and data exchange in scenarios where immediate processing and response are required.
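
A minimal usage sketch, assuming the application is already configured to use the OpenAIEndpointRealtime endpoint (that configuration is not covered on this page); only members documented below are used:

```csharp
using Wisej.AI.Adapters;

// Create the adapter; the constructor subscribes it to the
// application's refresh event (see Constructors below).
var adapter = new SmartRealtimeAdapter();

// Begin capturing and streaming the user's voice.
adapter.StartListening();

// ...later, stop capturing when voice input is no longer needed.
adapter.StopListening();
```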

Constructors

SmartRealtimeAdapter()

Initializes a new instance of the SmartRealtimeAdapter class.

This constructor sets up the SmartRealtimeAdapter by initializing the internal prompt and subscribing to the application's refresh event. The prompt is initialized with a default label, and the adapter will respond to application-wide refresh events for real-time updates.

Properties

Enabled

Boolean: Gets or sets a value indicating whether the component accepts voice input from the user. (Default: True)

When set to true (the default value), the component is enabled to receive and process voice input from the user. Setting this property to false disables voice input functionality for the component.

Listening

Boolean: Gets a value indicating whether the component is currently listening for the user's voice.

The Listening property reflects the current listening state of the component: it is true while the component is actively capturing the user's voice, typically between calls to StartListening and StopListening.

Muted

Boolean: Gets or sets a value indicating whether the component is muted. (Default: False)

When set to true, the component is muted and does not produce sound or notifications. The default value is false.

TranscriptionEnabled

Boolean: Gets or sets a value indicating whether transcription functionality is enabled. (Default: False)

When set to true, transcription features are activated. The default value is false.

Voice

String: Gets or sets the voice of the model. (Default: "Alloy")

Supported voices include: Alloy (default), Ash, Ballad, Coral, Echo, Sage, Shimmer, Verse. New voices can be used as they become available.
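
A short configuration sketch using the properties documented in this section; the defaults shown in the comments are the ones listed above, and the selected voice is one of the supported values:

```csharp
using Wisej.AI.Adapters;

var adapter = new SmartRealtimeAdapter
{
    Enabled = true,                // accept voice input (default: true)
    Muted = false,                 // produce sound (default: false)
    TranscriptionEnabled = true,   // also raise TranscriptionReceived
    Voice = "Coral"                // any supported voice; default is "Alloy"
};
```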

Methods

CreateSession<T>(systemPrompt)

| Parameter | Type | Description |
| --- | --- | --- |
| T |  |  |
| systemPrompt |  |  |

Returns: T.

OnAnswerReceived(e)

Raises the AnswerReceived event.

| Parameter | Type | Description |
| --- | --- | --- |
| e | AnswerReceivedEventArgs | The AnswerReceivedEventArgs instance containing the event data. |

This method invokes the AnswerReceived event, passing the specified event arguments to all registered event handlers.

OnEnabledChanged(e)

Raises the EnabledChanged event.

| Parameter | Type | Description |
| --- | --- | --- |
| e | EventArgs | An EventArgs instance that contains the event data. |

This method is called to trigger the EnabledChanged event. Derived classes can override this method to provide additional logic when the event is raised.

OnListeningChanged(e)

Raises the ListeningChanged event.

| Parameter | Type | Description |
| --- | --- | --- |
| e | EventArgs | An EventArgs instance containing the event data. |

This method is called to notify subscribers that the listening state has changed. Derived classes can override this method to provide custom event data or additional logic.

OnMutedChanged(e)

Raises the MutedChanged event.

| Parameter | Type | Description |
| --- | --- | --- |
| e | EventArgs | An EventArgs instance containing the event data. |

This method is called to notify subscribers that the muted state has changed. Derived classes can override this method to provide custom event data or additional logic.

OnTranscriptionReceived(e)

Raises the TranscriptionReceived event.

| Parameter | Type | Description |
| --- | --- | --- |
| e | TranscriptionReceivedEventArgs | The TranscriptionReceivedEventArgs instance containing the event data. |

This protected virtual method allows derived classes to trigger the TranscriptionReceived event. Override this method to provide custom event invocation logic.
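
Because these On* methods can be overridden (OnTranscriptionReceived is documented as protected virtual, and the other On* remarks state that derived classes can override them), a derived adapter can add custom logic before the events are raised. A sketch under those assumptions; the exact access modifiers and the namespaces of the event-args types are assumptions:

```csharp
using System;
using Wisej.AI.Adapters;

public class LoggingRealtimeAdapter : SmartRealtimeAdapter
{
    protected override void OnListeningChanged(EventArgs e)
    {
        // Custom logic runs before subscribers are notified.
        Console.WriteLine($"Listening: {this.Listening}");

        // Call the base implementation to raise the event.
        base.OnListeningChanged(e);
    }

    protected override void OnAnswerReceived(AnswerReceivedEventArgs e)
    {
        // e carries the answer details; forward to base so the
        // AnswerReceived event is still raised for subscribers.
        base.OnAnswerReceived(e);
    }
}
```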

Reset()

Resets the current conversation with the OpenAI Realtime API endpoint.

This method clears the ongoing conversation state, allowing a new conversation to begin with the OpenAI endpoint. It is useful when you want to discard the current context and start fresh.

StartListening()

Starts capturing and processing user voice input through the OpenAI Realtime API.

Call this method to begin listening for user voice input. The component will capture audio from the input device, transmit it to the OpenAI Realtime API, and process the response in real time. This enables interactive voice-driven features within your application.

StopListening()

Stops capturing and processing user voice input.

Call this method to stop listening for user voice input. The component will cease capturing audio from the input device and terminate any ongoing communication with the OpenAI Realtime API. This is useful for conserving resources or when voice input is no longer required.
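
A small control sketch combining Listening, StartListening, StopListening, and Reset, all documented above; `adapter` is assumed to be an existing SmartRealtimeAdapter instance:

```csharp
// Toggle voice capture based on the current listening state.
void ToggleListening(SmartRealtimeAdapter adapter)
{
    if (adapter.Listening)
        adapter.StopListening();
    else
        adapter.StartListening();
}

// Discard the current conversation context and start fresh.
void StartNewConversation(SmartRealtimeAdapter adapter)
{
    adapter.StopListening();
    adapter.Reset();
    adapter.StartListening();
}
```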

Events

AnswerReceived

AnswerReceivedEventHandler: Occurs when an answer is received.

Subscribe to this event to handle actions when an answer is received. The event provides AnswerReceivedEventArgs containing details about the received answer.

EnabledChanged

EventHandler: Occurs when the enabled state of the object changes.

Subscribe to this event to be notified whenever the enabled state of the object is modified. The event provides standard EventArgs data.

ListeningChanged

EventHandler: Occurs when the listening state of the component changes.

Subscribe to this event to be notified when the listening state is updated, such as when the component starts or stops listening for input.

MutedChanged

EventHandler: Occurs when the muted state of the component changes.

Subscribe to this event to be notified when the muted state is updated, such as when the component is muted or unmuted.

TranscriptionReceived

TranscriptionReceivedEventHandler: Occurs when a transcription is received.

Subscribe to this event to handle actions when a new transcription is received. The event provides TranscriptionReceivedEventArgs containing details about the transcription.
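
A subscription sketch for the events above; `adapter` is assumed to be an existing SmartRealtimeAdapter instance, and the handlers only acknowledge the events because the members of the event-args types are not documented on this page:

```csharp
adapter.AnswerReceived += (sender, e) =>
{
    // e is an AnswerReceivedEventArgs with details about the answer.
};

adapter.TranscriptionReceived += (sender, e) =>
{
    // Raised when a transcription arrives; see the TranscriptionEnabled property.
};

adapter.ListeningChanged += (sender, e) =>
{
    // Update UI state (e.g. a microphone indicator) from adapter.Listening.
};

adapter.MutedChanged += (sender, e) =>
{
    // React to mute/unmute changes via adapter.Muted.
};
```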

Implements

| Name | Description |
| --- | --- |
|  | Represents a provider that supplies tools. |
