Wisej.AI: Connecting Business Applications to AI.
Welcome to Wisej.AI. This guide is designed to help you seamlessly integrate AI-driven features into your Wisej.NET applications. Whether you are starting from scratch or enhancing existing projects, this documentation will help you maximize the potential of your applications with Wisej.AI.
Wisej.AI is an advanced system that enhances Wisej.NET components, including third-party widgets, by incorporating AI features to create "Smart" applications. It is designed to be comprehensive, cohesive, standalone, open, and flexible. Wisej.AI operates above infrastructure libraries such as Microsoft Semantic Kernel (SK) or LangChain .NET.
Wisej.AI offers a range of ready-made components to enhance your application with AI features, allowing you to concentrate on desired outcomes instead of complex implementations. As an open system, Wisej.AI supports integration with other libraries and functionalities, including those built with SK, giving you flexibility in your development choices.
It's quite simple: add the Wisej-3-AI or Wisej-4-AI NuGet package and drop the SmartHub component on any design surface. Then click "Create Endpoint" to pick your AI provider, and "Create Adapter" to pick the AI feature you would like to add.
For a comprehensive understanding of Large Language Models (LLMs) and what they can do to enhance your application, review the Concepts section of this documentation.

Step-by-step instructions to start using Wisej.AI.
Wisej.AI is distributed through two distinct NuGet packages: Wisej-4-AI.4.0.x.x.nupkg and Wisej-3-AI.3.5.x.x.nupkg. These packages include the necessary designer assemblies for both the .NET Framework and .NET Core, eliminating the need for separate designer packages.
The NuGet packages are not available on NuGet.org; instead, you will need to deploy them locally on your file system. To use these local NuGet packages, add their file path to Visual Studio's list of NuGet sources.
As an alternative you can simply add a NuGet.config file to your solution (at the same level as the .sln file) to include the configuration with the project. See nuget.config for details.
After you've created a Wisej.NET application or opened an existing one, and added the Wisej-AI NuGet packages, you're ready to start using AI in your apps!
The fastest way is to drop a SmartHub component (you'll find it in the toolbox under the Wisej.AI tab) onto a Page (or any open designer).
Once you have the hub (smartHub1), you have to select an endpoint: this is your AI provider, which can be a public service or your own AI server.
Selecting the endpoint first is important because it determines which adapters are compatible. Not all adapters work with every endpoint: for instance, some endpoints might support vision, while certain adapters require TTS or voice recognition.
Here are the initial steps:
Once everything is in place, the specific control or set of controls that have been connected to the SmartAdapter will have a new set of AI properties.
After adding a SmartHub component, you have two options to create an Endpoint. You can either click the quick actions arrow and select "Create Endpoint," or you can select the SmartHub component and click the "Create Endpoint" link located at the bottom of the property grid.
You will see the SmartEndpoint picker dialog:
Select the provider you want to use, verify that the default properties are set correctly, and click OK.
Wisej.AI will create the selected SmartEndpoint component and assign it to the Endpoint property of the SmartHub component. It is important to always check and make sure the SmartHub has an endpoint assigned.
All the designer dialogs are under development and will change before the release.
After the SmartHub is associated with an endpoint, it can filter which adapters can work with that specific AI provider. Click the quick actions arrow and select "Create Adapter", or click the link "Create Adapter" in the property grid.
The SmartAdapter Picker shows all the built-in adapters that can use the endpoint plus any custom adapter that you have created in your application. Please note the Documentation link provided in the description field of each adapter. This link directs you straight to the documentation for the adapter, ensuring you have easy access to the relevant materials.
Click OK to create the adapter component.
Once the SmartAdapter is created, you have to connect it to the components you want to augment. Some adapters can augment multiple controls, while others extend only one control at a time; in that case you can add the same adapter multiple times, if needed.
The screenshot above displays the SmartDataEntry adapter linked to the splitContainer1.Panel1 control. Once connected, the adapter will enhance all child controls, at every level, by integrating the new AI properties required for it to perform its functions effectively.
Most Endpoints, Vector Databases, and Web Search Engines require an API key for authentication. You can set the API key for each component that requires one in three ways:
Set the ApiKey property on the component
Save the API key in the ~/AI/ApiKeys.json file
Set the API key in an environment variable
To set up your Wisej.NET project properly, create a new folder named "AI" at the root level of your project. This folder will serve as an organized space for various AI features. Once the "AI" folder is in place, create a file named ApiKeys.json within it. This file lets you manage your API keys in one central location.
To ensure the security of your ApiKeys.json file when deploying with .NET Core (ASP.NET Core), you should prevent the server from allowing its download. A simple way to do this is by modifying the code in Startup.cs.
When deploying using .NET Framework, all json files are blocked in web.config.
You can store various types of keys in the ApiKeys.json file. Wisej.AI utilizes these keys for its Endpoints, Vector Databases, and Web Search Engines by identifying the service by its name without suffixes. For instance, if the service is named "BingWebService", Wisej.AI will use "BingWeb" to locate the key within the ApiKeys.json file.
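The suffix-stripping rule can be sketched as follows; this is an illustrative guess at the lookup logic, not Wisej.AI's actual implementation:

```csharp
// Illustrative sketch (not the actual Wisej.AI code): strip a known
// suffix from the service name to obtain the key name used in ApiKeys.json.
static string KeyName(string serviceName)
{
    foreach (var suffix in new[] { "Service", "Endpoint" })
        if (serviceName.EndsWith(suffix))
            return serviceName.Substring(0, serviceName.Length - suffix.Length);
    return serviceName;
}
// KeyName("BingWebService") returns "BingWeb"
```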
You can manage API keys securely by storing them in environment variables through your cloud provider. For example, Azure App Services lets you manage environment variables for each service efficiently.
The name of the environment variable is:
WISEJ_AI_APIKEY_ + Endpoint Name without the "Endpoint" suffix.
For the OpenAIEndpoint the api key environment variable is WISEJ_AI_APIKEY_OPENAI.
Environment variables override the settings in the ApiKeys.json file.
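For example, in a local shell (or through your cloud provider's application settings) you could set the OpenAI key like this; the value shown is a placeholder:

```shell
# Set the api key that the OpenAIEndpoint will pick up (placeholder value).
export WISEJ_AI_APIKEY_OPENAI="sk-placeholder"
```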
If you choose to store custom keys in ApiKeys.json or as environment variables, you can retrieve these keys using the ApiKeys class. Here's how you can do it:
In the example above, the method first attempts to read the environment variable named "KEY_MYCOOLSERVICE" with the second parameter (the name postfix) left empty. If this environment variable does not exist, it then checks for the key "MyCoolService" in the ApiKeys.json file.
If the final parameter, the envPrefix, is not specified, it defaults to "WISEJ_AI_APIKEY_" as the prefix for environment variables.
Start by creating a new Wisej.NET application with either a main page or a main form, and add a few fields and a button, like this:
Use the NuGet Package Manager to add Wisej-AI and Wisej-AI-Design to the project.
Add a new folder "AI" to the root of the project.
From the Toolbox, under the "Wisej.AI" tab, select SmartHub and drop it on the page being designed.
Now click the "Create Endpoint" link in the property grid or click the quick action arrow and select "Create Endpoint". See above.
Pick your AI provider. In this case we use OpenAI. Make sure you have an API key from OpenAI and put it in the ApiKey property.
Now create the SmartDataEntryAdapter (see above).
Leave all the default properties unchanged. Select the Page and you should see this in the properties panel.
Select the smartDataEntryAdapter1 component to assign it to the Adapter property.
Once the adapter is connected to the Page, all its child controls will gain new AI properties. In this case you will find the FieldName, FieldPrompt and FieldRectangle properties under the "AI Features" group.
For each field, from top down, set the FieldName property to "First Name", "Last Name", "Email Address", and "Summary". Then double-click the button and add this code to the button1_Click event handler. Notice that the handler is async.
Run the app. Then go to an email you received from someone, select the text including the signature and copy to the clipboard. Go back to the app, make sure to click somewhere on the page first or the browser will block clipboard access, and click the Smart Paste button.
This is already something! Now add a checkbox and set its text to "SPAM", but this time set the FieldPrompt to "Detect if the text is likely to be a SPAM email."
Run it again and try now.
You can also try selecting and copying an image to the clipboard. The SmartDataEntryAdapter supports just about anything.
Wisej.AI's core components—including endpoint, prompt, session, and hub—utilize the fluent markup syntax, enabling developers to seamlessly chain property settings and event handling. This streamlined approach enhances code readability and efficiency by allowing developers to configure components in a more intuitive and concise manner.
The SmartEndpoint serves as the bridge connecting the AI provider with the adapters.











```csharp
var endpoint = new OpenAIEndpoint()
    .Name("openAI")
    .Model("gpt-4o")
    .ApiKey("...");

var session = new SmartSession(endpoint)
    .UseTools(new WebSearchTools())
    .UseTools(new DocumentSearchTools())
    .OnStart(() => AlertBox.Show("Thinking..."))
    .OnConvertParameter(e => e.Value = Convert.ToString(e.Value));
```

ApiKeys.json:

```
{
  // Endpoints
  "OpenAI": "<api key>",
  "AzureAI": "<api key>",

  // Vector DB Services
  "Pinecone": "<api key>",

  // Web Search Engines
  "BingWeb": "<api key>"
}
```

Startup.cs:

```csharp
// app.UseFileServer();
app.UseWhen(
    cx => !cx.Request.Path.StartsWithSegments("/AI"),
    app => app.UseFileServer());
```

Retrieving a custom key:

```csharp
var apikey = ApiKeys.GetApiKey("MyCoolService", "", "KEY_");
```

The button1_Click event handler:

```csharp
private async void button1_Click(object sender, EventArgs e)
{
    await this.smartDataEntryAdapter1.FromClipboardAsync();
}
```

```vb
Private Async Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    Await Me.smartDataEntryAdapter1.FromClipboardAsync()
End Sub
```
The concept documentation and the API reference are a never-ending work in progress. We update it almost daily. Please check back if what you are looking for is not available.
We assume that you have a good working knowledge of the following tools and technologies:
C# or VB.NET
.NET in general
Visual Studio
JavaScript
Introduction to LLM Concepts in Wisej.AI
In this section, we will explore the foundational concepts of Large Language Models (LLMs) as they are utilized within Wisej.AI. This primer is designed to provide software developers with a comprehensive understanding of how LLMs work. We will cover the basic principles, architectural frameworks, and applications of LLMs, ensuring that you have the necessary background to effectively leverage these advanced models in your development projects through Wisej.AI and Wisej.NET.
Large Language Models are a type of artificial intelligence designed to understand, generate, and manipulate human language. These models are trained on vast datasets and use deep learning techniques to achieve natural language processing tasks with impressive accuracy and fluency. Some key attributes of LLMs include:
Contextual Understanding: LLMs comprehend the context of the information they process, enabling them to generate responses that are relevant and coherent within the given framework.
Adaptability: These models can be fine-tuned for a plethora of applications, ranging from content generation to automated customer support, and, in the case of Wisej.AI, augmenting user interface components and widgets.
Scalability: The architecture of LLMs allows them to handle concurrent requests efficiently, which is crucial for real-time interaction in browser-based applications.
LLMs (Large Language Models) are a component of Generative AI, which is built upon the foundational principles of traditional Machine Learning (ML) architecture.
Fundamentally, LLMs are straightforward to use. They operate by predicting the next token in a sequence and continue this process until they reach a specified stopping point or condition, such as the end of a sentence or a predefined token limit.
For example, if the prompt is "Hello, how are you?" the likelihood of the next token being "banana" is very low. In contrast, the probability of the sequence continuing with "Very," followed by "good," then ", thank," "you," "for," and "asking" is much higher.
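The prediction loop can be sketched with a toy probability table standing in for the neural network; this is only a conceptual illustration of greedy decoding, not how any real model is implemented:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class NextTokenDemo
{
    // Toy "model": maps the current context to candidate next tokens with probabilities.
    // A real model scores every token in its vocabulary at each step.
    static readonly Dictionary<string, (string Token, double P)[]> Model = new()
    {
        ["Hello, how are you?"] = new[] { ("Very", 0.90), ("banana", 0.0001) },
        ["Hello, how are you? Very"] = new[] { ("good", 0.95) },
    };

    static void Main()
    {
        var context = "Hello, how are you?";
        // Greedy decoding: repeatedly append the most likely next token
        // until no continuation is defined (the "stopping condition").
        while (Model.TryGetValue(context, out var candidates))
        {
            var next = candidates.OrderByDescending(c => c.P).First().Token;
            context += " " + next;
        }
        Console.WriteLine(context); // "Hello, how are you? Very good"
    }
}
```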
The complexity of the prompt that a model can handle effectively typically depends on the size of its Artificial Neural Network (ANN). When reviewing a model's description, you might encounter a number like 90B next to its name. This number represents the model's parameters, which are the weights of the ANN, expressed as single-precision floating-point numbers (float32). Consequently, a model with 90 billion parameters consists of 90 billion float32 numbers, resulting in an approximate size of 360 GB.
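The size estimate follows directly from the parameter count:

```csharp
// 90 billion float32 parameters at 4 bytes each:
long parameters = 90_000_000_000L;
long bytes = parameters * sizeof(float);   // 360,000,000,000 bytes
double gigabytes = bytes / 1_000_000_000.0; // 360 GB
```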
The "Hello" example is quite trivial. To better illustrate how Wisej.AI can leverage LLMs to deliver valuable functionality to applications—primarily without engaging in chat—consider the following simplified example:
The example above demonstrates how Wisej.AI dynamically generates a prompt by analyzing the code to identify the fields required by a .NET class. The agent message, which is the LLM's response, is formatted as valid JSON. This approach enables the system to read the response, parse it, and populate the appropriate fields in the application. This process is accomplished without engaging in chat and without handling "conversational" responses.
In other words, we are utilizing the LLM in a manner similar to how one would use a SQL database or a cloud service.
This is just the first crucial step in understanding the Wisej.AI architecture. Once this concept is clear, you can appreciate how Wisej.AI leverages this highly structured request-response approach to seamlessly integrate AI features into virtually any aspect of a Wisej.NET application.
Building on the concept of "harnessing" the AI model, Wisej.AI develops agents and super-agents, along with tools that enhance the model's capabilities. For instance, while a model is inherently limited to appending words (or tokens, to be precise) to a prompt, Wisej.AI constructs structured prompts with specific instructions for obtaining additional information. By parsing these responses and invoking code iteratively, Wisej.AI creates agents that deliver significantly more value to the system.
Tools are essentially functions that the LLM can "invoke" within its response; the reference documentation details how Wisej.AI defines tools.
The simplest way to understand tools is through an example: if you ask any model, "What time is it?" the model will typically respond with something like, "I'm sorry, but I don't have the capability to provide real-time information such as the current time."
If you ask ChatGPT "What time is it?" it will respond correctly, and that is because it utilized a tool. But if you query a bare model directly, it will respond that it doesn't know.
If we simulate the request-response cycle managed by the Wisej.AI internal agent and have provided the get_current_time tool to the AI, the query would look like this (the content is very simplified):
Now imagine equipping the LLM with a variety of tools: one for web searches, another for querying your company's database, another for scanning through documents, one for building charts, and yet another for sending emails, among others. These tools effectively expand the AI's capabilities. Consequently, when an application uses Wisej.AI to extract data from a PDF document, it can also fill in missing information, check for anomalies, send alerts via email, and correlate internal documents with public information—all through a single message to the AI provider via Wisej.AI.
LLMs cannot learn or retain information; they have no memory or state. Each request is independent and must include all the necessary information for the LLM to generate the correct subsequent tokens (the response) to the prompt.
If you've read that LLMs can process vast amounts of data in a single request, that's a misconception. In reality, the opposite is true. The combined size of requests and responses is quite limited. While the model itself is likely trained on a vast amount of information, its capacity to generate responses is constrained by what is known as the context window. This is the maximum number of tokens (for simplicity, you can think of a token as a word) that can be submitted to the ANN at once. It encompasses everything from the system prompt to the final question being submitted. You can think of it as a large string, similar to a lengthy SQL statement. A context window of 128,000 tokens is considered large with the current technology.
A common use case might involve a request to create a table listing all legal deadlines, along with summaries, from 8,000 documents in a folder. It is entirely impractical to concatenate the text from all these documents into a single prompt. Assuming an average of 20 pages per document and 400 words per page, you would end up with approximately 64 million words, which far exceeds the typical limit of 128K tokens.
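The arithmetic behind that claim, as a quick check:

```csharp
// 8,000 documents × 20 pages × 400 words ≈ 64 million words,
// roughly 500 times a 128K-token context window (treating a token as a word).
long totalWords = 8_000L * 20 * 400;   // 64,000,000
long windowsNeeded = totalWords / 128_000; // 500
```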
Indeed, just as an AI (or a person, for that matter) cannot feasibly read billions of web pages but can submit a query to a search engine to receive a list of relevant pages, RAG is used to provide pertinent context to the AI. It does this by extracting the few chunks of text that are relevant to a query from a large collection of data.
For instance, if we want to create a chart of the "5-year cumulative returns" from an 80-page PDF 10-K filing from Apple, we would need to extract and process the relevant data from the document, as it cannot be included in its entirety within the prompt.
We accomplish this by splitting the document into chunks and generating an embedding vector for each chunk. Then, we generate an embedding vector for the query (or a version of the query optimized by the LLM), select the top N chunks based on relevance, and submit only those chunks for processing.
Another example would be asking the AI to classify a line item in an invoice according to a list of codes in a chart of accounts stored in a database. Since we cannot submit the entire database, we need to extract the most relevant data to include with the prompt, providing the AI with sufficient information to complete the task accurately.
In essence, RAG (Retrieval-Augmented Generation) leverages traditional coding methods, which can process unlimited amounts of data, to extract a small, relevant subset of information for submission to the LLM. Utilizing embeddings is just one of the many techniques available to accomplish this task.
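The retrieval step can be sketched as follows; the method names and the tuple shape are illustrative, not the Wisej.AI API, and the embedding vectors are assumed to come from an embedding service elsewhere:

```csharp
using System;
using System.Linq;

static class RagSketch
{
    // Cosine similarity between two embedding vectors of equal length.
    static double Cosine(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    // Keep only the N chunks most similar to the query; only these
    // few chunks are concatenated into the prompt.
    public static string[] TopChunks(
        double[] queryVector, (string Text, double[] Vector)[] chunks, int n)
    {
        return chunks
            .OrderByDescending(c => Cosine(queryVector, c.Vector))
            .Take(n)
            .Select(c => c.Text)
            .ToArray();
    }
}
```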
Embeddings are vector representations of text that position the specific text in relation to other texts within a multidimensional space.
A vector representation is an array of floating-point numbers that indicates the text's position within the space, with the array's length corresponding to the number of dimensions. For instance, a vector with 1536 values places the text within a 1536-dimensional space.
This representation enables applications to efficiently filter large volumes of data by performing similarity searches, which involve calculating the distance between two vectors in the multidimensional space.
Before the introduction of embeddings, a developer would typically split the query into individual words, then iterate through each text chunk, perhaps counting how many keywords appeared in each. Additionally, they might have used traditional letter-swapping algorithms to account for similar words and possibly the relative positions of the keywords. This approach was highly inaccurate, as it only considered character matches and sequences, ignoring conceptual similarity. For instance, the character "5" and the word "five" would never match in such a search.
Using embeddings and various methods to calculate the distance between vectors is a straightforward mathematical operation. It allows applications to process vast amounts of data efficiently and quickly.
Wisej.AI offers a range of services and methods to generate embeddings, with options to store them in memory, on the file system, or in efficient vector databases such as Azure AI Search, Pinecone, or Chroma.
Embeddings stored in memory or the file system should be used solely for development purposes. For deployment, it is recommended to use a vector database for optimal performance and scalability.
All models have a limited context window, which is measured in tokens. Tokens are not the same as words, but for simplicity they can be treated as such. To gain a better understanding of tokens in the context of LLMs, you can experiment with OpenAI's online tokenizer.
When you type "Hello, how are you?" (which is 4 words, 3 spaces and 2 punctuations) you get 6 tokens. But if you type "Dearest, how are you?" you get 7 tokens.
There are various types of tokenizers, and different models may be trained using distinct tokenizers. On average, a token typically consists of about 3.5 characters.
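That average gives a useful back-of-the-envelope estimate when an exact tokenizer is not at hand; it is an approximation only, since real counts vary by tokenizer and model:

```csharp
// Rough token count from the ~3.5 characters-per-token average.
static int EstimateTokens(string text) =>
    (int)Math.Ceiling(text.Length / 3.5);

// EstimateTokens("Hello, how are you?") → 19 chars / 3.5 ≈ 6
```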
Wisej.AI includes the ITokenizerService and offers a default implementation based on OpenAI's tokenizer. This service can be accessed directly by the Wisej.NET application to count tokens, truncate strings, or split a string into tokens.
When a request to an LLM exceeds the maximum allowed tokens in the context window, the request will typically fail, resulting in an error. To resolve this, you may need to shorten the input or split it into multiple requests that fit within the context token limit.
Wisej.AI automatically manages content overflow before any errors occur by utilizing the SmartEndpoint.ContextWindow property as the token limit. When this limit is exceeded, the SmartSession object optimizes the history using various techniques to ensure smooth operation.
For a more detailed description of the automatic trimming employed by Wisej.AI, refer to the Usage Metrics section.
Reranking refers to the process of reordering a list of items, typically search results or recommendations, to improve their relevance or quality with respect to a specific criterion. This is often achieved by applying machine learning algorithms that re-evaluate and adjust the initial ranking, utilizing additional information or more sophisticated models.
When utilizing one of the built-in implementations of the IEmbeddingStorageService service, the order of relevance is initially determined by the similarity score of the embedding vectors. However, there may be instances where a more precise ordering is desired. Reranking is a common technique used to achieve this. It involves submitting the results of the vector search to a more advanced model, which then reorders the text chunks based on their relevancy to the query. This approach leverages the enhanced capabilities of the model to provide a more accurate order of results.
Wisej.AI offers the IRerankingService and an overridable method named RerankAsync within the DocumentTools and DocumentSearchTools classes. The default implementation uses the IRerankingService to reorder the RAG documents.
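A custom reranking policy might look like the hypothetical sketch below. The exact signature of RerankAsync is an assumption here (check the DocumentSearchTools API reference for the real one); the override first defers to the default IRerankingService-based ordering, then pins chunks that literally contain the query:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical sketch, not the actual Wisej.AI signature.
public class MyDocumentSearchTools : DocumentSearchTools
{
    protected override async Task<string[]> RerankAsync(string query, string[] chunks)
    {
        // Start from the default ordering produced by the IRerankingService.
        var ranked = await base.RerankAsync(query, chunks);

        // OrderByDescending is stable, so the base order is preserved
        // within each group; literal matches simply float to the top.
        return ranked
            .OrderByDescending(c => c.Contains(query, StringComparison.OrdinalIgnoreCase))
            .ToArray();
    }
}
```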
Utilizing a model typically involves sending a question and receiving a response. There are numerous examples available that demonstrate this process using Semantic Kernel or LangChain. Additionally, you will find many samples of workflows or pipelines that take a response and use it as a new request.
The process can be visualized as follows: Start with the question "What is the capital of China?" which results in the response "Beijing." This response is then used in a follow-up question, "How is the weather in Beijing?" leading to the response "Rainy."
All of this is based on the concept of "conversational AI."
However, Wisej.AI does not employ LLMs in this manner. Instead, it consistently supports context management, tools, and adapters through a composition pattern. Therefore, the smallest unit of AI utilization in Wisej.AI is always an agent.
Subsequently, since the Wisej.AI agents are utilized by adapters operating at the next layer, and these adapters have specific tasks and can incorporate multiple decisions and interactions, we can refer to them as super-agents.
Refer to the adapter sections of the documentation for straightforward examples of this approach.
Wisej.AI.Endpoints.SambaNovaEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.DeepSeekEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.GroqCloudEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.NvidiaAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.LocalAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)

Your job is to extract the values for the fields listed below.
## Rules:
- Complete the task without asking questions.
- Repeating field are indicated by --.
- Infer missing values from the provided text.
- Carefully follow the instructions for each field specified after the field name.
- Combine multiple values into one value, if necessary.
- Return null for missing values.
- Resolve all the missing values before returning the data.
- Output your response as a JSON object.
- When a value is missing, use all the available tools to find it.
```
Fields
Name (First name of the person.)
LastName (Last name of the person.)
Email (Email address. Deduct it from the text if not specified.)
Summary (Summarize the text in less than 256 characters.)
Spam (Detect whether the text is likely to be a SPAM message.)
```

The sample text submitted with the prompt:

```
Dear Client,
Here's introducing Glocal RPO, a global RPO that works with over 100+ clients and a team of 750+ employees.
We help clients reduce costs and support them in Sourcing and Recruiting for Healthcare, Life Sciences & Pharma sectors with our low-cost models.
I would be happy to run a trial on a few positions and showcase how we work.
If you have some time on Monday, can we connect?
Can we schedule a quick call so we can discuss this in more detail?
Eagerly awaiting your response!
Best Regards
Rohit Singh
Digital Marketing Head
[email protected]
```

The LLM's JSON response:

```
{
  "Name": "Rohit",
  "LastName": "Singh",
  "Email": "[email protected]",
  "Summary": "Glocal RPO offers cost-effective sourcing...",
  "Spam": true
}
```

For the tool example, the tools available to the model are declared in the prompt:

```
You can utilize the following tools:

// Returns the current time.
get_current_time();
```

The user asks:

```
What time is it?
```

The model responds with a tool invocation instead of an answer:

```
{
  "tool": "get_current_time()"
}
```

The agent executes the tool and appends the result to the conversation:

```
return get_current_time();
get_current_time() = "10:10:44 AM"
```

The model then produces the final response:

```
The current time is 10:10:44 AM.
```

Represents a connection to SambaNova endpoints, inheriting from OpenAIEndpoint.

```csharp
public class SambaNovaEndpoint : OpenAIEndpoint
```

```vb
Public Class SambaNovaEndpoint
    Inherits OpenAIEndpoint
```

This class is used to configure and manage connections to the SambaNova API, providing properties to set the model and URL for the endpoint.
Initializes a new instance of the SambaNovaEndpoint class with default settings.
The default authentication method is set to "Bearer", the model is set to "Meta-Llama-3.1-8B-Instruct", and the URL is set to "https://api.sambanova.ai/v1".
String: Gets or sets the model used by the SambaNova endpoint. (Default: "Meta-Llama-3.1-8B-Instruct")
String: Gets or sets the URL of the SambaNova endpoint. (Default: "https://fast-api.snova.ai/v1")
Boolean: (Default: True)
Initializes a new instance of InvokeToolEventArgs.
session
context
SmartSession: Gets the session associated with the event.
```csharp
public class InvokeToolEventArgs : HandledEventArgs
```

```vb
Public Class InvokeToolEventArgs
    Inherits HandledEventArgs
```

Represents a connection to DeepSeek endpoints, inheriting from OpenAIEndpoint.

```csharp
public class DeepSeekEndpoint : OpenAIEndpoint
```

```vb
Public Class DeepSeekEndpoint
    Inherits OpenAIEndpoint
```

This class is used to configure and manage connections to the DeepSeek API, providing properties to set the model and URL for the endpoint.
Initializes a new instance of the DeepSeekEndpoint class with default settings.
The default authentication method is set to "Bearer", the model is set to "deepseek-chat", and the URL is set to "https://api.deepseek.com".
String: Gets or sets the model used by the DeepSeek endpoint. (Default: "deepseek-chat")
String: Gets or sets the URL of the DeepSeek endpoint. (Default: "https://api.deepseek.com")
Boolean: (Default: True)
Initializes a new instance of Parameter.
String: Gets or sets the name of the parameter.
Object: Gets or sets the value of the parameter.
```csharp
public class Parameter
```

```vb
Public Class Parameter
```

https://console.groq.com/docs/openai

```csharp
public class GroqCloudEndpoint : OpenAIEndpoint
```

```vb
Public Class GroqCloudEndpoint
    Inherits OpenAIEndpoint
```

Initializes a new instance of the GroqCloudEndpoint class with default settings.
The default authentication method is set to "Bearer", and the default model is "llama-3.2-90b-vision-preview". The URL is set to "https://api.groq.com/openai/v1".
String: Gets or sets the model used for the GroqCloud endpoint. (Default: "llama-3.2-90b-vision-preview")
String: Gets or sets the URL for the GroqCloud API endpoint. (Default: "https://api.groq.com/openai/v1")
Boolean: (Default: True)
Initializes a new instance of the FieldPromptAttribute class with the specified prompt.
prompt
The prompt text associated with the property or control.
String: Gets or sets the prompt text associated with the property or control.
```csharp
public class FieldPromptAttribute : Attribute
```

```vb
Public Class FieldPromptAttribute
    Inherits Attribute
```

Represents a connection to NVIDIA AI endpoints, providing access to various AI models and services.

```csharp
public class NvidiaAIEndpoint : OpenAIEndpoint
```

```vb
Public Class NvidiaAIEndpoint
    Inherits OpenAIEndpoint
```

This class extends the OpenAIEndpoint to specifically connect to NVIDIA's AI services. It sets default values for the authentication method, model, embedding model, and URL.
Initializes a new instance of the NvidiaAIEndpoint class with default settings.
The default authentication method is set to "Bearer", the model to "meta/llama-3.1-405b-instruct", the embedding model to "nvidia/embed-qa-4", and the URL to "https://integrate.api.nvidia.com/v1".
String: Gets or sets the embedding model used for AI processing. (Default: "nvidia/embed-qa-4")
String: Gets or sets the model used for AI processing. (Default: "meta/llama-3.1-405b-instruct")
String: Gets or sets the URL for the NVIDIA AI endpoint. (Default: "https://integrate.api.nvidia.com/v1")
Boolean: (Default: True)
Represents a connection to LocalAI endpoints, providing access to various AI models and services.
public class LocalAIEndpoint : OpenAIEndpoint
Public Class LocalAIEndpoint
Inherits OpenAIEndpoint
This class extends the OpenAIEndpoint to connect specifically to LocalAI's API. It initializes with default models and a URL for the LocalAI service.
Initializes a new instance of the LocalAIEndpoint class with default settings.
String: Gets or sets the embedding model used for AI operations. (Default: "text-embedding-ada-002")
String: Gets or sets the model used for AI operations. (Default: "")
String: Gets or sets the URL for the LocalAI API endpoint. (Default: "http://localhost:8080/v1")
sender
e
Occurs when a parameter needs to be converted.
Public Delegate Sub ConvertParameterEventHandler(ByVal sender As [Object], ByVal e As ConvertParameterEventArgs)
public delegate void ConvertParameterEventHandler(Object sender, ConvertParameterEventArgs e)
Wisej.AI.Adapters.SmartCalendarAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents a smart calendar adapter that extends the functionality of a SmartAdapter.
public class SmartCalendarAdapter : SmartAdapter
The SmartCalendarAdapter class provides methods to process text or clipboard content asynchronously.
Initializes a new instance of the class.
This constructor initializes the SmartPrompt with a default prompt message.
Processes the content from the clipboard asynchronously.
Returns: . A task that represents the asynchronous operation.
This method reads text or image data from the clipboard and processes it asynchronously.
Throws:
Thrown when the adapter is busy.
Processes the specified text asynchronously.
Returns: . A task that represents the asynchronous operation.
This method sets the internal text and initiates the asynchronous processing.
Throws:
Thrown when the text is null.
Thrown when the adapter is busy.
Executes the core logic of the adapter asynchronously.
Returns: . A task that represents the asynchronous operation, containing the message response.
This method uses a session to ask a question based on the provided text and tracks the usage.
Wisej.AI.SmartAdapter FieldRectangleAttribute
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents an attribute that defines a rectangular area using a Rectangle associated to a property or a control.
public class FieldRectangleAttribute : Attribute
Public Class FieldRectangleAttribute
Inherits Attribute
This attribute can be used to specify a rectangle by providing either its dimensions or a string representation.
Initializes a new instance of the class with specified dimensions.
Initializes a new instance of the class with a rectangle specified as a string.
The string should be in a format that can be converted to a using the type converter.
: Gets or sets the associated with this attribute.
Wisej.AI.SmartAdapter FieldNameAttribute
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents an attribute that can be used to specify a custom field name for a property or control.
public class FieldNameAttribute : Attribute
Public Class FieldNameAttribute
Inherits Attribute
Initializes a new instance of the class with the specified field name.
: Gets or sets the custom field name associated with the property or control.
Wisej.AI.Adapters.SmartChartJS3Adapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents an adapter that enhances a ChartJS control with several AI features.
This class extends the functionality of the ChartJS3 control by integrating AI capabilities. It is part of the SmartAdapter API category and does not allow multiple extensions.
Initializes a new instance of .
: Gets the ChartJS3 control associated with this adapter.
This property is not browsable in the property grid and is hidden from designer serialization.
Called when a control is created.
This method initializes the ChartJS3Tools with the associated ChartJS3 control. It ensures that the base class's OnControlCreated method is also called.
Wisej.AI.SmartRealtimeSession
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Initializes a new instance of the class with the specified hub and optional system prompt.
Throws:
Thrown when the hub is null.
Initializes a new instance of the class with the specified endpoint and optional system prompt.
Throws:
Thrown when the endpoint is null.
Wisej.AI.Endpoints.TogetherAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a connection to TogetherAI endpoints, providing access to various AI models and services.
This class extends the to connect specifically to TogetherAI's API. It initializes with default models and a URL for the TogetherAI service.
Initializes a new instance of the class with default settings.
The default authentication method is set to "Bearer". The default models and URL are pre-configured for TogetherAI services.
: Gets or sets the embedding model used for AI operations. (Default: "togethercomputer/m2-bert-80M-8k-retrieval")
: Gets or sets the model used for AI operations. (Default: "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo")
: Gets or sets the URL for the TogetherAI API endpoint. (Default: "https://api.together.xyz/v1")
Wisej.AI.Endpoints.CerebrasEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a connection to Cerebras endpoints, inheriting from OpenAIEndpoint.
This class is specifically designed to connect to Cerebras AI endpoints, providing default values for the model and URL.
Initializes a new instance of the class with default settings.
The default authentication method is set to "Bearer", the model to "llama-3.3-70b", and the URL to "https://api.cerebras.ai/v1".
: Gets or sets the model used by the Cerebras endpoint. (Default: "llama-3.3-70b")
: Gets or sets the URL of the Cerebras endpoint. (Default: "https://api.cerebras.ai/v1")
: (Default: True)
Wisej.AI.SmartSession MessagesEventArgs
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Provides data for events that involve a collection of messages in the SmartSession API.
public class MessagesEventArgs : EventArgs
Public Class MessagesEventArgs
Inherits EventArgs
Initializes a new instance of the class with the specified list of messages.
: Gets the list of messages sent and received.
: Gets the session associated with the event.
Wisej.AI.SmartSession InvokeToolEventHandler
Wisej.AI.SmartSession MessagesEventHandler
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents the method that will handle message-related events for the SmartSession object.
public delegate void MessagesEventHandler(Object sender, MessagesEventArgs e)
Public Delegate Sub MessagesEventHandler(ByVal sender As [Object], ByVal e As MessagesEventArgs)
Wisej.AI.Controls.UVLightOverlay
Namespace: Wisej.AI.Controls
Assembly: Wisej.AI (3.5.0.0)
Represents a control that displays an animated UV light overlay to simulate an optical scanner.
public class UVLightOverlay : Control
Initializes a new instance of UVLightOverlay.
Renders the UV light overlay on the web client.
Wisej.AI.Endpoints.XAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a connection to X.AI endpoints, extending the functionality of OpenAIEndpoint.
This class is used to interact with the X.AI API, providing properties and methods specific to X.AI's capabilities.
Initializes a new instance of the class with default settings.
The default authentication method is set to "Bearer", the model to "grok-beta", and the URL to "https://api.x.ai/v1".
: Gets or sets the model used by the X.AI endpoint. (Default: "grok-beta")
: Gets or sets the URL of the X.AI API endpoint. (Default: "https://api.x.ai/v1")
: (Default: True)
Wisej.AI.SmartSession ErrorEventHandler
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents the method that will handle an error event in a SmartSession.
public delegate void ErrorEventHandler(Object sender, ErrorEventArgs e)
Public Delegate Sub ErrorEventHandler(ByVal sender As [Object], ByVal e As ErrorEventArgs)
How to keep track of the tokens
Most AI cloud providers charge based on the number of tokens processed. They differentiate between input tokens (the tokens you send) and output tokens (the tokens generated).
Wisej.AI tracks InputTokens, OutputTokens, and CallCount in the Usage property across multiple levels, providing detailed insights into how resources are being utilized:
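As a sketch, reading these counters after a call could look like the following (the `prompt` instance and the question text are assumptions; the Usage, CallCount, InputTokens, and OutputTokens members are as documented here):

```csharp
// Sketch only: "prompt" is an assumed, already-configured SmartPrompt.
var answer = await prompt.AskAsync("Hello");

// The Usage property (of type Metrics) is kept up-to-date by Wisej.AI.
var usage = prompt.Usage;
Console.WriteLine(
    $"Calls: {usage.CallCount}, " +
    $"Input tokens: {usage.InputTokens}, " +
    $"Output tokens: {usage.OutputTokens}");
```

The same property can be read at the session, endpoint, and hub levels to compare consumption at each layer.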
How to handle errors and log AI activity
All logging within Wisej.AI is managed through the ILoggerService interface. The default implementation forwards the log entries according to the specified TraceLevel.
You can set up your own logging implementation by registering a service that implements the ILoggerService interface. The optimal location for this registration is within the static constructor of the Program class.
Wisej.AI.SmartSession Message
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents a message within a SmartSession, containing various types of content such as text, image, and binary data.
The Message class is used to encapsulate different types of content that can be part of a session. It includes properties for text, image, and binary content, as well as methods to retrieve role and image data in specific formats.
Wisej.AI.Adapters.SmartFullCalendarAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
sender
e
Occurs before a tool is invoked.
Occurs after a tool is invoked.
sender
The source of the event.
e
A MessagesEventArgs that contains the event data.
Occurs before a request is sent.
Occurs after a response is received.
Occurs after the messages have been prepared and before they are sent to the AI.
RollingWindow
Uses a rolling window strategy to handle context overflow.
Summarization
Uses summarization to handle context overflow.
Gets or sets the strategy for handling context overflow.
Gets or sets the strategy for handling context overflow.
sender
The source of the event.
e
An ErrorEventArgs that contains the event data.
Occurs when an error is encountered.
x
The x-coordinate of the upper-left corner of the rectangle.
y
The y-coordinate of the upper-left corner of the rectangle.
width
The width of the rectangle.
height
The height of the rectangle.
rectangle
A string representation of the rectangle.
name
The custom field name to associate with the property or field.
session
The SmartSession associated with the event.
messages
The list of messages associated with the event.
Represents the method that will handle message-related events for the SmartSession object.
chunks
similarity
config
The dynamic configuration object used to render the control.
Public Class UVLightOverlay
Inherits Control
control
The control that was created.
Represents a provider that supplies tools.
Public Class SmartChartJS3Adapter
Inherits SmartChartAdapter
public class SmartChartJS3Adapter : SmartChartAdapter
hub
The smart hub associated with the session.
systemPrompt
The optional system prompt for the session. Default is null.
endpoint
The smart endpoint associated with the session.
systemPrompt
The optional system prompt for the session. Default is null.
Public Class SmartRealtimeSession
Inherits SmartSession
public class SmartRealtimeSession : SmartSession
Public Class TogetherAIEndpoint
Inherits OpenAIEndpoint
public class TogetherAIEndpoint : OpenAIEndpoint
Public Class CerebrasEndpoint
Inherits OpenAIEndpoint
public class CerebrasEndpoint : OpenAIEndpoint
Public Class XAIEndpoint
Inherits OpenAIEndpoint
public class XAIEndpoint : OpenAIEndpoint
static class Program
{
static Program()
{
Application.Services
.AddOrReplaceService<ILoggerService, MyLoggerService>();
}
}
Module Program
Shared Sub New()
Application.Services.AddOrReplaceService( _
Of ILoggerService, MyLoggerService)()
End Sub
End Module
When using an AI provider, several types of errors can occur: channel errors, provider errors, server errors, and model errors.
Channel Errors: These involve issues with communication, such as loss of connectivity or network problems.
Provider Errors: These are related to your account with the AI provider, such as exceeding credit or usage limits.
Server Errors: Related to issues on the server side which might affect availability or performance.
Model Errors: These generally pertain to payload construction, token limits, or other constraints specific to the AI model being used.
If an error occurs while using the AI directly through a SmartPrompt.AskAsync call, you can handle it by wrapping the call in a try/catch block. This allows you to manage exceptions effectively as your code processes the response.
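A minimal sketch of this pattern (the `prompt` instance and the question text are assumptions; the concrete exception types thrown depend on the provider and the error category listed above):

```csharp
// Sketch only: "prompt" is an assumed, already-configured SmartPrompt.
try
{
    var response = await prompt.AskAsync("Summarize the attached report.");
    // ...process the response...
}
catch (Exception ex)
{
    // Channel, provider, server, and model errors all surface here;
    // inspect the exception to decide whether to retry or notify the user.
    Console.WriteLine($"AI call failed: {ex.Message}");
}
```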
If an error occurs while using the AI through a SmartAdapter, and the SmartAdapter is connected to a SmartHub (as it should be), you can manage the error using the SmartHub.Error event. However, if the adapter is not connected to a hub and you are using the RunAsync() method (or its variations) directly, you can manage errors by wrapping the call in a try/catch block.
If an error occurs while using a SmartSession instance, you can handle it by utilizing the SmartSession.Error event to manage the error.
The error event allows you to modify the last AI response and the assistant's response, effectively allowing you to "simulate" an AI response. To achieve this, you can handle the error (or override the OnError method) by adjusting the ReplacementMessage or changing the Text property of the predefined ReplacementMessage (which defaults to the exception message). Then, set the Handled property to true. This indicates that the error has been managed and that your custom response should be used.
text
The text to be processed.
control
The control associated with the operation.
Represents a provider that supplies tools.
Public Class SmartCalendarAdapter
Inherits SmartAdapter
this.hub1.Error += (s, e) =>
{
if (IsNetworkError(e.Exception))
{
e.ReplacementMessage.Text =
"We're currently experiencing a temporary communication issue. " +
"Please try again later or reach out to support at...";
e.Handled = true;
}
};
AddHandler Me.hub1.Error, Sub(s, e)
If IsNetworkError(e.Exception) Then
e.ReplacementMessage.Text =
"We're currently experiencing a temporary communication issue. " & _
"Please try again later or reach out to support at..."
e.Handled = True
End If
End Sub
var adapter = new SmartCalendarAdapter();
await adapter.FromClipboardAsync();
var adapter = new SmartCalendarAdapter();
await adapter.FromTextAsync("Sample text");
The total usage related to the SmartPrompt instance is collected, allowing for a comprehensive analysis of resource consumption tied specifically to that instance.
The total usage for the lifetime of the session is collected, offering detailed insights into resource utilization for the entire duration of the session.
Usage metrics utilized by the message. Only the ToolCall and Assistant message roles carry these metrics, as they represent the responses from the LLM, which is what provides the metrics data.
The total usage for the lifetime of the endpoint instance is collected continuously, providing a complete view of resource consumption throughout the instance's duration.
The total usage related to the hub instance is collected continuously and does not reset even if the endpoint associated with the hub changes.
All the components mentioned above expose the Usage property of type Metrics. Wisej.AI ensures that this property is kept up-to-date at all times. The usage data is parsed directly from the LLM response using the ReadUsage method in the SmartEndpoint implementation, and it is then tallied across the different layers.
For example, if you use the SmartChatBoxAdapter without any tools and engage in a simple chat, the usage would look like this:
SmartPrompt
SmartSession
SmartSession.Message
We made one call by sending "Hello," which is 1 token, and received a response of 9 tokens. However, the AI provider logged 648 input tokens! This is because we didn't just send "Hello"; we also included the built-in Wisej.AI system prompt, which has a minimum size of 648 tokens. If you incorporated some tools, you could easily exceed 2,000 tokens.
SmartPrompt
SmartSession
SmartSession.Message
The cost with OpenAI for the queries described above would involve calculating the total number of input and output tokens used and applying the pricing model provided by OpenAI.
Input Tokens: $2.50 / 1M tokens * 1312 = $0.00328
Output Tokens: $10 / 1M tokens * 43 = $0.00043
Total Cost: $0.00371
If we repeated the scenario 1,000 times, the cost would be approximately $3.71. This estimation is based on the number of tokens used in each request and the pricing model typically applied by OpenAI for token usage.
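The arithmetic above can be reproduced directly from the tracked Usage counters (the per-million prices are the example rates quoted above, not current OpenAI pricing):

```csharp
// Example rates from the text above: $2.50 / 1M input, $10 / 1M output.
const double inputPricePerMillion  = 2.50;
const double outputPricePerMillion = 10.0;

// Token counts from the scenario above (1312 input, 43 output).
double cost = 1312 * inputPricePerMillion  / 1_000_000
            +   43 * outputPricePerMillion / 1_000_000;

// cost ≈ $0.00371 per run, so roughly $3.71 for 1,000 runs.
Console.WriteLine($"Estimated cost: ${cost:F5}");
```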
Local hosting operates under a different cost structure. If you're using a hosted virtual machine (VM), the pricing is typically charged by the hour. In contrast, if you're using owned hardware, the costs involve the purchase of the hardware and its housing, either on-premises or in a data center.
The granular approach Wisej.AI employs to track usage is beneficial in this scenario as well, enabling you to compare the total cost of ownership when using local hosting against the usage cost of an AI cloud provider.
Wisej.AI utilizes the ContextWindow property value configured on the endpoint in use, along with the current usage metrics, to proactively optimize and trim messages before submitting them to the model.
When Wisej.AI detects that the payload about to be submitted exceeds the allowed context window size in tokens, it employs the ISessionTrimmingService to trim the messages within the session. This process aims to maximize the preservation of memory for the model. The trimming strategy used is governed by the service and is influenced by the values of the TrimmingStrategy and TrimmingPercentage properties.
RollingWindow Approach:
In this method, Wisej.AI reduces the entire message history by half. Crucially, it preserves the System Prompt at the top of the message list. It then methodically removes tool calls and tool responses in pairs to maintain a balanced history. If further reduction is necessary, it starts removing user and assistant messages from the top of the list.
Summarization Approach:
In the summarization method, Wisej.AI constructs a summarization payload that includes half of all the messages, except for the System Prompt at the top which remains intact. This payload is then sent to a summarization prompt (specified under the "[SmartSession.Summarization]" key). The outcome of this process is that all the messages are replaced with a single assistant message containing the summary of half of the history.
Custom:
To implement your own trimming strategy, you need to create a class that implements the ISessionTrimmingService interface and replace the default service with your implementation. Within this class, you can utilize other models or libraries to efficiently manage the overflow of session messages.
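A sketch of what such a replacement could look like. Note that the actual members of ISessionTrimmingService are not shown in this document, so the `TrimAsync` signature below is a hypothetical placeholder; only the registration pattern (Application.Services.AddOrReplaceService, shown earlier for ILoggerService) is taken from the documentation:

```csharp
// Hypothetical member signature -- consult the ISessionTrimmingService
// definition in Wisej.AI for the real interface contract.
public class MyTrimmingService : ISessionTrimmingService
{
    public Task TrimAsync(SmartSession session)
    {
        // A custom strategy could, for example, drop the oldest
        // tool-call/tool-response pairs first while always preserving
        // the System Prompt at the top of the message list.
        return Task.CompletedTask;
    }
}

static class Program
{
    static Program()
    {
        // Same registration pattern as for ILoggerService.
        Application.Services
            .AddOrReplaceService<ISessionTrimmingService, MyTrimmingService>();
    }
}
```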
Initializes a new instance of Message.
BinaryContent: Gets or sets the binary content of the message.
String: Gets the "finish_reason" returned by the model.
Image: Gets or sets the image content of the message.
MessageRole: Gets or sets the role associated with the message.
String: Gets or sets the text content of the message.
Metrics: Gets the usage metrics associated with the message.
Converts the image content of the message to a Base64 string.
Returns: String. A Base64 string representation of the image content.
If the image is null, an empty string is returned. Otherwise, the image is serialized to a Base64 string.
Converts the image content of the message to a Base64 Data URL string.
Returns: String. A Base64 Data URL string representation of the image content.
If the image is null, an empty string is returned. Otherwise, the image is serialized to a Base64 Data URL string.
Returns the media type string for the image content.
Returns: String. The media type string, or an empty string if the image is null.
Retrieves the role of the message as a string.
Returns: String. A string representing the role of the message.
The method returns a string representation of the message role, which can be "system", "assistant", or "user".
Public Class Message
public class Message
Initializes a new instance of the ErrorEventArgs class.
session
The SmartSession where the error occurred.
exception
The exception that was thrown.
replacementMessage
The message to replace the original answer, if applicable.
Exception: Gets the exception that was thrown.
Message: Gets or sets the message to replace the original answer, if applicable.
SmartSession: Gets the SmartSession where the error occurred.
Represents the method that will handle an error event in a SmartSession.
Public Class ErrorEventArgs
Inherits HandledEventArgs
public class ErrorEventArgs : HandledEventArgs
public class SmartFullCalendarAdapter : SmartCalendarAdapter
Public Class SmartFullCalendarAdapter
Inherits SmartCalendarAdapter
Initializes a new instance of SmartFullCalendarAdapter.
FullCalendar: Gets the FullCalendar control associated with this adapter.
This property retrieves the first FullCalendar control found within the adapter's controls collection.
Called when a control is created within the adapter.
control
The control that has been created.
This method initializes the FullCalendarTools with the associated FullCalendar control and then calls the base implementation.
Represents a provider that supplies tools.
Initializes a new instance of the SmartTool class with the specified target and method.
target
The target object on which the method will be invoked.
method
The method to be invoked on the target object.
Throws:
ArgumentNullException Thrown when the method is null.
String: Gets the description of the method.
String: Gets the full name of the method including the namespace.
String: Gets the name of the method.
String: Gets the description of the namespace.
Parameter[]: Gets the parameters of the method.
Type: Gets the return type of the method.
Object: Gets the object containing this tool.
Returns a JSON schema representing the tool's parameters or null when the function doesn't have arguments.
Returns: Object. JSON schema object
Public Class SmartTool
public class SmartTool
Initializes a new instance of the Embedding class with specified chunks, vectors, and model.
chunks
An array of strings representing the data chunks to be embedded.
vectors
A jagged array of floats representing the vectors associated with the chunks.
model
A string representing the model used for embedding.
This constructor initializes the Embedding class by setting the provided data chunks, their corresponding vectors, and the embedding model. The chunks parameter is used to input the raw data which will be embedded. The vectors parameter provides the associated vector representations for these chunks. The model specifies the model name used in the embedding process. Here is an example of how to create an instance of the Embedding class:
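The example the text refers to could look like the following sketch (the chunk strings, vector values, and model name are illustrative; real vectors come from an embedding endpoint and their dimensionality depends on the model):

```csharp
// Illustrative data only: real vectors are produced by an embedding model.
string[] chunks =
{
    "First paragraph of the document.",
    "Second paragraph of the document."
};

float[][] vectors =
{
    new float[] { 0.12f, -0.03f,  0.88f },
    new float[] { 0.05f,  0.41f, -0.27f }
};

var embedding = new Embedding(chunks, vectors, "text-embedding-ada-002");
```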
String[]: Gets the chunks of data that were embedded.
String: Gets the model name used for embedding.
Single[][]: Gets the vectors associated with the data chunks.
Adds the vectors and chunks from embedding .
embedding
The instance providing the values to combine.
Returns: Embedding.
Creates a deep copy of the current Embedding instance.
Public Class Embedding
Implements ICloneable
public class Embedding : ICloneable
Assistant
Indicates that the message is from an assistant.
System
Indicates that the message is from the system.
ToolCalls
Indicates that the message involves tool calls.
ToolResults
Indicates that the message contains tool results.
User
Indicates that the message is from a user.
Adds a message with the specified role and content.
Gets or sets the role associated with the message.
Public Enum MessageRole As [Enum]
public enum MessageRole : Enum
Retrieves the API key associated with the specified name.
name
The name of the API key to retrieve.
namePostfix
An optional postfix to remove from the name when searching for the key.
envPrefix
The prefix for environment variables. Default is "WISEJ_AI_APIKEY_".
Returns: String. The API key if found; otherwise, null.
This method first attempts to retrieve the API key from environment variables using the specified prefix. If not found, it attempts to read the key from a JSON configuration file located at the path defined by AI_PATH and APIKEYS_FILENAME.
Public Class ApiKeys
public class ApiKeys
Creates a new instance of the ToolAttribute class.
Creates a new instance of the ToolAttribute class initialized with the specified tool name.
name
String: Gets the name of the tool function when it should be different from the method name.
Public Class ToolAttribute
Inherits Attribute
public class ToolAttribute : Attribute
Wisej.AI.Adapters.SmartChartAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents an adapter that enhances a chart control with AI features.
public class SmartChartAdapter : SmartAdapter
The SmartChartAdapter class extends the functionality of a chart control by integrating AI capabilities. It allows for dynamic chart generation and interaction based on the provided data source and prompt.
Initializes a new instance of the class.
: Gets or sets the data source for the chart. (Default: null)
The data source is used by the AI to generate the chart based on the specified prompt.
: Gets or sets the default chart type to be used. (Default: "Bar")
: Gets the description of the chart generated by the adapter.
The description is generated based on the AI response to the provided prompt and data source.
: Gets or sets the prompt used to generate the chart. (Default: null)
Changing the prompt will trigger a re-run of the AI process if is enabled and controls are present.
Called when a control is created and initializes the AI process if is enabled.
Executes the core AI process asynchronously to generate a chart.
Returns: . A task representing the asynchronous operation, with a containing the AI response.
The method uses the provided data source and default chart type to interact with the AI session and generate a chart description.
Wisej.AI.Adapters.SmartComboBoxAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Adds semantic filtering to the ComboBox auto-complete functionality.
public class SmartComboBoxAdapter : SmartAdapter, IExtenderProvider
Works with the ComboBox control. Semantic filtering (items embedding) is hosted entirely in the user's browser and runs in JavaScript, using the HuggingFace Transformers.js library for in-browser embeddings.
Initializes a new instance of the class.
: Gets or sets a value indicating whether the adapter should run automatically. (Default: True)
: Gets or sets the embeddings model. See for available models. The default is "Xenova/all-MiniLM-L6-v2". (Default: null)
Gets the similarity level threshold needed for an item to qualify, ranging from 0.1 to 1.0 (1 means identical).
Returns: . The minimum similarity level.
Throws:
Thrown when the control is null.
Handles the event when a control is created.
Handles the event when a control is disposed.
Sets the similarity level threshold for an item to qualify.
Throws:
Thrown when the control is null.
Thrown when the value is not between 0 and 1.
Wisej.AI.Endpoints.LocalAIEndpointImageGen
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an endpoint for connecting to LocalAI Image Generation services.
This class is used to interact with the LocalAI DallE API for generating images. It sets up the necessary authentication and model parameters required for the API requests.
Initializes a new instance of the class.
Sets default values for authentication, model, and URL specific to the DallE API.
: Gets or sets the model used for the DallE API. (Default: "dall-e-3")
: Gets or sets the URL for the DallE API endpoint. (Default: "https://api.LocalAI.com/v1/images/generations")
Adds user messages to the payload for the API request.
Throws:
Thrown when any of the parameters are null.
Reads the assistant's message from the API response.
Parses the response to extract image URLs or base64 encoded images and assigns them to the message.
Throws:
Thrown when any of the parameters are null.
Reads the usage information from the API response.
This method does not process usage information as it is not returned by the API.
Wisej.AI.Endpoints.HuggingFaceEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a connection to HuggingFace serverless endpoints for model inference and embeddings.
This class is used to interact with HuggingFace's API for model inference and embeddings. It provides methods to construct API URLs, add options and messages to requests, and read responses.
Initializes a new instance of the class with default settings.
: Gets or sets the model used for generating embeddings. (Default: "sentence-transformers/all-MiniLM-L6-v2")
: Gets or sets the maximum number of output tokens. (Default: 2048)
: Gets or sets the model used for inference. (Default: "meta-llama/Llama-3.2-11B-Vision-Instruct")
: Gets or sets the base URL for the HuggingFace API. (Default: "https://api-inference.huggingface.co")
Builds the payload for an embeddings request.
Returns: . The constructed payload object.
This method constructs the payload for an embeddings request, including options such as wait_for_model.
Constructs the API URL for model inference.
Returns: . The constructed API URL.
Constructs the API URL for embeddings.
Returns: . The constructed embeddings URL.
Reads the embeddings from the API response.
Returns: . A two-dimensional array of floats representing the embeddings.
This method parses the response to extract the embeddings data.
Wisej.AI.Endpoints.LocalAIEndpointTTS
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an endpoint for connecting to LocalAI's speech services.
This class is designed to interact with LocalAI's text-to-speech (TTS) API, allowing for the conversion of text into speech audio files.
Initializes a new instance of the class with default settings.
The default model is set to "tts-1", and the voice option is "alloy". The response format is set to "mp3".
: Gets or sets the model used for the TTS operation. (Default: "tts-1")
: Gets or sets the URL for the LocalAI TTS API endpoint. (Default: "http://localhost:8080/v1/audio/speech")
Adds user messages to the payload for processing.
Throws:
Thrown when session, payload, or messages is null.
Reads the assistant's message from the response.
Throws:
Thrown when response or message is null.
Reads the usage information from the reply.
This method does not return usage information as it is not applicable for TTS operations.
Wisej.AI.Adapters.SmartQueryAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents an adapter that generates a data set from a database schema and a user-provided description.
public class SmartQueryAdapter : SmartAdapter
The class is designed to interact with a database schema and generate SQL queries based on user input. It utilizes a to facilitate the query generation process.
Initializes a new instance of the class.
: Gets or sets the database connection used to execute the query.
: Gets or sets the data source binding for the query results.
: Gets or sets the query description provided by the user.
When the property is set to true, setting this property will automatically execute the query asynchronously.
: Gets or sets the database schema used to generate the query.
: Gets or sets the type of the database server. (Default: "Microsoft Sql Server")
: Gets the SQL statement generated from the query description.
Returns the JSON string returned in the message by stripping the enclosing ```sql and ``` markers if present.
Returns: . JSON string.
Asynchronously runs the core logic to generate and execute the SQL query.
Returns: . A task representing the asynchronous operation, with a result containing the query execution details.
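The workflow described above can be sketched as follows. This is a minimal, unverified sketch: the property names (Connection, Schema, ServerType, QueryDescription, SqlStatement) are inferred from the property descriptions above and are not confirmed API names.

```csharp
using System;
using System.Data.SqlClient;
using Wisej.AI.Adapters;

var adapter = new SmartQueryAdapter();

// The database connection used to execute the generated query.
adapter.Connection = new SqlConnection("Server=.;Database=Northwind;Trusted_Connection=True;");

// The schema the model uses to generate the SQL (hypothetical content).
adapter.Schema = "CREATE TABLE Customers (Id INT, Name NVARCHAR(100), Country NVARCHAR(50));";
adapter.ServerType = "Microsoft Sql Server";

// Setting the description triggers query generation; with AutoRun enabled
// the generated SQL is also executed asynchronously.
adapter.QueryDescription = "All customers from Italy, ordered by name";

// After the run completes, the generated SQL statement is available.
Console.WriteLine(adapter.SqlStatement);
```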
Wisej.AI.SmartEndpoint Response
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents a response that can be initialized with a stream or a string and provides methods to read its content.
public class Response
Initializes a new instance of the Response class with the specified stream.
Throws:
Thrown when the stream is null.
Initializes a new instance of the class with the specified string.
Throws:
Thrown when the response is null.
: Gets a copy of the underlying stream.
The returned stream is a copy of the original stream, allowing for independent reading without affecting the original stream's position.
Asynchronously reads the content of the response as a byte array.
Returns: . A task that represents the asynchronous read operation. The task result contains the content of the response as a byte array.
This method reads the entire content of the stream and returns it as a byte array.
Asynchronously reads the content of the response as a string.
Returns: . A task that represents the asynchronous read operation. The task result contains the content of the response as a string.
This method reads the entire content of the stream and returns it as a string.
Wisej.AI.SmartSession ConvertParameterEventArgs
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
public class ConvertParameterEventArgs : HandledEventArgs
Initializes a new instance of the ConvertParameterEventArgs class with the specified parameter.
: Gets the name of the parameter.
: Gets the SmartSession associated with the event.
: Gets or sets the value of the parameter after conversion.
Wisej.AI.Endpoints.OpenAIEndpointTTS
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.LocalAIEndpointWhisper
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.OpenAIEndpointWhisper
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.SmartAdapter WorksWithAttribute
Wisej.AI.Adapters.SmartPictureBoxAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents an adapter for a PictureBox that generates images based on a description using the OpenAI DALL-E endpoint.
Wisej.AI.Adapters.SmartAudioWhisperAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Transcribes the audio file of the SourceURL of the associated control to its Text property in the original language of the audio.
Wisej.AI.Embeddings.EmbeddedDocument
Namespace: Wisej.AI.Embeddings
Assembly: Wisej.AI (3.5.0.0)
Represents a document that can be embedded with metadata and embedding data.
The class is designed to hold information about a document that can be embedded within a system. It contains properties for the document's name, metadata, and embedding data. This class provides constructors for initializing a document with or without embedding data. The embedded documents can be used in systems that require document similarity measures or need to store additional metadata for each document.
Wisej.AI.SmartAgentPrompt
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Endpoints.OpenAIEndpointDallE
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
> User: Hello
> Assistant: Hello! How can I assist you today?
> User: Tell me something nice
> Assistant: You are capable of achieving great things, and your
potential is limitless. Remember, every day is a new opportunity
to grow and make a positive impact. Keep shining bright!
string[] chunks = { "data1", "data2" };
float[][] vectors = { new float[] { 1.0f, 2.0f }, new float[] { 3.0f, 4.0f } };
string model = "exampleModel";
Embedding embedding = new Embedding(chunks, vectors, model);
string apiKey = ApiKeys.GetApiKey("MyService", "Postfix");
if (apiKey != null)
{
// Use the API key
}
SmartEndpoint
SmartHub
control
The control that was created.
control
The control for which the chart is generated.
Represents an adapter that enhances a ChartJS control with several AI features.
Represents a provider that supplies tools.
Public Class SmartChartAdapter
Inherits SmartAdapter
control
The control for which to get the similarity level.
control
The control that was created.
control
The control that was disposed.
control
The control for which to set the similarity level.
value
The similarity level, must be between 0 and 1.
Represents a provider that supplies tools.
Public Class SmartComboBoxAdapter
Inherits SmartAdapter
Implements IExtenderProvider
message
Message with the response text that may be a JSON string.
control
The control associated with the operation, if any.
Represents a provider that supplies tools.
Public Class SmartQueryAdapter
Inherits SmartAdapter
stream
The stream to initialize the response with.
response
The string to initialize the response with.
session
The session associated with the event.
name
The name of the parameter to be converted.
value
Value to convert
payload
The dynamic payload object to be sent to the API.
session
The current session containing user context.
messages
The list of messages to be processed.
response
The response received from the API.
message
The message object to populate with the response data.
message
The message object to populate with usage data.
reply
The dynamic reply object containing the response data.
public class LocalAIEndpointImageGen : SmartHttpEndpoint
inputs
The input strings for which embeddings are requested.
response
The API response containing the embeddings.
public class HuggingFaceEndpoint : OpenAIEndpoint
payload
The dynamic payload object to be sent to the API.
session
The current session containing user context.
messages
A list of messages to be processed.
response
The response received from the API.
message
The message object to populate with the response data.
message
The message object to populate with usage data.
reply
The dynamic reply object containing usage information.
public class LocalAIEndpointTTS : SmartHttpEndpoint
Represents an endpoint for connecting to OpenAI's speech services.
public class OpenAIEndpointTTS : SmartHttpEndpoint
This class is designed to interact with OpenAI's text-to-speech (TTS) API, allowing for the conversion of text into speech audio files.
Initializes a new instance of the OpenAIEndpointTTS class with default settings.
The default model is set to "tts-1", and the voice option is "alloy". The response format is set to "mp3".
String: Gets or sets the model used for the TTS operation. (Default: "tts-1")
String: Gets or sets the URL for the OpenAI TTS API endpoint. (Default: "https://api.openai.com/v1/audio/speech")
Adds user messages to the payload for processing.
payload
The dynamic payload object to be sent to the API.
session
The current session containing user context.
messages
A list of messages to be processed.
Throws:
ArgumentNullException Thrown when session, payload, or messages is null.
Reads the assistant's message from the response.
response
The response received from the API.
message
The message object to populate with the response data.
Throws:
ArgumentNullException Thrown when response or message is null.
Reads the usage information from the reply.
message
The message object to populate with usage data.
reply
The dynamic reply object containing usage information.
This method does not return usage information as it is not applicable for TTS operations.
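Configuring the TTS endpoint described above might look like this. A minimal sketch only: the Model and URL property names are inferred from the property descriptions in these docs and should be verified against the actual Wisej.AI API; the defaults shown match the documented values.

```csharp
using Wisej.AI.Endpoints;

// Defaults per the docs above: model "tts-1", voice "alloy", format "mp3".
var tts = new OpenAIEndpointTTS();

// Both assignments restate the documented defaults explicitly.
tts.Model = "tts-1";
tts.URL = "https://api.openai.com/v1/audio/speech";
```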
Represents an endpoint for connecting to LocalAI's Whisper model for speech-to-text transcriptions.
public class LocalAIEndpointWhisper : SmartHttpEndpoint
This class is designed to interact with LocalAI's Whisper model, providing functionality to send audio data and receive transcriptions. It extends the SmartHttpEndpoint to leverage HTTP communication.
Initializes a new instance of the LocalAIEndpointWhisper class with default settings.
Sets the authentication method to "Bearer", the model to "whisper-1", and the response format to "text". The URL is set to LocalAI's audio transcription endpoint.
String: Gets or sets the model used for transcription. (Default: "whisper-1")
String: Gets or sets the URL for the LocalAI audio transcription endpoint. (Default: "http://localhost:8080/v1/audio/transcriptions")
Adds user messages to the payload for the request.
payload
The payload to which messages will be added.
session
The current session containing user data.
messages
The list of messages to be processed.
Throws:
ArgumentNullException Thrown if any of the parameters are null.
Creates the HTTP content for the request using the provided data.
data
The data to be included in the request content.
Returns: HttpContent. A HttpContent object containing the request data.
Reads the assistant's message from the response and updates the message object.
response
The response received from the server.
message
The message object to be updated with the assistant's reply.
Throws:
ArgumentNullException Thrown if any of the parameters are null.
Reads the usage information from the reply. Not supported in this implementation.
message
The message object to be updated with usage information.
reply
The reply containing usage data.
Represents an endpoint for connecting to OpenAI's Whisper model for speech-to-text transcriptions.
public class OpenAIEndpointWhisper : SmartHttpEndpoint
This class is designed to interact with OpenAI's Whisper model, providing functionality to send audio data and receive transcriptions. It extends the SmartHttpEndpoint to leverage HTTP communication.
Initializes a new instance of the OpenAIEndpointWhisper class with default settings.
Sets the authentication method to "Bearer", the model to "whisper-1", and the response format to "text". The URL is set to OpenAI's audio transcription endpoint.
String: Gets or sets the model used for transcription. (Default: "whisper-1")
String: Gets or sets the URL for the OpenAI audio transcription endpoint. (Default: "https://api.openai.com/v1/audio/transcriptions")
Adds user messages to the payload for the request.
payload
The payload to which messages will be added.
session
The current session containing user data.
messages
The list of messages to be processed.
Throws:
ArgumentNullException Thrown if any of the parameters are null.
Creates the HTTP content for the request using the provided data.
data
The data to be included in the request content.
Returns: HttpContent. A HttpContent object containing the request data.
Reads the assistant's message from the response and updates the message object.
response
The response received from the server.
message
The message object to be updated with the assistant's reply.
Throws:
ArgumentNullException Thrown if any of the parameters are null.
Reads the usage information from the reply. Not supported in this implementation.
message
The message object to be updated with usage information.
reply
The reply containing usage data.
Initializes a new instance of the WorksWithAttribute class with the specified type and allowed status.
type
The type that the class works with. Must be a subclass of SmartEndpoint.
allowed
Indicates whether the type is allowed. Default is true.
Throws:
ArgumentNullException
Thrown when the type is null.
ArgumentException Thrown when the type is not a subclass of SmartEndpoint.
Initializes a new instance of the WorksWithAttribute class with the specified type name and allowed status.
type
The name of the type that the class works with. Must be a subclass of SmartEndpoint.
allowed
Indicates whether the type is allowed. Default is true.
Throws:
ArgumentNullException
Thrown when the type is null.
ArgumentException Thrown when the type is not a subclass of SmartEndpoint.
Boolean: Gets a value indicating whether the SmartEndpoint is allowed.
Type: Gets the SmartEndpoint that the attribute allows or disallows.
String: Gets the type name of the SmartEndpoint that the attribute allows or disallows.
public class WorksWithAttribute : Attribute
public class SmartPictureBoxAdapter : SmartAdapter
This adapter extends the functionality of a PictureBox by allowing it to generate images from text descriptions. It works with the OpenAI DALL-E endpoint and supports different image formats and sizes.
Initializes a new instance of SmartPictureBoxAdapter.
String: Gets or sets the format in which the generated images are returned. Must be one of "url" or "b64_json". (Default: "url")
String: Gets or sets the size of the generated images. Must be one of "256x256", "512x512", or "1024x1024". Smaller images are faster. (Default: "1024x1024")
For DALL-E-2, image sizes must be one of "256x256", "512x512", or "1024x1024". When using DALL-E-3, images can have a size of 1024x1024, 1024x1792, or 1792x1024 pixels.
Handles the event when the control is created.
control
The control that was created.
Handles the event when the control is disposed.
control
The control that was disposed.
Asynchronously runs the core logic for generating an image based on the control's text.
control
The control containing the text description.
Returns: Task<Message>. A task representing the asynchronous operation, with a Message result containing the generated image or image URL.
Throws:
ArgumentNullException Thrown when the control is null.
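A sketch of the image-generation flow described above. The ImageFormat and ImageSize property names are assumptions based on the property descriptions (the original names were lost in extraction); RunAsync is the adapter run method referenced elsewhere in these docs.

```csharp
using Wisej.AI.Adapters;
using Wisej.Web;

// The control's text provides the image description.
var pictureBox = new PictureBox { Text = "A watercolor painting of a lighthouse at dusk" };

var adapter = new SmartPictureBoxAdapter();
adapter.ImageFormat = "url";       // or "b64_json"
adapter.ImageSize = "1024x1024";   // DALL-E-2 accepts 256x256, 512x512, 1024x1024

// Generate the image from the control's text description.
var message = await adapter.RunAsync(pictureBox);
```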
Represents a provider that supplies tools.
public class SmartPictureBoxAdapter : SmartAdapter
Boolean: Gets a value indicating whether the provider has any tools available.
ToolCollection: Gets the collection of tools available from the provider.
Represents a smart parallel prompt that can execute multiple tasks concurrently.
Represents an abstract base class for creating smart adapters that interact with AI endpoints.
Represents a SmartHub component that provides AI capabilities to controls within a container.
Represents a smart prompt component that can process and manage prompts with tools and parameters.
Converts the of the associated control into lifelike speech.
public interface IToolProvider
Initializes a new instance of MessageCollection.
Adds a message with the specified role and content.
role
The role of the message.
content
The content of the message.
Returns: MessageCollection. The current MessageCollection instance.
Throws:
ArgumentNullException Thrown when the content is null.
Adds a range of messages to the collection.
messages
The messages to add.
Returns: MessageCollection. The current MessageCollection instance.
Creates a shallow clone of the collection.
Returns: MessageCollection. A shallow clone of the MessageCollection.
Inserts a message at the specified index.
index
The zero-based index at which the message should be inserted.
item
The message to insert.
Moves a message from one index to another.
oldIndex
The zero-based index specifying the location to move from.
newIndex
The zero-based index specifying the location to move to.
Removes all messages that match the specified predicate.
predicate
The predicate to match messages.
Returns: MessageCollection. The current MessageCollection instance.
Removes the last message(s) from the collection.
count
The number of messages to remove. Default is 1.
Returns: MessageCollection. The current MessageCollection instance.
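The fluent style implied by the documented return values (each method returns the current MessageCollection) can be sketched as follows. The Add and RemoveLast method names follow the descriptions above and are assumptions, not verified signatures.

```csharp
using Wisej.AI;

var messages = new MessageCollection();

// Each call returns the collection, so calls can be chained.
messages
    .Add("system", "You are a helpful assistant.")
    .Add("user", "Summarize this document.");

// Remove the last message (count defaults to 1).
messages.RemoveLast();
```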
public class MessageCollection : ObservableCollection<Message>, ICloneable
public class SmartAudioWhisperAdapter : SmartAdapter
Works with:
OpenAI Whisper
GroqCloud Whisper
Set the SourceURL property of the associated Audio control to the audio file that you want to transcribe to text. When AutoRun is true and the user presses the play button, the adapter automatically starts the transcription. If AutoRun is set to false, use RunAsync (where control is the Audio control) to start the transcription. You can use the SystemPrompt to improve the quality of the transcripts generated by the Whisper model. The model tries to match the style of the prompt, so it is more likely to use capitalization and punctuation if the prompt does too. For more information, see the documentation on prompting.
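The steps above can be sketched as follows. SourceURL, AutoRun, SystemPrompt, and RunAsync are named in the text; the Audio control type and the file path are illustrative assumptions.

```csharp
using Wisej.AI.Adapters;
using Wisej.Web;

// Point the Audio control at the file to transcribe (hypothetical path).
var audio = new Audio();
audio.SourceURL = "~/Uploads/interview.mp3";

var adapter = new SmartAudioWhisperAdapter();
adapter.AutoRun = false;

// A prompt in the desired style nudges the model toward matching
// capitalization and punctuation.
adapter.SystemPrompt = "Hello, and welcome to the interview. Please speak clearly.";

// Start the transcription explicitly; the result lands in audio.Text.
var message = await adapter.RunAsync(audio);
```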
Initializes a new instance of SmartAudioWhisperAdapter.
control
control
control
Returns: Task<Message>.
Represents a provider that supplies tools.
public class SmartAudioWhisperAdapter : SmartAdapterInitializes a new instance of the EmbeddedDocument class with the specified name, metadata, and embedding.
name
The name of the document. Cannot be null.
metadata
Optional metadata associated with the document. If null, a new instance is created.
embedding
Optional embedding data for the document.
Throws:
ArgumentNullException Thrown when name is null.
Metadata: Gets the metadata of the document.
String: Gets the name of the document.
Creates a deep copy of the current EmbeddedDocument instance.
includeEmbedding
Whether to clone the embeddings.
Returns: EmbeddedDocument. A new EmbeddedDocument instance that is a deep copy of the current instance.
Retrieves the embedding data associated with the document.
Returns: Embedding. The embedding data of the document.
Returns: Matches.
Sets the embedding data for the document.
embedding
The embedding data to set for the document.
Returns: EmbeddedDocument. The current instance of EmbeddedDocument with updated embedding data.
Use this method to update the embedding data of an existing document. This might be necessary when the document's context or representation changes.
matches
Returns: EmbeddedDocument.
Public Class EmbeddedDocument
public class EmbeddedDocument : ICloneable
public class SmartAgentPrompt : SmartPrompt
Initializes a new instance of the SmartAgentPrompt class with optional text.
text
The text associated with the prompt. Default is null.
Executes the agent request asynchronously with the specified sender and message.
adapter
The sender adapter initiating the request.
message
Message to send to the agent for processing.
Returns: Task<Message>. A task representing the asynchronous operation, with a Message as the result.
Throws:
InvalidOperationException Thrown when the SmartAdapter is busy.
ArgumentNullException Thrown when the sender is null.
Subscribes the agent to the specified adapter.
adapter
The adapter to which the agent will be subscribed.
Throws:
ArgumentNullException Thrown when the adapter is null.
Unsubscribes the agent from the specified adapter.
adapter
The adapter from which the agent will be unsubscribed.
Throws:
ArgumentNullException Thrown when the adapter is null.
Represents a provider that supplies tools.
Represents an endpoint for connecting to OpenAI DallE services.
public class OpenAIEndpointDallE : SmartHttpEndpoint
This class is used to interact with the OpenAI DallE API for generating images. It sets up the necessary authentication and model parameters required for the API requests.
Initializes a new instance of the OpenAIEndpointDallE class.
Sets default values for authentication, model, and URL specific to the DallE API.
String: Gets or sets the model used for the DallE API. (Default: "dall-e-3")
String: Gets or sets the URL for the DallE API endpoint. (Default: "https://api.openai.com/v1/images/generations")
Adds user messages to the payload for the API request.
payload
The dynamic payload object to be sent to the API.
session
The current session containing user context.
messages
The list of messages to be processed.
Throws:
ArgumentNullException Thrown when any of the parameters are null.
Reads the assistant's message from the API response.
response
The response received from the API.
message
The message object to populate with the response data.
Parses the response to extract image URLs or base64 encoded images and assigns them to the message.
Throws:
ArgumentNullException Thrown when any of the parameters are null.
Reads the usage information from the API response.
message
The message object to populate with usage data.
reply
The dynamic reply object containing the response data.
This method does not process usage information as it is not returned by the API.
Wisej.AI.Endpoints.AzureAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an endpoint for connecting to Azure AI services, specifically designed to interact with OpenAI models.
This class provides properties and methods to configure and interact with Azure AI endpoints, allowing for operations such as generating text completions and embeddings.
Initializes a new instance of the class with default settings.
String: Gets or sets the API version used for requests. (Default: "2024-02-01")
String: Gets or sets the authentication method for the endpoint. (Default: "api-key")
String: Gets or sets the model used for generating text embeddings. (Default: "text-embedding-3-small")
String: Gets or sets the model used for generating text completions. (Default: "gpt-4o")
String: Gets or sets the URL of the Azure AI endpoint. (Default: "https://YOUR_RESOURCE_NAME.openai.azure.com")
Adds options to the message for a session.
Builds the payload for an embeddings request.
Returns: . An object representing the payload for the embeddings request.
Throws:
Thrown when inputs is null.
Constructs the API URL for text completion requests.
Returns: . A string representing the API URL.
Constructs the API URL for embedding requests.
Returns: . A string representing the embeddings API URL.
Reads the assistant's message from the response.
Throws:
Thrown when response or message is null.
Reads the embeddings from the response.
Returns: . A jagged array of floats representing the embeddings.
Throws:
Thrown when response is null.
Reads the usage statistics from the reply and updates the message.
Wisej.AI.Endpoints.GroqCloudEndpointWhisper
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a connection to GroqCloud speech endpoints for audio transcription.
This class is used to interact with the GroqCloud API for converting audio inputs into text transcriptions using the Whisper model.
Initializes a new instance of the class with default settings.
Sets the default authentication method, model, response format, and API URL.
String: Gets or sets the model used for audio transcription. (Default: "whisper-large-v3")
String: Gets or sets the URL for the GroqCloud API endpoint. (Default: "https://api.groq.com/openai/v1/audio/transcriptions")
Adds user messages to the payload for the API request.
This method extracts the last user message and adds its binary and text content to the payload.
Creates the HTTP content for the request.
Returns: . A object containing the request data.
This method constructs a multipart form data content from the provided data, supporting both binary and string content.
Reads the assistant's message from the API response.
This method parses the response content and updates the message text accordingly.
Reads the usage information from the API reply.
This functionality is not supported in the current implementation.
Wisej.AI.Embeddings.Metadata
Namespace: Wisej.AI.Embeddings
Assembly: Wisej.AI (3.5.0.0)
Represents metadata that can store key-value pairs with case-insensitive keys. Implements the ISerializable interface for JSON serialization.
public class Metadata : ISerializable
Initializes a new instance of the Metadata class with empty data.
Initializes a new instance of the class with the specified initial data.
Initializes a new instance of the class with data from a DynamicObject.
: Gets or sets the value associated with the specified name.
Throws:
Thrown when setting a value that is not of type string, number, boolean, or date.
Creates a new instance of the class that is a copy of the current instance.
Returns: . A new object that is a copy of this instance.
This method creates a deep copy of the current metadata, allowing modifications to the clone without affecting the original.
Retrieves all property names within the metadata.
Returns: . An array of strings containing all property names.
This method returns the keys of all stored metadata entries as an array of strings. If the metadata is empty, it returns an empty array.
Removes metadata entries with the specified keys.
This method will iterate through the provided keys and remove the corresponding entries from the metadata. If the metadata is empty or the keys do not exist, the method does nothing.
Wisej.AI.Adapters.SmartAudioTTSAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Converts the Text of the associated Audio control into lifelike speech.
public class SmartAudioTTSAdapter : SmartAdapter
This class works with the OpenAI TTS platform to generate audio from text. To use this adapter, set the Text property of the control to the desired text. If AutoRun is true, the audio is generated automatically when the text changes. The input text is limited to 4096 characters. If AutoRun is false, call RunAsync with the control to generate the audio. The generated audio file is saved in the configured audio files directory at the root of the project. The file name is a combination of the control's name and a unique hash code, with the extension specified by the audio format.
Initializes a new instance of .
String: Gets or sets the path where audio files are stored. (Default: "~AI\AudioFiles")
The path should be relative to the application's root folder to allow the audio control to download the audio as a regular URL.
String: Gets or sets the audio format for the generated speech. (Default: "mp3")
The default format is "mp3". Other available formats include "opus", "aac", "flac", and "pcm".
: Gets or sets the speed of the generated audio. (Default: 1)
The speed can range from 0.25 to 4.0, with 1.0 as the default value.
String: Gets or sets the voice to use for speech synthesis. (Default: "Alloy")
The available voices are optimized for English. Supported voices include: Alloy (default), Echo, Fable, Onyx, Nova, and Shimmer. New voices can be used as they become available.
Asynchronously generates audio from the text of the specified control.
Returns: . A task representing the asynchronous operation, with a result containing the response.
Throws:
Thrown if the control or its text is null or empty.
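A sketch of the usage described above. The Voice, AudioFormat, and Speed property names are assumptions based on the property descriptions (the original names were lost in extraction); RunAsync is the adapter run method referenced elsewhere in these docs.

```csharp
using Wisej.AI.Adapters;
using Wisej.Web;

// The control's text is the input for speech synthesis (max 4096 chars).
var audio = new Audio();
audio.Text = "Welcome to Wisej.AI text-to-speech.";

var adapter = new SmartAudioTTSAdapter();
adapter.Voice = "Nova";        // Alloy (default), Echo, Fable, Onyx, Nova, Shimmer
adapter.AudioFormat = "mp3";   // also "opus", "aac", "flac", "pcm"
adapter.Speed = 1.0;           // 0.25 to 4.0

// Generate the audio file for the control's text.
var response = await adapter.RunAsync(audio);
```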
Wisej.AI.SmartAdapter ExtendsAttribute
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents an attribute that specifies the type a SmartAdapter extends, allowing for multiple extensions if specified.
public class ExtendsAttribute : Attribute
This attribute can be applied to SmartAdapter classes to indicate that they extend a particular type. It supports specifying the type either by a Type object or by a type name string.
Initializes a new instance of the class with the specified type.
Throws:
Thrown when the type is null.
Initializes a new instance of the class with the specified type name.
Throws:
Thrown when the typeName is null.
: Gets a value indicating whether the adapter can extend multiple controls.
Type: Gets the Type object that the class extends.
If the type was specified by name, this property attempts to resolve it to a Type object.
: Gets the name of the type that the adapter can extend.
Supported AI providers
Wisej.AI is compatible with any LLM provider, whether it's on a public cloud, a private deployment, or a local server. Most providers offer a REST API that is compatible with the OpenAI API. In such cases, if you need to add a new SmartEndpoint, you can either use the SmartOpenAIEndpoint and specify a different URL or create a derived class.
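Both options above can be sketched as follows. The URL property name is inferred from the endpoint descriptions in these docs, and the derived-class pattern mirrors the documented HuggingFaceEndpoint : OpenAIEndpoint; the provider URL is a placeholder.

```csharp
using Wisej.AI.Endpoints;

// Option 1: reuse the OpenAI endpoint and point it at a compatible provider.
var endpoint = new OpenAIEndpoint();
endpoint.URL = "https://my-provider.example.com/v1";

// Option 2: create a derived class that bakes in the provider's settings.
public class MyProviderEndpoint : OpenAIEndpoint
{
    public MyProviderEndpoint()
    {
        this.URL = "https://my-provider.example.com/v1";
    }
}
```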
Typically, private models are hosted exclusively by their owners. In contrast, open-source models can be hosted by various providers and can also be deployed on proprietary hardware.
The currently available implementations include:
Wisej.AI.Endpoints.HuggingFaceJavaScriptEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an endpoint that uses the transformers.js module in the user's browser to provide AI services to Wisej.AI components.
Wisej.AI.Endpoints.AmazonBedrockEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Wisej.AI.Adapters.SmartCopilotAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Turns the control into an AI-powered assistant. It can control and navigate an application, click menu items, navigation bar items, buttons, etc. It can also invoke methods in your applications as needed.
var document = new EmbeddedDocument("SampleDoc");
var newEmbedding = new Embedding();
document.SetEmbedding(newEmbedding);
control
control
control
The control containing the text to convert to audio.
Represents a provider that supplies tools.
Public Class SmartAudioTTSAdapter
Inherits SmartAdapter
type
The type that the class extends.
allowMultiple
Indicates whether multiple extensions are allowed. Default is false.
typeName
The name of the type that the class extends.
allowMultiple
Indicates whether multiple extensions are allowed. Default is false.
Transcribes the audio file of the SourceURL of the associated Audio control to its Text property in the original language of the audio. speech-to-text
Represents a smart calendar adapter that extends the functionality of a SmartAdapter.
Represents an adapter that enhances a chart control with AI features.
Represents an adapter that enhances a ChartJS control with several AI features.
Adds semantic filtering to the ComboBox auto-complete functionality.
Enhances all the controls in the associated container with the AI-powered capability to extract structured data from unstructured text.
Represents a document adapter that can perform AI tasks using a document as a data source and interact with the user through a ChatBox control.
Converts unstructured text into a structured .NET object.
Represents an adapter for a PictureBox that generates images based on a description using the OpenAI DALL-E endpoint.
Represents an adapter that generates a data set from a database schema and a user-provided description.
Represents a smart adapter that provides real-time data processing capabilities for use with the OpenAIEndpointRealtime endpoint.
Enhances the ChatBox control to allow seamless PDF report queries using an AI provider.
Enhances a TextBox control with several AI features, including suggestions, translation, and auto-correction.
message
The message to which options are added.
session
The session context.
inputs
The input strings for which embeddings are requested.
response
The response from the API.
message
The message object to populate.
response
The response from the API.
message
The message object to update.
reply
The dynamic reply object containing usage data.
public class AzureAIEndpoint : SmartHttpEndpoint
payload
The payload to be sent to the API.
session
The current session containing user data.
messages
The list of messages to be processed.
data
The data to be included in the request.
response
The response received from the API.
message
The message object to populate with the response data.
message
The message object to update with usage data.
reply
The reply data from the API.
public class GroqCloudEndpointWhisper : SmartHttpEndpoint
data
Initial data to populate the metadata. If data is null, an empty metadata is created.
data
A DynamicObject containing initial data. If data is null, an empty metadata is created.
keys
An array of keys identifying the metadata entries to remove.
var original = new Metadata();
var clone = original.Clone();
var metadata = new Metadata();
var properties = metadata.GetProperties();
var metadata = new Metadata();
metadata["key1"] = "value1";
metadata.Remove("key1");
OpenAI
1, 3, 4, 7
OpenAITTS
1, 5
OpenAIDallE
1, 7
OpenAIWhisper
1, 6
Notes:
1. Proprietary models
2. Open source models
3. Embeddings
4. Vision
5. Text to Speech
6. Speech to Text
7. Imaging
By "Local Hosting," we refer to using a server to provide AI features outside the typical cloud services. This server could be located on-premises, housed in a data center, or hosted as a virtual machine instance with any cloud provider. This setup offers flexibility in deploying AI solutions by allowing organizations to have more control over their data and resources while still benefiting from sophisticated AI capabilities.
Wisej.AI can seamlessly integrate with any non-cloud server, providing flexibility and adaptability in deploying AI features. One of the most common types of servers used for this purpose is Ollama, which allows for efficient hosting of AI models and services. This capability ensures that you can leverage Wisej.AI's advanced functionalities, whether your infrastructure is cloud-based or locally hosted.
To use an Ollama server, instantiate the OllamaEndpoint and provide the URL of your server:
To use other local servers, such as vLLM, LocalAI, LM Studio, and others, you can most likely use or extend the OpenAIEndpoint. This gives you the flexibility to integrate a variety of servers seamlessly.
Another excellent local server option is LocalAI. It offers an API compatible with OpenAI and supports a comprehensive range of features. These features include text completion, embedding, image generation, text-to-speech, speech-to-text, and re-ranking.
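As a sketch of the approach above, an OpenAI-compatible local server can usually be reached by pointing the stock OpenAIEndpoint at its base URL. The port and model name below are assumptions; substitute whatever your own server exposes:

```csharp
// Sketch: reusing OpenAIEndpoint against an OpenAI-compatible local server
// (vLLM, LM Studio, LocalAI, ...). The URL and model name are hypothetical;
// use the values your own server reports.
var endpoint = new OpenAIEndpoint
{
    URL = "http://localhost:1234/v1",   // hypothetical local base URL
    Model = "llama-3-8b-instruct"       // hypothetical model identifier
};
```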
Public Class HuggingFaceJavaScriptEndpoint
Inherits SmartEndpoint
This class is part of the SmartEndpoint category and is designed to interact with the Hugging Face transformers.js library. It provides various AI services, such as text generation and translation, by utilizing different pipelines.
Initializes a new instance of the HuggingFaceJavaScriptEndpoint.
String: Gets the unique identifier for the component.
String: Gets or sets the model used by the endpoint. (Default: null)
TransformersPipeline: Gets or sets the pipeline used by the endpoint. (Default: None)
The pipeline determines the type of AI service provided, such as translation or text generation.
String: Gets or sets the subtask for the selected pipeline. (Default: "")
This property allows further specification of the task within the chosen pipeline.
String: Gets or sets the source URL for the transformers.js library. (Default: "https://cdn.jsdelivr.net/npm/@xenova/transformers")
Asynchronously processes a list of messages and returns a response message.
session
The current smart session.
messages
A list of messages to process.
Returns: Task<Message>. A task that represents the asynchronous operation. The task result contains the response message.
Throws:
ArgumentNullException Thrown when session or messages is null.
NotSupportedException Thrown when the pipeline is not supported.
Releases the unmanaged resources used by the component and optionally releases the managed resources.
disposing
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Asynchronously calculates the similarity between a query and an array of text.
query
The query string to compare.
text
An array of text strings to compare against the query.
Returns: Task<Single[]>. A task that represents the asynchronous operation. The task result contains an array of similarity scores.
Reads the assistant message from the response and updates the message.
response
The response containing the assistant message.
message
The message to update with the assistant's response.
Throws:
ArgumentNullException Thrown when response or message is null.
Reads the usage information from the reply and updates the message.
message
The message to update with usage information.
reply
The reply containing usage information.
public class HuggingFaceJavaScriptEndpoint : SmartEndpoint
Adds or replaces a service in the service provider with the specified implementation type.
TService
The type of service to add or replace.
TImplementation
The type of the implementation to use.
services
The service provider to modify.
lifetime
The lifetime of the service.
Returns: ServiceProvider. The modified service provider.
Adds or replaces a service in the service provider with the specified implementation instance.
TService
The type of service to add or replace.
services
The service provider to modify.
implementation
The implementation instance to use.
lifetime
The lifetime of the service.
Returns: ServiceProvider. The modified service provider.
Adds or replaces a service in the service provider using a factory method.
TService
The type of service to add or replace.
services
The service provider to modify.
factory
The factory method to create the service instance.
lifetime
The lifetime of the service.
Returns: ServiceProvider. The modified service provider.
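The three overloads above can be sketched as follows. The IDocumentConversionService and MyPdfConversionService names are hypothetical placeholders, and the ServiceLifetime values are assumptions based on the lifetime parameter described above:

```csharp
// Sketch: swapping in a custom implementation of a registered service.
// Type names are hypothetical; the overload shapes follow the reference.

// 1. By implementation type:
services.AddOrReplace<IDocumentConversionService, MyPdfConversionService>(
    ServiceLifetime.Singleton);

// 2. By ready-made instance:
services.AddOrReplace<IDocumentConversionService>(
    new MyPdfConversionService(), ServiceLifetime.Singleton);

// 3. By factory method:
services.AddOrReplace<IDocumentConversionService>(
    sp => new MyPdfConversionService(), ServiceLifetime.Transient);
```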
Computes clusters from the given embeddings using the K-Means clustering algorithm.
embeddings
The embeddings to cluster.
count
The number of clusters to create.
maxDivergence
The maximum divergence allowed for convergence.
Returns: ValueTuple`2[]. An array of tuples containing the centroid and vectors of each cluster.
K-Means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. This method partitions the embeddings into count clusters.
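A minimal usage sketch, assuming ComputeClusters is exposed as an extension method on float[][] (the maxDivergence value here is an arbitrary convergence threshold):

```csharp
// Sketch: clustering a few 2D embeddings into two groups.
float[][] embeddings =
{
    new[] { 0.1f, 0.9f }, new[] { 0.2f, 0.8f },   // one natural group
    new[] { 0.9f, 0.1f }, new[] { 0.8f, 0.2f }    // a second group
};
var clusters = embeddings.ComputeClusters(count: 2, maxDivergence: 0.001f);
foreach (var (centroid, vectors) in clusters)
    Console.WriteLine($"Cluster at ({centroid[0]}, {centroid[1]}) holds {vectors.Length} vectors.");
```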
Calculates the cosine similarity between two vectors.
vectorA
The first vector.
vectorB
The second vector.
Returns: Single. The cosine similarity between the two vectors.
Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space. It is defined as the cosine of the angle between the two vectors.
Calculates the cosine similarity between a vector and an array of vectors.
vectorA
The vector to compare.
vectorB
The array of vectors to compare against.
Returns: Single[]. An array of cosine similarity values for each vector in the array.
This method extends the single vector cosine similarity calculation to handle multiple vectors, returning an array of similarity scores.
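The math behind these two helpers is compact. A minimal standalone version (not the library's own code) looks like this:

```csharp
using System;

static class VectorMath
{
    // Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
    public static float CosineSimilarity(float[] a, float[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");

        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return (float)(dot / (Math.Sqrt(normA) * Math.Sqrt(normB)));
    }

    // One-vs-many variant, mirroring the array overload described above.
    public static float[] CosineSimilarity(float[] a, float[][] many)
    {
        var result = new float[many.Length];
        for (int i = 0; i < many.Length; i++)
            result[i] = CosineSimilarity(a, many[i]);
        return result;
    }
}
```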
Converts the first character of the string to lowercase, making it camel case.
text
The string to convert.
Returns: String. The camel case version of the string.
This method is useful for converting PascalCase strings to camelCase, which is often used in JSON serialization and other contexts.
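A standalone equivalent of this conversion (the library's own method name may differ; this sketch only illustrates the behavior described above):

```csharp
static class StringCaseExtensions
{
    // "BackColor" -> "backColor"; null/empty input is returned unchanged.
    public static string ToCamelCase(this string text)
    {
        if (string.IsNullOrEmpty(text))
            return text;
        return char.ToLowerInvariant(text[0]) + text.Substring(1);
    }
}
```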
Public Class SmartExtensions
public class SmartExtensions
Represents an endpoint for connecting to Amazon Bedrock services.
public class AmazonBedrockEndpoint : SmartHttpEndpoint
Public Class AmazonBedrockEndpoint
Inherits SmartHttpEndpoint
This class is used to configure and manage connections to Amazon Bedrock endpoints, specifically for interacting with the Anthropic API.
Initializes a new instance of the AmazonBedrockEndpoint class.
Sets default values for authentication, model, URL, and headers.
String: Gets or sets the authentication method for the endpoint. (Default: "x-api-key")
String: Gets or sets the model used for the endpoint. (Default: "claude-3-opus-20240229")
String: Gets or sets the URL for the endpoint. (Default: "https://api.anthropic.com/v1/messages")
Adds messages to the payload.
payload
The payload to which messages are added.
session
The current session context.
messages
The list of messages to add.
Ensures the system message is complete and prepares the payload for sending.
Adds options to the message payload.
message
The message payload to modify.
session
The current session context.
Sets default values for temperature and max tokens in the message payload.
Builds a message object from a Message instance.
message
The message to convert.
Returns: Object. A dynamic object representing the message.
Handles different input types, including text and images.
Reads the assistant's message from the response.
response
The response containing the assistant's message.
message
The message object to populate.
Parses the response stream to extract the assistant's message content.
Reads the usage information from the reply.
message
The message object to update with usage data.
reply
The dynamic reply object containing usage information.
public class SmartCopilotAdapter : SmartAdapter
Public Class SmartCopilotAdapter
Inherits SmartAdapter
Works with:
AzureAI/OpenAI gpt-4
AzureAI/OpenAI gpt-4o
AzureAI/OpenAI gpt-3.5
AzureAI/Anthropic Claude
Google Gemini
Llama3:8b and 70b
When the user's instructions contain extra information (e.g. "add Microsoft as a new client"), the additional parameters are extracted, processed, and made available in the Parameters property delivered with the ExecuteAction event arguments. You are not limited to the predefined actions in the application: when you add a method decorated with the ToolAttribute, the AI can invoke it when necessary. This is a simple example that sends an email when prompted by the user:
Initializes a new instance of the SmartCopilotAdapter class.
String: Gets or sets the icon of the AI bot. (Default: "resource.wx/Wisej.AI/Icons/wisej-avatar.svg")
String: Gets or sets the name of the AI bot. (Default: "Wisej.AI")
User: Gets the User associated with the AI bot.
control
control
Raises the ExecuteAction event.
e
The instance containing the event data.
Executes the core logic asynchronously.
control
The control associated with the operation.
Returns: Task<Message>. A task representing the asynchronous operation.
ExecuteActionEventHandler Occurs when an action is executed.
Represents a provider that supplies tools.
Initializes a new instance of the Metrics class.
Int32: Gets the number of calls.
Int32: Gets the number of input tokens.
Int32: Gets the number of output tokens.
Adds the specified number of input tokens, output tokens, and call count to the current metrics.
inputTokens
The number of input tokens to add.
outputTokens
The number of output tokens to add.
callCount
The number of calls to add. Default is 1.
Adds the metrics from another Metrics instance to the current metrics.
usage
The instance whose values are to be added.
This method updates the current metrics by adding the values from another Metrics instance.
Resets the input tokens, output tokens, and call count to 0.
Subtracts the metrics from another Metrics instance from the current metrics.
usage
The instance whose values are to be subtracted.
This method updates the current metrics by subtracting the values from another Metrics instance.
Public Class Metrics
public class Metrics
Wisej.AI.Adapters.SmartReportAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Enhances the ChatBox control to allow seamless PDF report queries using an AI provider.
public class SmartReportAdapter : SmartAdapter
Utilizes these services: . The class extends the functionality of a ChatBox to provide capabilities for reading and interpreting reports. It utilizes prompts and sessions to interact with the user and process report data.
Initializes a new instance of the class.
: Gets or sets a value indicating whether the history of the conversation is cleared after each response. (Default: False)
: Gets or sets the avatar image source for the bot. (Default: "resource.wx/Wisej.AI/Icons/wisej-avatar.svg")
: Gets or sets the name of the bot. (Default: "Wisej.AI")
: Gets the control associated with this adapter.
: Gets or sets the document conversion service used for converting documents to text.
: Gets or sets the prompt to execute. Used only when the adapter is not connected to a . (Default: null)
: Gets or sets the PDF source stream for the report.
: Gets the user associated with the bot.
Returns: .
Throws:
When question is null.
Evaluates a mathematical expression asynchronously.
Returns: . A task that represents the asynchronous operation. The task result contains the evaluated result of the expression.
This method uses the browser's evaluation engine to compute the result of the given expression.
Raises the event.
Raises the event.
Resets the conversation history.
Executes the core logic asynchronously for running the session.
Returns: . A task representing the asynchronous operation.
Occurs when an answer is received from the session.
Occurs when a report is being read.
Wisej.AI.Adapters.SmartChatBoxAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Turns the ChatBox control into an AI-powered bot. It can answer any question (depending on the AI model being used) and can invoke methods in your application as needed (see SmartTool).
public class SmartChatBoxAdapter : SmartAdapter
Works with:
AzureAI/OpenAI gpt-4
AzureAI/OpenAI gpt-4o
AzureAI/OpenAI gpt-3.5
AzureAI/Anthropic Claude
You are not limited to what the model can answer. When you add a method decorated with the ToolAttribute, the AI can invoke it when necessary to retrieve information or take an action. This is a simple example that provides current date/time knowledge to the AI bot, including the name of the day:
When the AI needs to know the current date/time, it will invoke this method. For example: "When is the next national holiday in {country}?". The next example shows how to allow the AI to take an action.
When the AI wants to change the header color, it will invoke this method with the correct parameter. Use this feature with great caution: a method like "LaunchThermonuclearStrike()" must never be exposed as a tool.
Initializes a new instance of .
: Gets or sets a value indicating whether the history of the conversation is cleared after each response. (Default: False)
: Icon of the AI bot. (Default: "resource.wx/Wisej.AI/Icons/wisej-avatar.svg")
: Name of the AI bot. (Default: "Wisej.AI")
:
: The User associated with the AI bot.
Returns: .
Returns: .
Returns: .
Resets the conversation history.
Returns: .
Wisej.AI.Endpoints.OllamaEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an endpoint that connects to Ollama services, providing functionalities for chat and embeddings.
This class extends the SmartHttpEndpoint to interact with Ollama endpoints. It provides methods to construct API URLs, add options to messages, and handle responses.
Initializes a new instance of the class with default settings.
: Gets or sets the embedding model used by the endpoint. (Default: "all-minilm")
: Gets or sets the maximum number of tokens the context window for the specified can hold. (Default: 8192)
Adds options to the message for the current session.
Throws:
Thrown when the message is null.
Asynchronously requests embeddings for the specified inputs.
Returns: . A task representing the asynchronous operation, with a result of type .
Throws:
Thrown when the inputs are null.
Builds the payload for the embeddings request.
Returns: . An object representing the payload for the embeddings request.
Throws:
Thrown when the inputs are null.
Constructs the API URL for chat interactions.
Returns: . A string representing the chat API URL.
Constructs the API URL for embeddings interactions.
Returns: . A string representing the embeddings API URL.
Reads the assistant's message from the response and updates the message object.
Throws:
Thrown when the response or message is null.
Reads the embeddings response and extracts the embeddings data.
Returns: . A jagged array of floats representing the embeddings.
Throws:
Thrown when the response is null.
Reads the usage statistics from the reply and updates the message usage.
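A usage sketch for the endpoint described above. The URL uses Ollama's default port (11434); the method name RequestEmbeddingsAsync is a plausible name inferred from the description above and should be verified against the actual API:

```csharp
// Sketch: requesting embeddings from a local Ollama server.
var ollama = new OllamaEndpoint
{
    URL = "http://localhost:11434",
    EmbeddingsModel = "all-minilm"   // the documented default
};
float[][] vectors = await ollama.RequestEmbeddingsAsync(
    new[] { "First chunk of text.", "Second chunk of text." });
```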
Wisej.AI.Endpoints.AnthropicEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an endpoint for connecting to Anthropic services.
This class is used to configure and manage connections to Anthropic endpoints, providing methods to send messages and handle responses. It inherits from SmartHttpEndpoint.
Initializes a new instance of the class.
Sets default values for authentication, model, URL, and headers.
: Gets or sets the authentication method for the endpoint. (Default: "x-api-key")
: Gets or sets the model used by the Anthropic endpoint. (Default: "claude-3-5-sonnet-20241022")
: Gets or sets the URL for the Anthropic API endpoint. (Default: "https://api.anthropic.com/v1/messages")
Adds messages to the payload for the Anthropic API request.
Prepares the system message and other messages for the API request.
Throws:
Thrown if session, payload, or messages is null.
Adds options to the message before sending it to the Anthropic endpoint.
Sets default values for temperature and max tokens in the message.
Builds a message object for the Anthropic API.
Returns: . An object representing the message for the API.
Handles different input types, including text and images.
Throws:
Thrown if message is null.
Reads the assistant's message from the response.
Parses the response content and updates the message text.
Throws:
Thrown if response or message is null.
Reads the usage information from the API reply.
Wisej.AI.SmartTool ToolContext
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents the context in which a smart tool operates, providing properties and methods to manage tool execution.
public class ToolContext
Public Class ToolContext
The ToolContext class encapsulates the state and behavior required to execute a smart tool, including its arguments, session, and endpoint.
Initializes a new instance of .
: Gets a value indicating whether the tool execution should be aborted.
: Gets the arguments for the tool execution.
: Gets the endpoint associated with the tool context.
: Gets the unique identifier for the tool context.
: Gets the current iteration count for the tool execution.
: Gets or sets the return value of the tool execution.
: Gets the session associated with the tool context.
: Gets the tool associated with the context.
Invokes the tool synchronously.
Returns: . The result of the tool invocation.
Invokes the tool asynchronously.
Returns: . A task representing the asynchronous operation.
This method handles exceptions and manages the tool invocation lifecycle, including raising events before and after invocation.
Wisej.AI.Adapters.SmartRealtimeAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents a smart adapter that provides real-time data processing capabilities for use with the endpoint.
Wisej.AI.Endpoints.OpenAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
var ollama = new OllamaEndpoint { URL = "http://localhost:8080" };
[SmartTool.Tool]
[Description("Sends an email message.")]
public void SendEmailMessage(
[Description("Email address of the recipient.")]
string destinationEmail,
[Description("Subject line.")]
string subject,
[Description("Message text.")]
string message) {
// code to send an email, or open a dialog box.
}
question
expression
The mathematical expression to evaluate.
e
The AnswerReceivedArgs instance containing the event data.
control
control
e
The ReadingReportEventArgs instance containing the event data.
removeMessages
Indicates whether all messages from the ChatBox control should also be removed.
control
The control associated with the session.
Represents a provider that supplies tools.
Public Class SmartReportAdapter
Inherits SmartAdapter
session
The session invoking the tool.
message
The dynamic message object to which options are added.
session
The current session containing model options.
inputs
An array of input strings for which embeddings are requested.
inputs
An array of input strings for which the payload is built.
response
The response received from the API.
message
The message object to be updated with the assistant's response.
response
The response received from the API.
message
The message object to be updated with usage statistics.
reply
The dynamic reply object containing usage information.
Public Class OllamaEndpoint
Inherits SmartHttpEndpoint
public class OllamaEndpoint : SmartHttpEndpoint
payload
The payload to be sent.
session
The current session.
messages
The list of messages to include.
message
The message to be sent.
session
The current session.
message
The message to be converted.
response
The response from the API.
message
The message object to populate.
message
The message object to update.
reply
The dynamic reply object from the API.
Public Class AnthropicEndpoint
Inherits SmartHttpEndpoint
public class AnthropicEndpoint : SmartHttpEndpoint
Llama3:8b and 70b
answer
sources
args
control
control
args
text
text
removeMessages
Indicates whether all messages from the ChatBox control should also be removed.
control
Represents a provider that supplies tools.
Public Class SmartChatBoxAdapter
Inherits SmartAdapter
[SmartTool.Tool]
[Description("Returns the current date and time.")]
public string GetCurrentDateTime() {
return DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString();
}
[SmartTool.Tool]
[Description("Changes the background color of the application header.")]
public void SetHeaderColor(
[Description("Name of the new color. Use @toolbar to reset to the original color.")]
string color)
{
((MainPage)Application.MainPage).panelHeader.BackColor = Color.FromName(color);
}
Public Class SmartRealtimeAdapter
Inherits SmartAdapter
The SmartRealtimeAdapter class extends the SmartAdapter base class to enable real-time data handling and integration with OpenAI's real-time endpoint. This adapter is designed to facilitate seamless communication and data exchange in scenarios where immediate processing and response are required.
Initializes a new instance of the SmartRealtimeAdapter class.
This constructor sets up the SmartRealtimeAdapter by initializing the internal prompt and subscribing to the application's refresh event. The prompt is initialized with a default label, and the adapter will respond to application-wide refresh events for real-time updates.
Boolean: Gets or sets a value indicating whether the component accepts voice input from the user. (Default: True)
When set to true (the default value), the component is enabled to receive and process voice input from the user. Setting this property to false disables voice input functionality for the component.
Boolean: Gets a value indicating whether the component is currently listening for the user's voice.
The Listening property reflects the current listening state of the component.
Boolean: Gets or sets a value indicating whether the component is muted. (Default: False)
When set to true, the component is muted and may not produce sound or notifications. The default value is false.
Boolean: Gets or sets a value indicating whether transcription functionality is enabled. (Default: False)
When set to true, transcription features are activated. The default value is false.
String: Gets or sets the voice of the model. (Default: "Alloy")
Supported voices include: Alloy (default), Ash, Ballad, Coral, Echo, Sage, Shimmer, Verse. New voices can be used as they become available.
T
systemPrompt
Returns: T.
Raises the AnswerReceived event.
e
The instance containing the event data.
This method invokes the AnswerReceived event, passing the specified event arguments to all registered event handlers.
Raises the EnabledChanged event.
e
An instance that contains the event data.
This method is called to trigger the EnabledChanged event. Derived classes can override this method to provide additional logic when the event is raised.
Raises the ListeningChanged event.
e
An instance containing the event data.
This method is called to notify subscribers that the listening state has changed. Derived classes can override this method to provide custom event data or additional logic.
Raises the MutedChanged event.
e
An instance containing the event data.
This method is called to notify subscribers that the muted state has changed. Derived classes can override this method to provide custom event data or additional logic.
Raises the TranscriptionReceived event.
e
The instance containing the event data.
This protected virtual method allows derived classes to trigger the TranscriptionReceived event. Override this method to provide custom event invocation logic.
Resets the current conversation with the OpenAI Realtime API endpoint.
This method clears the ongoing conversation state, allowing a new conversation to begin with the OpenAI endpoint. It is useful when you want to discard the current context and start fresh.
Starts capturing and processing user voice input through the OpenAI Realtime API.
Call this method to begin listening for user voice input. The component will capture audio from the input device, transmit it to the OpenAI Realtime API, and process the response in real time. This enables interactive voice-driven features within your application.
Stops capturing and processing user voice input.
Call this method to stop listening for user voice input. The component will cease capturing audio from the input device and terminate any ongoing communication with the OpenAI Realtime API. This is useful for conserving resources or when voice input is no longer required.
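The start/stop pattern above can be sketched as a simple push-to-talk toggle. The Listening property follows the reference above, but the exact start/stop method names used here are assumptions:

```csharp
// Sketch: toggling voice capture on a SmartRealtimeAdapter.
void ToggleVoice(SmartRealtimeAdapter adapter)
{
    if (adapter.Listening)
        adapter.StopListening();   // assumed method name
    else
        adapter.StartListening();  // assumed method name
}
```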
AnswerReceivedEventHandler Occurs when an answer is received.
Subscribe to this event to handle actions when an answer is received. The event provides AnswerReceivedEventArgs containing details about the received answer.
EventHandler Occurs when the enabled state of the object changes.
Subscribe to this event to be notified whenever the enabled state of the object is modified. The event provides standard EventArgs data.
EventHandler Occurs when the listening state of the component changes.
Subscribe to this event to be notified when the listening state is updated, such as when the component starts or stops listening for input.
EventHandler Occurs when the muted state of the component changes.
Subscribe to this event to be notified when the muted state is updated, such as when the component is muted or unmuted.
TranscriptionReceivedEventHandler Occurs when a transcription is received.
Subscribe to this event to handle actions when a new transcription is received. The event provides TranscriptionReceivedEventArgs containing details about the transcription.
Represents a provider that supplies tools.
public class SmartRealtimeAdapter : SmartAdapter
Boolean: Gets a value indicating whether the session has tools.
ParameterCollection: Gets the collection of parameters for the session.
ToolCollection: Gets the collection of tools available in the session.
Metrics: Gets the usage metrics for the session.
Clears all tools from the smart prompt.
Raises the AfterInvokeTool event.
args
The instance containing the event data.
Raises the AfterResponseReceived event.
args
The instance containing the event data.
Raises the BeforeInvokeTool event.
args
The instance containing the event data.
Raises the BeforeSendRequest event.
args
The instance containing the event data.
Raises the ConvertParameter event.
args
The instance containing the event data.
Raises the Done event.
e
An that contains the event data.
Raises the Error event.
args
The instance containing the event data.
Raises the PrepareMessages event.
e
An that contains the event data.
Raises the Start event.
e
An that contains the event data.
InvokeToolEventHandler Occurs after a tool is invoked.
MessagesEventHandler Occurs after a response is received.
InvokeToolEventHandler Occurs before a tool is invoked.
MessagesEventHandler Occurs before a request is sent.
ConvertParameterEventHandler Occurs when a parameter needs to be converted.
EventHandler Occurs when the session is done processing a question.
ErrorEventHandler Occurs when an error is encountered.
MessagesEventHandler Occurs after the messages have been prepared and before they are sent to the AI.
EventHandler Occurs when the session starts processing a question.
Represents a smart parallel prompt that can execute multiple tasks concurrently.
Represents a SmartHub component that provides AI capabilities to controls within a container.
Represents a smart prompt component that can process and manage prompts with tools and parameters.
Represents a session that manages interactions with a smart hub and endpoint, handling prompts, messages, and tools within the session context.
Public Class SmartObject
Inherits Component
public class SmartObject : Component
Represents an endpoint for connecting to OpenAI services.
public class OpenAIEndpoint : SmartHttpEndpoint
Public Class OpenAIEndpoint
Inherits SmartHttpEndpoint
This class is used to interact with OpenAI's API, providing methods to send requests and process responses for both chat completions and embeddings.
Initializes a new instance of the OpenAIEndpoint class with default settings.
The default authentication is set to "Bearer".
String: Gets or sets the model used for embeddings. (Default: "text-embedding-3-small")
String: Gets or sets the model used for chat completions. (Default: "gpt-4o")
String: Gets or sets the base URL for the OpenAI API. (Default: "https://api.openai.com/v1")
Boolean: (Default: True)
Adds options to the message for the API request.
message
The message to be sent to the API.
session
The current session context.
Sets the temperature to 0.0 and max tokens to the value of MaxOutputTokens.
Builds the payload for an embeddings request.
inputs
The input strings to be embedded.
Returns: Object. The constructed payload for the embeddings request.
Throws:
ArgumentNullException Thrown when the inputs are null.
Builds the message payload for the API request.
message
The message object containing the content to be sent.
Returns: Object. The constructed message payload.
Throws:
ArgumentNullException Thrown when the message is null.
Constructs the API URL for chat completions.
Returns: String. The full API URL for chat completions.
Constructs the API URL for embeddings.
Returns: String. The full API URL for embeddings.
Reads the embeddings from the API response.
response
The response received from the API.
Returns: Single[][]. An array of float arrays representing the embeddings.
Throws:
ArgumentNullException Thrown when the response is null.
Represents a connection to Cerebras endpoints, inheriting from OpenAIEndpoint.
Represents a connection to DeepSeek endpoints, inheriting from OpenAIEndpoint.
https://console.groq.com/docs/openai
Represents a connection to HuggingFace serverless endpoints for model inference and embeddings.
Represents a connection to LocalAI endpoints, providing access to various AI models and services.
Represents a connection to NVIDIA AI endpoints, providing access to various AI models and services.
OpenAIRealtime
1, 5, 6
NVIDIA
2, 3, 4
Azure AI
1, 2, 3, 4, 5, 6, 7
Ollama
2, 3, 4
Anthropic
1, 3, 4
GoogleAi
1, 2, 3, 4
HuggingFace
2, 3, 4
SambaNova
2, 4
Together.AI
2, 3, 4
X.AI
1, 4
Cerebras
2, 4
GroqCloud
2
GroqCloudWhisper
1, 6
Amazon Bedrock
2, 3, 4
LocalAI
2, 3, 4, 7
LocalAITTS
2, 5
LocalAIWhisper
2, 6
LocalAIImageGen
2, 7
Wisej.AI.Adapters.SmartDocumentAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Represents a document adapter that can perform AI tasks using a document as a data source and interact with the user through a ChatBox control.
public class SmartDocumentAdapter : SmartAdapter
This class integrates with the to search within the document being managed. It provides properties to configure the bot's appearance and document handling, and events to manage responses.
Initializes a new instance of the class.
: Gets or sets a value indicating whether the history of the conversation is cleared after each response. (Default: False)
: Gets or sets the avatar image source of the bot. (Default: "resource.wx/Wisej.AI/Icons/wisej-avatar.svg")
: Gets or sets the name of the bot. (Default: "Wisej.AI")
: Gets the associated ChatBox control.
: Gets or sets the document conversion service used for converting documents to text.
: Gets or sets the file path of the document. (Default: null)
: Gets or sets the file type, e.g. .docx, .pdf, .txt. If null, it is detected automatically. (Default: null)
: Gets or sets the maximum number of vector clusters to generate when performing summarization tasks. (Default: 5)
: Gets or sets the minimum similarity threshold. (Default: 0.25)
: Gets or sets the file path of the document. (Default: null)
: Gets or sets the text splitter service used for splitting text into smaller chunks.
: Gets or sets the number of top results to retrieve. (Default: 10)
: Gets the user representing the bot.
Returns: .
Throws:
When question is null.
Raises the event.
Returns: .
Returns: .
Resets the conversation history.
Executes the core logic asynchronously.
Returns: . A task representing the asynchronous operation.
Occurs when an answer is received.
Wisej.AI.SmartPrompt
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents a smart prompt component that can process and manage prompts with tools and parameters.
public class SmartPrompt : SmartObject, IToolProvider, ICloneable
The SmartPrompt class provides functionality to manage prompts, tools, and parameters. It supports cloning and asynchronous operations to ask questions using a smart hub or endpoint.
Initializes a new instance of the class with the specified text.
: Gets the collection of prompt dictionaries.
The prompt dictionaries are loaded from the AI directory and embedded resources.
: Gets or sets the text of the smart prompt. (Default: null)
The text is resolved using the method.
Asynchronously asks a question using the specified smart hub.
Returns: . A task representing the asynchronous operation, with a result.
Throws:
Thrown when the hub or question is null.
Asynchronously asks a question using the specified smart endpoint.
Returns: . A task representing the asynchronous operation, with a result.
Throws:
Thrown when the endpoint or question is null.
Creates a new instance of the class that is a copy of the current instance.
Returns: . A new object that is a copy of this instance.
Removes a tool from the smart prompt.
Returns: . The current instance.
Throws:
Thrown when the tool is null.
Removes the tools from the specified object.
Returns: . The current instance.
Resolves the prompt text for the specified key.
Returns: . The resolved prompt text, or the key if not found.
Saves the prompts to the specified file path.
Adds a tool to the smart prompt.
Returns: . The current instance.
Throws:
Thrown when the tool is null.
Uses the tools from the specified target object.
Returns: . The current instance.
Wisej.AI.Endpoints.GoogleAIEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a connection to Google AI endpoints for generating content and embeddings.
This class is used to interact with Google AI services, allowing for content generation and text embeddings. It extends the SmartHttpEndpoint to provide specific functionality for Google AI.
Initializes a new instance of the GoogleAIEndpoint class with default settings.
: Gets or sets the authentication token for accessing the Google AI endpoint. (Default: null)
: Gets or sets the model used for text embeddings. (Default: "text-embedding-004")
: Gets or sets the model used for content generation. (Default: "gemini-1.5-pro-latest")
: Gets or sets the base URL for the Google AI endpoint. (Default: "https://generativelanguage.googleapis.com/v1beta/models")
Adds messages to the payload for processing.
Throws:
Thrown when session, payload, or messages is null.
Adds options to the message for content generation.
This method configures the generation parameters such as temperature and max output tokens. It also incorporates any additional model options from the session.
Builds the payload for embedding requests.
Returns: . The constructed payload object for embedding requests.
Throws:
Thrown when inputs is null.
Constructs the API URL for content generation requests.
Returns: . The constructed API URL as a string.
Constructs the API URL for embedding requests.
Returns: . The constructed embeddings URL as a string.
Reads the assistant's message from the response.
Throws:
Thrown when response or message is null.
Reads the embeddings response and extracts the embedding vectors.
Returns: . An array of float arrays representing the embedding vectors.
Throws:
Thrown when response is null.
Reads the usage data from the reply and updates the message usage.
How to use vector storage and queries
Wisej.AI can seamlessly integrate with any vector database through the service implementation of the IEmbeddingStorageService interface. While you don't need to interact with this interface directly in your code, any Wisej.AI tools or methods that require a vector database will automatically retrieve the current implementation of IEmbeddingStorageService.
Specifically, both the DocumentSearchTool and the SmartHub.IngestDocumentAsync() method rely on the registered IEmbeddingStorageService implementation.
Wisej.AI.Adapters.SmartObjectAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Converts unstructured text into a structured .NET object.
Wisej.AI.SmartParallelPrompt
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
question
e
The AnswerReceivedArgs instance containing the event data.
control
control
text
text
removeMessages
Indicates whether all messages from the ChatBox control should also be removed.
control
The control associated with the operation.
Represents a provider that supplies tools.
Public Class SmartDocumentAdapter
Inherits SmartAdapter
Represents a connection to SambaNova endpoints, inheriting from OpenAIEndpoint.
Represents a connection to TogetherAI endpoints, providing access to various AI models and services.
Represents a connection to X.AI endpoints, extending the functionality of OpenAIEndpoint.
payload
The payload to which messages are added.
session
The current session.
messages
The list of messages to be processed.
message
The message to which options are added.
session
The current session containing model options.
inputs
The array of input strings to be embedded.
response
The response containing the assistant's message.
message
The message object to populate with the assistant's response.
response
The response containing the embeddings data.
message
The message to update with usage data.
reply
The reply containing usage metadata.
Public Class GoogleAIEndpoint
Inherits SmartHttpEndpoint
public class GoogleAIEndpoint : SmartHttpEndpoint
An important concept in vector databases is "collections." Wisej.AI utilizes collections to organize embedded documents into logical groups, akin to how tables are used in databases. Additionally, the name of a document may include a virtual path, similar to a namespace, preceding the document's name.
For instance, to store two documents with the same name but in different "folders," you can use a naming convention like this:
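For example (these names are illustrative), the virtual path prefix distinguishes two documents that share the same file name:

```
invoices/2024/summary.pdf
reports/2024/summary.pdf
```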
If the code does not specify a collection name, Wisej.AI defaults to using the name "default" (in lowercase). The example below illustrates how to store documents in different collections.
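A hedged sketch of what this might look like; the exact IngestDocumentAsync overload and the collection parameter name are assumptions, so adapt them to the actual Wisej.AI API:

```csharp
// Hypothetical sketch: ingest documents into two separate collections.
// A collection groups embedded documents, akin to a database table.
await this.smartHub1.IngestDocumentAsync("manual.pdf", collection: "products");
await this.smartHub1.IngestDocumentAsync("faq.pdf", collection: "support");

// Without an explicit collection name, the "default" collection is used.
await this.smartHub1.IngestDocumentAsync("readme.txt");
```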
Vector databases typically manage text chunks along with their corresponding vectors, while any additional information is stored in a general metadata field. Wisej.AI automatically extracts specific values when converting documents using the IDocumentConversionService. However, you can add additional custom fields by passing a Metadata object to the IngestDocument method.
The conversion service automatically adds several fields, depending on the document type: "Title", "Author", "Subject", "Pages", and "Description". For more details, refer to the IDocumentConversionService page. In addition to these fields, the IngestDocument method adds: "FilePath", "CreationDate", "ModifiedDate", and "FileSize".
The following code demonstrates how to add custom metadata to an ingested document:
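A hedged sketch; the Metadata type is mentioned above, but the exact indexer and overload shown here are assumptions:

```csharp
// Hypothetical sketch: attach custom fields to the ingested document.
var metadata = new Metadata();
metadata["Department"] = "Legal";
metadata["Confidential"] = true;

// The custom fields are stored alongside the auto-generated ones
// ("Title", "Author", "FilePath", etc.) and are available to the AI
// during RAG retrieval.
await this.smartHub1.IngestDocumentAsync("contract.pdf", metadata);
```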
All metadata fields are made available to the AI as part of the RAG retrieval process when using DocumentSearchTools. If you use the IEmbeddingStorageService directly, you will find the metadata object as a property of the EmbeddedDocument instance.
Unless you register a specific provider, Wisej.AI defaults to using the built-in FileSystemEmbeddingStorageService. This implementation saves vectors in the file system at the location specified by FileSystemEmbeddingStorageService.StoragePath. The default path is set to "~\AI\Embeddings".
An easy alternative is the MemoryEmbeddingStorageService, which stores vectors in memory.
However, both implementations are intended for development purposes only and should not be used in production environments.
You can run Chroma either locally or on a virtual machine (VM) in a data center. The simplest way to run it is by using the Docker image; refer to the Chroma installation documentation for instructions.
With Chroma, you don't need to pre-create the index. Wisej.AI will automatically create the index if it doesn't already exist.
Currently, there isn't a well-established UI for Chroma. However, you can try some of the options freely available on GitHub. One option we have used is fengzhichao/chromadb-admin. This tool only lets you view the collections created by Wisej.AI; for any administrative functions, you'll need tools like cURL or Postman.
When working with Pinecone, you need to create an index to be used with Wisej.AI through the Pinecone dashboard. When setting up a new index, you only need to define the vector size and the metric. Always use "cosine" as the metric. The vector size is determined by the embedding model you plan to use.
Embedding models are not interchangeable. Therefore, once you create an index, it can only be used with the embedding model for which it was initially configured. In Wisej.AI, the default embedding model is text-embedding-3-small, which requires a vector with 1,536 dimensions.
To use Pinecone with Wisej.AI, you can register the service as follows:
The endpoint URL is the service index host address as shown by Pinecone.
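A hedged sketch of the registration; the service class name and constructor arguments are assumptions, so adapt them to the actual Wisej.AI API:

```csharp
// Hypothetical sketch: register Pinecone as the IEmbeddingStorageService.
Application.Services.AddService<IEmbeddingStorageService>(
    new PineconeEmbeddingStorageService(
        // The index host address shown in the Pinecone dashboard.
        "https://my-index-abc123.svc.us-east-1.pinecone.io",
        apiKey: "<your-pinecone-api-key>"));
```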
When utilizing Azure AI Search with Wisej.AI, you must first create the index you'll be working with. Since Azure AI Search starts with a blank schema, you need to define all the necessary fields. Refer to the table and JSON file below for a comprehensive list of required fields.
| Field | Type | Attributes |
| --- | --- | --- |
| id 🔑 | Edm.String | Retrievable, Filterable |
| master | Edm.Boolean | Retrievable, Filterable |
| documentName | Edm.String | |
Download the JSON definition below to create the index.
The field that requires particular attention is the vector, where embeddings are stored and searched. When defining this field, select Collection(Edm.Single) and ensure that both the Retrievable and Searchable options are enabled. Additionally, you must specify the Dimensions, which indicate the size of the array based on the embedding model used.
Embedding models are not interchangeable. Therefore, once you create an index, it can only be used with the embedding model for which it was initially configured. In Wisej.AI, the default embedding model is text-embedding-3-small, which requires a vector with 1,536 dimensions.
This is what a created index looks like:
To use Azure AI Search with Wisej.AI, you can register the service as follows:
The endpoint URL is the service endpoint concatenated with /indexes/<index name>. For example, our tests use `https://aisearchwisej.search.windows.net/indexes/wisejtest`.
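A hedged sketch, analogous to the other providers; the service class name and parameters are assumptions:

```csharp
// Hypothetical sketch: register Azure AI Search as the storage service.
// The endpoint is the service endpoint plus /indexes/<index name>.
Application.Services.AddService<IEmbeddingStorageService>(
    new AzureSearchEmbeddingStorageService(
        "https://aisearchwisej.search.windows.net/indexes/wisejtest",
        apiKey: "<your-azure-search-api-key>"));
```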
Qdrant offers flexibility in its deployment options by allowing you to run it locally or utilize Qdrant Cloud. They provide a user-friendly Docker image for local installations and offer a cloud service option. The cloud service includes a free tier hosted on Amazon AWS or Google Cloud, making it accessible and convenient for a variety of use cases.
With Qdrant, you don't need to pre-create the collection. Wisej.AI will automatically create it, if it doesn't already exist.
Qdrant Cloud offers an intuitive control panel that allows users to inspect their collections and execute queries directly within the interface.
To utilize a different vector database with Wisej.AI, you have two options: you can use the database directly, or you can implement the IEmbeddingStorageService interface and register your vector database connection as a Wisej.NET service. When you register your database this way, it will be seamlessly integrated and utilized by the DocumentSearchTools and the SmartHub.IngestDocument() implementations.
You can use any of the following implementations as a reference or starting point for integrating additional providers and developing your own custom IEmbeddingStorageService implementation. These examples demonstrate the recommended structure and key considerations when extending Wisej.AI with custom storage service integrations.
The generation of embeddings for chunks of text is handled by the IEmbeddingGenerationService. However, you have the option to generate embeddings directly using any other system or by replacing the service.
It's crucial to understand that embeddings generated with one model are not compatible with those generated by another model. Consequently, if you store documents and embeddings using a specific model and later change the IEmbeddingGenerationService or the model itself, all previously stored embeddings will become unusable for queries with a different embedding model. This necessitates careful consideration when altering the embedding generation approach to ensure compatibility and continuity.
This is why Wisej.AI registers a single shared IEmbeddingGenerationService with a default implementation that uses the OpenAIEndpoint and the "text-embedding-3-small" model. We recommend installing a single service implementation at startup and consistently using the same one.
For instance, if you want to change the model used by the OpenAIEndpoint or utilize your own embedding server, refer to the following example:
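A hedged sketch of swapping the model or endpoint at startup; the service class and property names are assumptions:

```csharp
// Hypothetical sketch: register a shared embedding generation service
// that uses a different model on the OpenAIEndpoint.
Application.Services.AddService<IEmbeddingGenerationService>(
    new EndpointEmbeddingGenerationService(
        new OpenAIEndpoint
        {
            // Must stay consistent with all previously stored embeddings:
            // vectors produced by different models are not interchangeable.
            EmbeddingModel = "text-embedding-3-large"
        }));
```

Register the service once at application startup and keep using the same model; otherwise previously stored embeddings become unusable for queries.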
In addition to the IEmbeddingStorageService, Wisej.AI provides several built-in implementations of the IEmbeddingGenerationService. You also have the flexibility to create your own custom implementations to suit your specific requirements. The built-in services include:
This implementation utilizes the embedding endpoint of any SmartEndpoint that supports embedding functionality. By default, it is configured to use the OpenAIEndpoint. Please note, however, that not all AI providers offer embedding endpoints, so compatibility may vary depending on the provider you choose.
This implementation leverages a dedicated embedding server to generate embeddings. You can easily deploy this server locally by using the provided Docker container, allowing for flexible and scalable embedding generation within your own infrastructure.
Use the code below as a starting point to accelerate the development of your own custom service implementation. This example demonstrates the essential structure and key elements needed to create a new service that integrates seamlessly with the Wisej.AI framework.
Both implementations also demonstrate how to handle parallel requests. This approach allows your service to process multiple embedding requests concurrently.
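A minimal skeleton for a custom service; the interface member shown here is an assumption, so mirror the actual IEmbeddingGenerationService definition:

```csharp
// Hypothetical skeleton of a custom embedding generation service.
public class MyEmbeddingGenerationService : IEmbeddingGenerationService
{
    // Generates one vector per input chunk, processing requests in parallel.
    public async Task<float[][]> GenerateEmbeddingsAsync(string[] chunks)
    {
        var tasks = chunks.Select(chunk => CallMyEmbeddingServerAsync(chunk));
        return await Task.WhenAll(tasks);
    }

    private Task<float[]> CallMyEmbeddingServerAsync(string chunk)
    {
        // Call your own embedding server or library here.
        throw new NotImplementedException();
    }
}
```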
Public Class SmartObjectAdapter
Inherits SmartAdapter
This class provides methods to convert text, images, streams, and clipboard content into .NET objects. It supports OCR for image processing and can handle multiple iterations to resolve missing values.
Initializes a new instance of the SmartObjectAdapter class.
Initializes a new instance of the SmartObjectAdapter class with the specified container.
container
An that represents the container of the component.
Int32: Gets or sets the maximum number of times the adapter will try to use the available tools to complete the missing values. (Default: 2)
If the adapter was not provided any tool, this property is ignored since it will not try to resolve missing values.
Boolean: Gets or sets a value indicating whether OCR should be used for image processing. (Default: False)
Releases the unmanaged resources used by the SmartObjectAdapter and optionally releases the managed resources.
disposing
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Converts the clipboard content to an object of type T.
T
The type of the object to return.
Returns: Task<T>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the clipboard content to an object of the specified type.
objectType
The type of the object to return.
Returns: Task<Object>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the specified image to an object of type T.
T
The type of the object to return.
image
The image to convert.
Returns: Task<T>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the specified image to an object of the specified type.
image
The image to convert.
objectType
The type of the object to return.
Returns: Task<Object>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the specified stream to an object of type T.
T
The type of the object to return.
stream
The stream to convert.
streamType
The type of the stream. Default is null.
Returns: Task<T>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the specified stream to an object of the specified type.
stream
The stream to convert.
objectType
The type of the object to return.
streamType
The type of the stream. Default is null.
Returns: Task<Object>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the specified text to an object of type T.
T
The type of the object to return.
text
The text to convert.
Returns: Task<T>. A task that represents the asynchronous operation. The task result contains the converted object.
Converts the specified text to an object of the specified type.
text
The text to convert.
objectType
The type of the object to return.
Returns: Task<Object>. A task that represents the asynchronous operation. The task result contains the converted object.
Raises the ObjectParsed event.
args
The instance containing the event data.
Executes the core logic of the adapter asynchronously.
control
The control to use for the operation.
Returns: Task<Message>. A task that represents the asynchronous operation. The task result contains the response message.
ObjectParsedEventHandler Occurs when an object is parsed.
Represents a provider that supplies tools.
public class SmartObjectAdapter : SmartAdapter
Represents a smart parallel prompt that can execute multiple tasks concurrently.
public class SmartParallelPrompt : SmartPrompt
Public Class SmartParallelPrompt
Inherits SmartPrompt
This class extends the SmartPrompt class and provides functionality to run multiple prompts in parallel.
Initializes a new instance of the SmartParallelPrompt class with optional text.
text
The text associated with the prompt. Default is null.
Executes the prompt tasks asynchronously in parallel.
hub
The smart hub used for executing the tasks.
inputs
An array of input prompts to process in parallel.
Returns: Task<Message[]>. A task that represents the asynchronous operation. The task result contains an array of Message objects.
This method runs each input through the AskAsync method and tracks the progress using the CurrentIndex property.
Throws:
ArgumentNullException
Thrown when hub is null.
ArgumentNullException
Thrown when inputs is null.
Executes the prompt tasks asynchronously in parallel.
hub
The smart hub used for executing the tasks.
inputs
An array of input prompts to process in parallel.
images
An array of input images to process in parallel.
Returns: Task<Message[]>. A task that represents the asynchronous operation. The task result contains an array of Message objects.
This method runs each input through the AskAsync method and tracks the progress using the CurrentIndex property.
Throws:
ArgumentNullException
Thrown when hub is null.
ArgumentNullException
Thrown when inputs is null.
ArgumentNullException
Thrown when images is null.
Executes the prompt tasks asynchronously in parallel.
endpoint
The endpoint used for executing the tasks.
inputs
An array of input prompts to process in parallel.
Returns: Task<Message[]>. A task that represents the asynchronous operation. The task result contains an array of Message objects.
This method runs each input through the AskAsync method and tracks the progress using the CurrentIndex property.
Throws:
ArgumentNullException
Thrown when inputs is null.
Executes the prompt tasks asynchronously in parallel.
endpoint
The endpoint used for executing the tasks.
inputs
An array of input prompts to process in parallel.
images
An array of input images to process in parallel.
Returns: Task<Message[]>. A task that represents the asynchronous operation. The task result contains an array of Message objects.
This method runs each input through the AskAsync method and tracks the progress using the CurrentIndex property.
Throws:
ArgumentNullException
Thrown when inputs is null.
Raises the Progress event.
args
The event data.
ProgressEventHandler Occurs when progress is made in the execution of the parallel tasks.
Represents a provider that supplies tools.
text
The initial text for the prompt. Default is null.
hub
The smart hub to use.
question
The question to ask.
image
An optional image to include in the question. Default is null.
endpoint
The smart endpoint to use.
question
The question to ask.
image
An optional image to include. Default is null.
tool
The tool to remove.
target
The target object containing tools.
key
The key to resolve.
filePath
The file path to save the prompts to.
tool
The tool to add.
target
The target object containing tools.
Represents a smart parallel prompt that can execute multiple tasks concurrently.
Represents a provider that supplies tools.
Public Class SmartPrompt
Inherits SmartObject
Implements IToolProvider, ICloneable
How to Extend and Integrate Wisej.AI
Wisej.AI offers unparalleled flexibility, ensuring that you are not limited by its predefined features. The system is designed to be open and highly extensible. In fact, the majority of the components have virtual members, allowing you to override nearly any functionality. You have the ability to customize all built-in prompts and connect to a wide range of events at various levels, providing a seamless integration experience.
Considering the architecture of Wisej.AI, you have several options for overriding its built-in functionality and integrating any other AI library of your choice.
You can override any of the built-in SmartEndpoints to reuse the existing functionality as needed. Alternatively, you can create a brand-new endpoint by extending either the base SmartEndpoint or the SmartHttpEndpoint. For instance, if you want to integrate a third-party client for OpenAI (or any other provider) into the Wisej.AI system, you can do so by extending SmartEndpoint and implementing at least the abstract methods. This approach allows you to customize and tailor the integration according to your specific requirements.
Many AI providers have standardized their APIs based on the OpenAI API. In such cases, you can use the OpenAIEndpoint within Wisej.AI and modify parameters like the URL, model, and authentication as needed. Alternatively, you can extend the OpenAIEndpoint to integrate different properties and adjust any variances specific to your provider. This flexibility allows for seamless adaptation to various AI service requirements.
To enhance any control with AI features in Wisej.AI, you can create custom adapters. Start by deriving your adapter from SmartAdapter and place your implementation in the RunAsyncCore() method. Within this method, you are responsible for interacting with the AI service and parsing its response. Notably, you are not required to use any of the SmartEndpoints within this process.
For example, if you want to create an adapter that populates a ComboBox with individual text items generated from an AI prompt, for simplicity you can use the Tag property of the enhanced control to store the prompt. This approach provides a straightforward way to integrate AI-driven functionality into your applications.
If you wish to enhance the functionality of a control by adding a new property, you can make your adapter implement the IExtenderProvider interface. This allows the adapter to introduce new properties to the controls it augments. By implementing this interface, you can dynamically extend the capabilities of the associated controls, offering a more sophisticated level of customization.
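Putting these pieces together, a custom adapter might be sketched like this. The RunAsyncCore signature follows the Task<Message> pattern documented for the built-in adapters, but the Hub property, message.Text, and the AskAsync call details are assumptions:

```csharp
// Hypothetical sketch: an adapter that fills a ComboBox from an AI prompt.
public class MyComboBoxAdapter : SmartAdapter
{
    protected override async Task<Message> RunAsyncCore(Control control)
    {
        var comboBox = (ComboBox)control;

        // Read the prompt stored on the control (e.g. in the Tag property).
        var topic = comboBox.Tag as string;

        // Ask the AI for one item per line (hypothetical prompt and call).
        var message = await new SmartPrompt(
            "List items for the following topic, one per line: " + topic)
            .AskAsync(this.Hub, topic);

        // Parse the response into individual ComboBox items.
        comboBox.Items.Clear();
        comboBox.Items.AddRange(message.Text.Split('\n'));
        return message;
    }
}
```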
After creating the MyComboBoxAdapter as outlined above, follow these steps to integrate it into your project:
Add a SmartHub and SmartEndpoint to your project.
Select your MyComboBoxAdapter.
Drag and drop your adapter onto the open form designer.
Associate the adapter with an existing ComboBox in your interface.
Once the adapter is linked to the ComboBox, it will automatically add the ItemsPrompt property to the ComboBox. Set this property to something like "European countries" or "US states" and run the application to see it in action. This will allow the ComboBox to populate dynamically based on the AI prompt specified in the ItemsPrompt property, or the Tag property if you didn't extend the IExtenderProvider interface.
One of the most powerful features in Wisej.AI is the SmartTool system. You have the flexibility to utilize or extend any built-in tool, or you can create a new tool from scratch. The system is designed to make this process extremely straightforward and user-friendly, allowing developers to easily customize and extend their application's capabilities.
You can provide tools to the AI in multiple ways and at different levels within the Wisej.AI framework. For instance, if you annotate a method on the control being designed with the [SmartTool.Tool] attribute and have a SmartHub component in place, the SmartHub will automatically include this method in the tools available to the AI. This inclusion is independent of the adapter, prompt, or endpoint being used. In essence, once a tool is added to a SmartHub, it is consistently made available to the AI model, ensuring seamless integration and accessibility.
Ways to provide tools to the AI:
To turn a method into a tool within the Wisej.AI system, annotate it with the [SmartTool.Tool] attribute. This method will be available as a tool if it is declared within a container that's associated with a SmartHub. Alternatively, the method can become a tool if it is part of an object that is passed to the UseTools() method of a SmartHub, SmartPrompt, SmartAdapter, or SmartSession. This flexible integration ensures that your tools can be widely accessible across different components of your application.
You can directly provide a specific method to the SmartHub, SmartPrompt, SmartAdapter, or SmartSession components by using the UseTool() method. For example, to pass a method such as get_customer_id to a SmartHub, you would use the following code: smartHub1.UseTool(this.get_customer_id);. This approach allows you to easily leverage specific methods as tools without needing to encapsulate them within a class or container.
For instance, if you want to enable the AI to send an email while performing other tasks, you can create a tools class with two methods. One method can be designed to send the email, while the other can be developed to find the recipient's email address, adding an intelligent layer to the process. By integrating this tools class into the Wisej.AI framework, you can provide the AI with advanced capabilities for handling emails efficiently as part of its operations.
You can enhance the usability and clarity of the tools within Wisej.AI by using the Description attribute to annotate various components. You can annotate the tools container class, each method, and each parameter with this attribute. These additional annotations provide the AI with detailed information, allowing it to utilize the tools more effectively and accurately, ensuring that each tool is used correctly within your application.
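Combining the email example with the Description attribute, such a tools container might be sketched as follows; the method names and descriptions are illustrative:

```csharp
// Hypothetical tools container exposing two email-related tools to the AI.
[Description("Tools to find email addresses and send emails.")]
public class MyEmailTools
{
    [SmartTool.Tool]
    [Description("Returns the email address of the specified person.")]
    public string find_email_address(
        [Description("The full name of the person.")] string name)
    {
        // Look up the address in your contacts store.
        return LookupAddress(name);
    }

    [SmartTool.Tool]
    [Description("Sends an email to the specified address.")]
    public void send_email(
        [Description("The recipient's email address.")] string address,
        [Description("The subject of the email.")] string subject,
        [Description("The body of the email.")] string body)
    {
        // Send the email using your mail infrastructure.
    }

    private string LookupAddress(string name)
    {
        // Hypothetical helper: resolve the address from your data source.
        return null;
    }
}
```

The container can then be registered, for example, with smartHub1.UseTools(new MyEmailTools());.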
The built-in DatabaseTools container, for example, annotates its class, methods, and parameters in exactly this way.
Using an INI file offers several advantages when working with prompts in Wisej.AI. It allows you to craft clearer and more complex prompts and gives you the flexibility to fine-tune these prompts externally, without needing to modify your application's code. This external configuration capability enhances maintainability and adaptability, enabling you to adjust prompts as needed without redeploying your application.
In Wisej.AI, a tool can modify its own prompt by programmatically replacing placeholders like {{server-type}} and {{database-schema}} with specific values relevant to the tool itself. This can be achieved by implementing logic within the tool's initialization or execution process to dynamically insert the appropriate values into the prompt.
For instance, you can define a method within the tool that retrieves the required information—for example, the server type or database schema—and injects these values into the prompt before it is processed. This approach ensures that the prompt is precisely tailored based on the tool's configuration or current state, maintaining the flexibility and relevance of the information provided to the AI.
To enable a tool to modify its own prompt with specific values for placeholders, your tools container class should either implement the corresponding interface or extend the provided base class. By leveraging these options, you can incorporate the necessary logic to dynamically replace placeholders in prompts with context-specific information. This allows for greater customization and flexibility when configuring how tools interact with the AI in Wisej.AI.
If you modify the MyEmailTools example to include a placeholder in the tool's prompt, it might look something like this:
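A hedged sketch of such a modification; the placeholder name and attribute text are illustrative:

```csharp
// Hypothetical: the tool's description contains a placeholder that the
// container replaces with a concrete value before the prompt is sent.
[SmartTool.Tool]
[Description("Sends an email through the {{smtp-server}} server.")]
public void send_email(string address, string subject, string body)
{
    // ...
}
```

At runtime, the container's placeholder-resolution logic would substitute {{smtp-server}} with the actual server name from its configuration.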
By using placeholders in prompts and implementing the logic to replace them with actual data dynamically, you can create flexible and context-aware tools in Wisej.AI.
Here are some functionalities you can use as inspiration for developing your tools:
Search the database
Search the file system
Search a document storage
Search the web
When augmented with sophisticated tools, there is virtually no limit to what an AI system can achieve.
Wisej.AI's internal implementation leverages a variety of services for its adapters, tools, embeddings, and other AI functionalities. As a developer, you have the flexibility to override these built-in implementations or replace them with your own, including the use of alternative AI frameworks.
If your code relies on Semantic Kernel (SK), you can seamlessly integrate it into the Wisej.AI system at various points and execution levels. This flexibility allows you to enhance functionality and tailor the system to your specific needs.
As mentioned earlier, Wisej.AI functions at a higher level compared to foundational libraries like Semantic Kernel. While both Wisej.AI and SK offer similar basic functionalities, Wisej.AI provides a more integrated and advanced set of features designed for complex application development. To illustrate this, here is a comparison using a simple example:
If you wish to integrate Semantic Kernel with Wisej.AI, you certainly can:
You can implement a new SKEndpoint derived from SmartEndpoint and use the SK's clients.
You can override RunAsyncCore() in any of the existing adapters and use SK's plugin system, clients, embedding system, and so on, or design new adapters this way from the start.
You can re-implement any of the services that Wisej.AI uses in various places to use SK's vector storage support, document conversion, tokenization, chunking, etc.
While there may not be other directly comparable .NET libraries in the space that Wisej.AI occupies, the concept of integrating other libraries, like Semantic Kernel, applies broadly. You can seamlessly incorporate any other compatible library into your Wisej.AI applications. This approach allows you to extend the capabilities of your applications by combining the specific features and strengths of other libraries with the robust, high-level functionalities provided by Wisej.AI.
Detailed description of the Wisej.AI architecture and all its components.
Wisej.AI is specifically designed with developers in mind to seamlessly integrate AI systems into business applications.
The system fundamentally operates by utilizing the application's controls and code to construct sophisticated prompts. These prompts are then sent to the AI provider, and the response is parsed in accordance with the requirements of the controls and code.
Wisej.AI is composed of these key components:
SmartEndpoint
SmartPrompt
SmartAdapter
SmartTool
SmartSession
SmartHub
In developing Wisej.AI, several fundamental principles were adopted: maintaining simplicity, avoiding over-engineering, delivering real value to developers, and ensuring the system is self-contained with minimal dependencies.
Wisej.AI supports .NET Core, .NET Framework, C#, and VB.NET.
As the name implies, the SmartEndpoint components are responsible for managing all communications between an endpoint and the rest of the system. An endpoint refers to the URL of the AI engine where the Large Language Model (LLM) is running. This can be hosted on a local on-premises server, a private server in a data center, at any public AI provider, or even in your browser as a JavaScript worker.
SmartEndpoint components are designed to support any AI provider and handle various payload types, including binary data and images. They define numerous properties and methods specifically tailored to handle inference and embedding requests efficiently.
The SmartPrompt component manages both system and user prompts. It resolves prompt text from INI files, replaces placeholders with parameter values, and offers a straightforward method for interacting with the AI.
For instance, using Wisej.AI in its simplest form requires just an endpoint. Here's an example:
An even simpler approach, requiring fewer lines of code, is also available. However, regardless of the method chosen, an endpoint is always necessary.
Prompts can either be written directly in code or referenced using a key string enclosed in square brackets. SmartPrompt will locate the key and read the corresponding prompt from any .ini file stored in the /AI folder. See the documentation on INI files for more information.
Prompts can also include placeholders, or parameters, which are replaced at runtime. These placeholders must be enclosed within {{ }}. However, when setting a parameter value, you should specify the parameter name without the curly brackets.
Parameters are always replaced before submitting a question but only within the initial System Prompt and the current user prompt. The earlier example using parameters might look like this:
SmartAdapters act as mediators between the application and the AI, adapting data and processes to fit specific needs. Each adapter has a distinct role and set of features. For instance, the SmartDataEntryAdapter extracts structured data from unstructured sources and updates Wisej.NET controls with the relevant information. Similarly, the SmartObjectAdapter performs the same function but sets the properties of .NET objects instead of updating controls. Another example is the SmartChartAdapter, which creates charts from unstructured data.
A SmartAdapter operates by leveraging an instance of the SmartHub, which in turn requires an instance of the SmartEndpoint. When using a SmartAdapter, you're essentially working within the structure outlined in the diagram at the top of this page.
Once the adapter is properly configured, the following example demonstrates the powerful capabilities of the SmartObjectAdapter:
With just one line of code, we accomplish a task that would be nearly impossible to achieve using standard programming methods.
SmartTools are integral to the Wisej.AI architecture. By utilizing SmartTools, you can "compose" powerful AI systems that deliver significant value to your applications. A SmartTool, at its core, is a function: code that executes, accepts parameters, and returns a result.
In Wisej.AI, having an exceptionally powerful tools system is a core feature. SmartTools in Wisej.AI go beyond simple method callbacks or the plugin systems found in Semantic Kernel. They are designed to offer rich, extensible functionalities, providing developers with advanced capabilities to enhance and augment their applications seamlessly.
These functions (tools) operate within the context in which they were created, allowing them to:
Keep state information
Update controls on the screen
Work within the user session
Access any resource available to the application
Wisej.AI offers a unique capability to interact seamlessly with the user interface (UI) without limitations. Microsoft Semantic Kernel plugins, by contrast, are not integrated with any specific UI framework, which restricts their ability to perform such interactions. In any event, achieving this level of UI interaction would be extremely challenging with templating frameworks such as Blazor, Angular, or React.
Since code often speaks louder than words, here is a very simple example to illustrate how SmartTools function:
Initially, the AI responds that it cannot provide the current time. This is accurate, as AI models cannot gather real-time information beyond what is encoded in their neural network weights.
After incorporating prompt.UseTool(GetCurrentTime), the AI becomes capable of returning the current time. Essentially, we provided the AI with a tool, giving it a new capability. By programming such tools, whether small functions or more complex ones, we extend the AI's functionality.
In Wisej.AI, the interaction between a "new feature" (the tools) and the AI is handled effectively by the system. Essentially, every interaction is managed by an agent, ensuring seamless integration and functionality.
Wisej.AI employs its own tool definition prompting, rather than using the default defined by the AI model in use. This approach provides greater control over tool usage, reduces token consumption, and enables the use of tools even with models that were not originally trained with such capabilities.
Note that the AddTool() and AddTools() methods support call chaining:
Keep in mind that tools can also be implemented as asynchronous methods. This allows the code within these tools to interact with the user through the Wisej.NET user interface. Additionally, it enables the invocation of other services. Through the use of asynchronous methods, your application can handle long-running operations without blocking the user interface.
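As a sketch of what an asynchronous tool might look like (the class, tool name, parameter, and service URL below are hypothetical, not part of the Wisej.AI API):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Wisej.AI;
using Wisej.Web;

public partial class CasePage : Page
{
    // Hypothetical async tool: name, parameter, and URL are examples only.
    [SmartTool.Tool]
    private async Task<string> lookup_order_status(string orderNumber)
    {
        // Awaiting the remote call lets long-running work complete
        // without blocking the Wisej.NET user interface.
        using (var http = new HttpClient())
        {
            return await http.GetStringAsync(
                "https://orders.example.com/status/" + orderNumber);
        }
    }
}
```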
In addition to async tools, Wisej.AI also supports anonymous tools:
Being a lambda, the tool runs in the context where it is declared, allowing the code in the lambda to reference local variables declared in the containing code.
As explained on the General Concepts page, LLMs do not possess any memory, do not remember past interactions, and are inherently stateless. Each time you ask a question, it is akin to talking to Dory.
Dory is a fictional blue tang fish and a major character of Pixar's animated film series Finding Nemo. She suffers from short-term memory loss.
The Wisej.AI agent system operates within the SmartSession. This is where Wisej.AI constructs the prompts, invokes the tools, and repeats the process until the LLM has completed the requested task. Additionally, the SmartSession in Wisej.AI is capable of managing message length when the context window is exceeded by either trimming or summarizing previous messages.
Every interaction with the LLM involves an instance of the SmartSession component. Even for a single prompt, Wisej.AI always creates and disposes of a session internally. This ensures that every request can utilize tools and operate within the boundaries set by Wisej.AI.
You can use the SmartSession, or a custom-derived class, to maintain the state of the interaction with the LLM and preserve the context effectively.
The example above demonstrates a session with two interactions with the LLM. In the first interaction, an image and a prompt are submitted. In the second interaction, you can refer to the image without resubmitting it, as it is part of the session and will be automatically included.
Interactions that include images are only compatible with models that have vision capabilities. Adapters that work with images provide the UseOCR property, which allows using an OCR service to extract text from the image (without relying on the model's vision) before sending it to the AI.
The final component is the SmartHub. As the name suggests, it functions as a central hub, connecting an endpoint, multiple adapters, custom tools, and your Wisej.NET application.
With the SmartHub, you can manage messages, errors, and parameters related to all the adapters connected to both the SmartHub and the endpoint. A SmartHub always requires one endpoint, but it can be associated with multiple adapters.
In addition to centralizing events and connecting adapters to an endpoint, the SmartHub offers a variety of higher-level functions that leverage all the built-in services in Wisej.AI. Essentially, it provides developers with a high-level interface to interact with the AI.
When the SmartHub is placed onto a designer surface of a component, it automatically binds to the top-level component and registers all methods marked with the [SmartTool.Tool] attribute as tools.
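For example, assuming a SmartHub was dropped on the page below in the designer, a method like this (the page and method are hypothetical) would be registered as a tool automatically:

```csharp
using Wisej.AI;
using Wisej.Web;

public partial class Dashboard : Page
{
    // Registered automatically as a tool because it is marked with
    // [SmartTool.Tool] and is defined on the top-level component
    // hosting the SmartHub; no UseTool() call is needed.
    [SmartTool.Tool]
    private string get_todays_date()
        => System.DateTime.Today.ToLongDateString();
}
```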
In Wisej.AI, the majority of objects are designed to trigger various events, enabling applications to interact with AI processes at multiple stages. While individual SmartAdapters may offer their own unique events, three key components (SmartSession, SmartPrompt, and SmartHub) uniformly raise a standardized set of events. This consistent event pattern facilitates seamless integration and enhances the interactivity between the application and Wisej.AI functionalities.
Events are typically triggered in the following sequence:
SmartSession
SmartPrompt
SmartHub
When event arguments are derived from HandledEventArgs, listeners have the option to set the Handled property to true. This signals that the event has already been addressed by the listener. For instance, when handling the BeforeInvokeTool event, if you set e.Handled to true, the system will not proceed to invoke the tool, as it will presume that your code has already taken care of the necessary actions.
This feature allows you to intercept a tool call and carry out additional operations either before, after, or instead of executing the tool. This capability provides flexibility in customizing the behavior of your application, enabling you to implement complex logic or alternative workflows as needed.
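A sketch of this pattern, assuming the handler is attached to a SmartHub and noting that the helper name and flag below are hypothetical:

```csharp
this.smartHub1.BeforeInvokeTool += (sender, e) =>
{
    AuditToolCall(e); // hypothetical logging helper, runs before the tool

    if (this.maintenanceMode) // hypothetical application flag
    {
        // Signal that the call has already been taken care of;
        // the system will not invoke the tool itself.
        e.Handled = true;
    }
};
```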
internal static class Program
{
static Program()
{
Application.Services
.AddOrReplaceService<IEmbeddingStorageService>(
new PineconeEmbeddingStorageService("<endpoint url>"));
}
}
Module Program
Shared Sub New()
Application.Services.AddOrReplaceService( _
Of IEmbeddingStorageService)( _
New PineconeEmbeddingStorageService("<endpoint url>"))
End Sub
End Module
internal static class Program
{
static Program()
{
Application.Services
.AddOrReplaceService<IEmbeddingStorageService>(
new AzureAISearchEmbeddingStorageService("<endpoint url>"));
}
}
Module Program
Shared Sub New()
Application.Services.AddOrReplaceService( _
Of IEmbeddingStorageService)( _
New AzureAISearchEmbeddingStorageService("<endpoint url>"))
End Sub
End Module
static class Program
{
static Program()
{
Application.Services
.AddOrReplaceService<IEmbeddingGenerationService>(
new DefaultEmbeddingGenerationService(
new OpenAIEndpoint{ EmbeddingModel = "text-embedding-3-large" }));
// Or
Application.Services
.AddOrReplaceService<IEmbeddingGenerationService>(
new HuggingFaceEmbeddingGenerationService("http://ollama.myserver.com:8090"));
// Or
Application.Services
.AddOrReplaceService<IEmbeddingGenerationService>(
new DefaultEmbeddingGenerationService(
new TogetherAIEndpoint()));
}
}
Module Program
Sub New()
Application.Services.AddOrReplaceService(Of IEmbeddingGenerationService)(
New DefaultEmbeddingGenerationService(
New OpenAIEndpoint With {.EmbeddingModel = "text-embedding-3-large"}))
' Or
Application.Services.AddOrReplaceService(Of IEmbeddingGenerationService)(
New HuggingFaceEmbeddingGenerationService("http://ollama.myserver.com:8090"))
' Or
Application.Services.AddOrReplaceService(Of IEmbeddingGenerationService)(
New DefaultEmbeddingGenerationService(
New TogetherAIEndpoint()))
End Sub
End Module
await this.smartHub1.IngestDocumentAsync(
"C:\\Files\\2024\\AAPL-10K.pdf", "10Ks\\2024\\AAPL-10K.pdf");
await this.smartHub1.IngestDocumentAsync(
"C:\\Files\\2023\\AAPL-10K.pdf", "10Ks\\2023\\AAPL-10K.pdf");
await this.smartHub1.IngestDocumentAsync(
"C:\\Files\\2024\\AAPL-10K.pdf", "10Ks\\2024\\AAPL-10K.pdf", "Apple Docs");
await this.smartHub1.IngestDocumentAsync(
"C:\\Files\\Logs\\ServiceLogs.txt", "ServiceLogs.txt", "Logs");
var metadata = new Metadata();
metadata["ServiceName"] = "W3WP-1";
await this.smartHub1.IngestDocumentAsync(
"C:\\Files\\Logs\\ServiceLogs.txt", "ServiceLogs.txt", "Logs", true, metadata);
var smartPrompt = new SmartParallelPrompt();
var messages = await smartPrompt.RunAsync(hub, new string[] { "input1", "input2" });
SmartPrompt prompt = new SmartPrompt("You are a weather expert");
prompt.Start += (s, e) => Console.WriteLine("Processing started.");
prompt.Done += (s, e) => Console.WriteLine("Processing done.");
await prompt.AskAsync(new OpenAIEndpoint(), "What is the weather today?");
Retrievable, Filterable
⚡
vector
Collection(Edm.Single)
Retrievable, Searchable
collectionName
Edm.String
Retrievable, Filterable
metadata
Edm.String
Retrievable, Filterable
chunk
Edm.String
Retrievable, Filterable
Read incoming emails
Send emails
Read signals in a SCADA system
[Extends(typeof(ComboBox), allowMultiple: true)]
public class MyComboBoxAdapter : SmartAdapter
{
SmartPrompt _prompt;
public MyComboBoxAdapter()
{
_prompt = new SmartPrompt(
"Your job is to create a comma separated list " +
"of the items described by the user.");
}
protected override async Task<SmartSession.Message> RunAsyncCore(
Control control)
{
var comboBox = control as ComboBox;
if (comboBox == null)
return null;
var question = control.Tag as string;
if (String.IsNullOrEmpty(question))
return null;
var response = await _prompt.AskAsync(this.Hub, question);
comboBox.Items.Clear();
comboBox.Items.AddRange(
response.Text.Split(',')
.Select(t => t.Trim())
.ToArray());
return response;
}
}
Provide parameters that alter their own prompt
Interact with the calling context and callers
Interact with the user through the Wisej.NET UI
Return synchronous and asynchronous values
BeforeInvokeTool
Occurs before a tool is invoked.
AfterInvokeTool
Occurs after a tool is invoked.
PrepareMessages
Occurs after the messages have been prepared and before they are sent to the AI.
Start
Occurs when the session starts processing a question.
Done
Occurs when the session is done processing a question.
Error
Occurs when an error is encountered.
BeforeSendRequest
Occurs before a request is sent.
AfterResponseReceived
Occurs after a response is received.
ConvertParameter
Occurs when a parameter needs to be converted.

<Extends(GetType(ComboBox), AllowMultiple:=True)>
Public Class MyComboBoxAdapter
Inherits SmartAdapter
Private _prompt As SmartPrompt
Public Sub New()
_prompt = New SmartPrompt( _
"Your job is to create a comma " & _
"separated list of the items described by the user.")
End Sub
Protected Overrides Async Function RunAsyncCore(control As Control)
As Task(Of SmartSession.Message)
Dim comboBox = TryCast(control, ComboBox)
If comboBox Is Nothing Then
Return Nothing
End If
Dim question = TryCast(control.Tag, String)
If String.IsNullOrEmpty(question) Then
Return Nothing
End If
Dim response = Await _prompt.AskAsync(Me.Hub, question)
comboBox.Items.Clear()
comboBox.Items.AddRange(response.Text.Split(","c).
Select(Function(t) t.Trim()).
ToArray())
Return response
End Function
End Class
[ProvideProperty("ItemsPrompt", typeof(ComboBox))]
[Extends(typeof(ComboBox), allowMultiple: true)]
public class MyComboBoxAdapter : SmartAdapter, IExtenderProvider
{
// Same as above
[Category("AI Features")]
public string GetItemsPrompt(Control control)
{
return control.UserData.ItemsPrompt as string;
}
public void SetItemsPrompt(Control control, string text)
{
control.UserData.ItemsPrompt = text;
}
bool IExtenderProvider.CanExtend(object extendee)
{
var control = extendee as Control;
if (IsAssociatedWith(control))
return control is ComboBox;
return false;
}
}
<ProvideProperty("ItemsPrompt", GetType(ComboBox))>
<Extends(GetType(ComboBox), allowMultiple:=True)>
Public Class MyComboBoxAdapter
Inherits SmartAdapter
Implements IExtenderProvider
'' Same as above
<Category("AI Features")>
Public Function GetItemsPrompt(control As Control) As String
Return TryCast(control.UserData.ItemsPrompt, String)
End Function
Public Sub SetItemsPrompt(control As Control, text As String)
control.UserData.ItemsPrompt = text
End Sub
Private Function CanExtend(extendee As Object) As Boolean
Implements IExtenderProvider.CanExtend
Dim control As Control = TryCast(extendee, Control)
If IsAssociatedWith(control) Then
Return TypeOf control Is ComboBox
End If
Return False
End Function
End Class
[Description("[MyEmailTools]")]
public class MyEmailTools
{
[SmartTool.Tool]
private string find_email_address(string name)
{
var emailAddress = "[email protected]";
return emailAddress;
}
[SmartTool.Tool]
private bool send_email(string emailAddress, string subject, string body)
{
var email = new Emailer();
return email.SendEmail(emailAddress, subject, body);
}
}
...
this.smartDataEntryAdapter1.UseTools(new MyEmailTools());
# Prompt for MyEmailTools
[MyEmailTools]
Use this tool to send an email to the account manager assigned
to the new case entered into the system. The subject should be the
case number followed by a - and a short summary of the case. The body
should contain all the data of the case in a table.
<Description("[MyEmailTools]")>
Public Class MyEmailTools
<SmartTool.Tool>
Private Function find_email_address(name As String) As String
Dim emailAddress = "[email protected]"
Return emailAddress
End Function
<SmartTool.Tool>
Private Function send_email(
emailAddress As String, subject As String, body As String) As Boolean
Dim email = New Emailer()
Return email.SendEmail(emailAddress, subject, body)
End Function
End Class
...
Me.smartAdapter1.UseTools(New MyEmailTools())
# Prompt for MyEmailTools
[MyEmailTools]
Use this tool to send an email to the account manager assigned
to the new case entered into the system. The subject should be the
case number followed by a - and a short summary of the case. The body
should contain all the data of the case in a table.
#
# DatabaseTools
#
[DatabaseTools]
Provides tools to access the database.
Unless instructed otherwise, use it before the web search tools.
Instructions:
- Use only the tables defined in the Database Schema.
- The generated SQL statement MUST be valid for "{{server-type}}".
- Define column aliases within single quotes.
- Enclose column names in [].
```Database Schema
{{database-schema}}
```
[DatabaseTools.select]
Executes a SQL SELECT command on the database to retrieve the specified data.
[DatabaseTools.select.sql]
A well-formed SQL SELECT statement using the tables and
columns defined in the Database Schema.
# Prompt for MyEmailTools
[MyEmailTools]
Use this tool to send an email to the {{recipient_types}} assigned
to the new case entered into the system. The subject should be the
case number followed by a - and a short summary of the case. The body
should contain all the data of the case in a table.
public MyEmailTools(bool notifySupervisor)
{
if (!notifySupervisor)
this.Parameters.Add("recipient_types", "Account Manager");
else
this.Parameters.Add("recipient_types", "Account Manager and Supervisor");
}
...
this.smartDataEntryAdapter1.UseTools(new MyEmailTools(true));
# Prompt for MyEmailTools
[MyEmailTools]
Use this tool to send an email to the {{recipient_types}} assigned
to the new case entered into the system. The subject should be the
case number followed by a - and a short summary of the case. The body
should contain all the data of the case in a table.
Public Sub New(notifySupervisor As Boolean)
If Not notifySupervisor Then
Me.Parameters.Add("recipient_types", "Account Manager")
Else
Me.Parameters.Add("recipient_types", "Account Manager and Supervisor")
End If
End Sub
...
Me.smartDataEntryAdapter1.UseTools(New MyEmailTools(True))
using Wisej.AI;
using Wisej.AI.Endpoints;
// Create a smart prompt with a plugin (defined elsewhere)
var prompt = new SmartPrompt().UseTool(new LightsPlugin());
// Create the AzureAI endpoint
var endpoint = new AzureAIEndpoint { URL = url };
// Ask the question and get the answer from the AI
var answer = await prompt.AskAsync(endpoint, "Please turn on the lamp");
// Print the results
Console.WriteLine("Assistant > " + answer);
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
// Create a kernel with Azure OpenAI chat completion
var builder = Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);
// Build the kernel
Kernel kernel = builder.Build();
var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
// Add a plugin (the LightsPlugin class is defined elsewhere)
kernel.Plugins.AddFromType<LightsPlugin>("Lights");
// Enable planning
OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
// Create a history to store the conversation
var history = new ChatHistory();
history.AddUserMessage("Please turn on the lamp");
// Get the response from the AI
var result = await chatCompletionService.GetChatMessageContentAsync(
history,
executionSettings: openAIPromptExecutionSettings,
kernel: kernel);
// Print the results
Console.WriteLine("Assistant > " + result);
// Add the message from the agent to the chat history
history.AddAssistantMessage(result);
var prompt = new SmartPrompt();
var endpoint = new OpenAIEndpoint() { ApiKey = "..." };
var response = await prompt.AskAsync(
endpoint,
"Tell me a fun short story about a bee in less than 50 words.");
Console.WriteLine(response.Text);
Dim prompt As New SmartPrompt()
Dim endpoint As New OpenAIEndpoint() With {.ApiKey = "..."}
Dim response = Await prompt.AskAsync( _
endpoint, _
"Tell me a fun short story about a bee in less than 50 words.")
Console.WriteLine(response.Text)
var prompt = new SmartPrompt();
var endpoint = new OpenAIEndpoint() { ApiKey = "..." };
prompt.Parameters.Add("word-count", 50);
var response = await prompt.AskAsync(
endpoint,
"Tell me a fun short story about a bee in less than {{word-count}} words.");
Console.WriteLine(response.Text);
Dim prompt As New SmartPrompt()
Dim endpoint As New OpenAIEndpoint() With {.ApiKey = "..."}
prompt.Parameters.Add("word-count", 50)
Dim response = Await prompt.AskAsync( _
endpoint, _
"Tell me a fun short story about a bee in less than {{word-count}} words.")
Console.WriteLine(response.Text)
public class Person
{
[Description("First name")]
public string FirstName { get; set; }
[Description("Surname")]
public string Surname { get; set; }
[Description("Job title")]
public string Title { get; set; }
[Description("Extract list of personal interests from the text, summarize in 1-3 words maximum and change to title case")]
public string[] Interests { get; set; }
[Description("Person Address")]
public string Address { get; set; }
}
var adapter = new SmartObjectAdapter()
{
Hub = new SmartHub()
{
Endpoint = new OpenAIEndpoint()
{
ApiKey = "..."
}
}
};
// Extract the field values described above from resume.pdf.
var person = await adapter.FromStreamAsync<Person>(File.OpenRead("resume.pdf"));
// Or
var person = await adapter.FromImageAsync<Person>(businessCardImage);
Public Class Person
<Description("First name")> _
Public Property FirstName As String
<Description("Surname")> _
Public Property Surname As String
<Description("Job title")> _
Public Property Title As String
<Description("Extract list of personal interests from the text, summarize in 1-3 words maximum and change to title case")> _
Public Property Interests As String()
<Description("Person Address")> _
Public Property Address As String
End Class
Dim adapter = New SmartObjectAdapter() With { _
.Hub = New SmartHub() With { _
.Endpoint = New OpenAIEndpoint() With { _
.ApiKey = "..." _
} _
} _
}
' Extract the field values described above from resume.pdf.
Dim person = Await adapter.FromStreamAsync(Of Person)(File.OpenRead("resume.pdf"))
' Or
Dim person = Await adapter.FromImageAsync(Of Person)(businessCardImage)
var prompt = new SmartPrompt();
var endpoint = new OpenAIEndpoint();
var response = await prompt.AskAsync(endpoint, "What time is it?");
Console.WriteLine(response.Text);
// output:
// I'm sorry, but I don't have the capability to
// provide real-time information such as the current time.
prompt.UseTool(GetCurrentTime);
response = await prompt.AskAsync(endpoint, "What time is it?");
Console.WriteLine(response.Text);
// output:
// The current time is 4:51:38 PM.
...
private static string GetCurrentTime()
=> DateTime.Now.ToLongTimeString();
Dim prompt As New SmartPrompt()
Dim endpoint As New OpenAIEndpoint()
Dim response = Await prompt.AskAsync(endpoint, "What time is it?")
Console.WriteLine(response.Text)
' output:
' I'm sorry, but I don't have the capability to
' provide real-time information such as the current time.
prompt.UseTool(AddressOf GetCurrentTime)
response = Await prompt.AskAsync(endpoint, "What time is it?")
Console.WriteLine(response.Text)
' output:
' The current time is 4:51:38 PM.
...
Private Shared Function GetCurrentTime() As String
Return DateTime.Now.ToLongTimeString()
End Function
this.smartObjectAdapter
.UseTools(new WebSearchTools())
.UseTools(new MathTools())
.UseTool(TurnLightsOnOff);
prompt.UseTool(
[Description("Returns the current date/time")]
() => DateTime.Now);
using (var session = new SmartSession(new OpenAIEndpoint()))
{
var response = await session.AskAsync(
"Extract the emails in the image.",
screenshot);
Console.WriteLine(response.Text);
response = await session.AskAsync("Extract all the dates.");
Console.WriteLine(response.Text);
}
Using session As New SmartSession(New OpenAIEndpoint())
Dim response = Await session.AskAsync( _
"Extract the emails in the image.", _
screenshot)
Console.WriteLine(response.Text)
response = Await session.AskAsync("Extract all the dates.")
Console.WriteLine(response.Text)
End Using
var hub = new SmartHub();
hub.Endpoint = new OpenAIEndpoint();
// Splits the pdf into chunks using IDocumentConversionService and ITextSplitterService
// Generates the embeddings for each chunk using IEmbeddingGenerationService
// Stores the chunks and the embeddings in a vector DB using IEmbeddingStorageService
await hub.IngestDocumentAsync("JP-CV.pdf", "jp-cv-00012.pdf");
// Generates the embedding using IEmbeddingGenerationService
var query = await hub.EmbedAsync("Experience");
// Finds the 1 chunk that is most relevant to "Wisej.NET" and
// has a similarity of at least 0.2.
var topChunks = await hub.SimilarityQueryAsync(
"Wisej.NET",
new[] { "Wisej.NET is...", "Bananas are not apples..." },
1,
0.2f);
Dim hub As New SmartHub()
hub.Endpoint = New OpenAIEndpoint()
' Splits the PDF into chunks using IDocumentConversionService and ITextSplitterService
' Generates the embeddings for each chunk using IEmbeddingGenerationService
' Stores the chunks and the embeddings in a vector DB using IEmbeddingStorageService
Await hub.IngestDocumentAsync("JP-CV.pdf", "jp-cv-00012.pdf")
' Generates the embedding using IEmbeddingGenerationService
Dim query = Await hub.EmbedAsync("Experience")
' Finds the 1 chunk that is most relevant to "Wisej.NET" and
' has a similarity of at least 0.2.
Dim topChunks = Await hub.SimilarityQueryAsync(
"Wisej.NET",
New String() { "Wisej.NET is...", "Bananas are not apples..." },
1,
0.2F)
Wisej.AI.Adapters.SmartDataEntryAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Enhances all the controls in the associated container with the AI-powered capability to extract structured data from unstructured text.
public class SmartDataEntryAdapter : SmartAdapter, IExtenderProvider
Works with:
AzureAI/OpenAI gpt-4
AzureAI/OpenAI gpt-4o
AzureAI/OpenAI gpt-3.5
AzureAI/Anthropic Claude
Supports several types of sources for the input text: Clipboard (text or image), PDF stream, Text, Word, Excel. Uses the OCR extension to run the text extraction in the user's browser in JavaScript.
Initializes a new instance of the SmartDataEntryAdapter class.
Initializes a new instance of the SmartDataEntryAdapter class with the specified container.
: Gets or sets a value indicating whether read-only controls should be excluded. (Default: False)
: Gets or sets the maximum number of times the adapter will try to use the available tools to complete the missing values. (Default: 2)
If the adapter was not provided any tool, this property is ignored since it will not try to resolve missing values.
: Gets or sets a value indicating whether OCR should be used for image processing. (Default: False)
Releases the unmanaged resources used by the SmartDataEntryAdapter and optionally releases the managed resources.
Asynchronously processes data from the clipboard and extracts structured data.
Returns: . A task representing the asynchronous operation.
Throws:
Thrown when the SmartAdapter is busy.
Asynchronously processes the provided image and extracts structured data.
Returns: . A task representing the asynchronous operation.
Throws:
Thrown when the image is null.
Thrown when the SmartAdapter is busy.
Asynchronously processes the provided stream and extracts text and image data. If an image is detected without associated text, the method attempts to perform OCR to extract text.
Returns: . A task representing the asynchronous operation.
Throws:
Thrown when the stream is null.
Thrown when the SmartAdapter is busy.
Asynchronously processes the provided text and extracts structured data.
Returns: . A task representing the asynchronous operation.
Throws:
Thrown when the text is null.
Thrown when the SmartAdapter is busy.
Gets the name of the field to extract for the specified component.
Returns: . The name of the field to extract.
Throws:
Thrown when the component is null.
Returns: .
Gets the rectangle that defines the area of the field to extract for the specified component.
Returns: . The rectangle that defines the area of the field to extract.
Throws:
Thrown when the component is null.
Raises the event.
Raises the event.
Asynchronously runs the core logic of the adapter on the specified control.
Returns: . A task representing the asynchronous operation.
Sets the name of the field to extract for the specified component.
Throws:
Thrown when the component is null.
Sets the prompt to instruct the AI on how to extract the value for the field for the specified component.
Throws:
Thrown when the component is null.
Sets the rectangle that defines the area of the field to extract for the specified component.
Throws:
Thrown when the component is null.
Occurs when a value is parsed from the input data.
Occurs when a field is updated with a new value.
Wisej.AI.Adapters.SmartTextBoxAdapter
Namespace: Wisej.AI.Adapters
Assembly: Wisej.AI (3.5.0.0)
Enhances a TextBox control with several AI features, including suggestions, translation, and auto-correction.
public class SmartTextBoxAdapter : SmartAdapter, IExtenderProvider
This class extends the functionality of a standard TextBox by providing AI-driven features such as text suggestions, automatic translation between specified languages, and auto-correction of text input. It uses SmartPrompt sessions to interact with AI services.
Initializes a new instance of SmartTextBoxAdapter.
: Gets or sets the source of the icon displayed when a suggestion is accepted. (Default: null)
: Gets or sets the source of the icon displayed during processing. (Default: "resource.wx/Wisej.AI/Icons/processing.svg")
: Gets or sets the delay in milliseconds before suggestions are shown. (Default: 250)
: Gets or sets the CSS style applied to suggestions. (Default: "opacity:0.4")
Determines whether the auto-correction feature is enabled for the specified control.
Returns: . True if auto-correction is enabled; otherwise, false.
Throws:
Thrown when the control is null.
Determines whether the suggestions feature is enabled for the specified control.
Returns: . True if suggestions are enabled; otherwise, false.
Throws:
Thrown when the control is null.
Determines whether the translation feature is enabled for the specified control.
Returns: . True if translation is enabled; otherwise, false.
Throws:
Thrown when the control is null.
Gets the source language for the translation feature associated with the specified control.
Returns: . The source language as a string.
Throws:
Thrown when the control is null.
Gets the target language for the translation feature associated with the specified control.
Returns: . The target language as a string.
Throws:
Thrown when the control is null.
Gets the user-provided phrases associated with the specified control.
Returns: . An array of user phrases.
Throws:
Thrown when the control is null.
Gets the user role associated with the specified control.
Returns: . The user role as a string.
Throws:
Thrown when the control is null.
Returns: .
Sets the source language for the translation feature associated with the specified control.
Throws:
Thrown when the control is null.
Sets the target language for the translation feature associated with the specified control.
Throws:
Thrown when the control is null.
control
The control to check.
control
The control to check.
control
The control to check.
control
The control from which to retrieve the source language.
control
The control from which to retrieve the target language.
control
The control from which to retrieve the user phrases.
control
The control from which to retrieve the user role.
control
control
control
control
value
control
value
control
value
control
The control to associate with the source language.
value
The source language as a string.
control
The control to associate with the target language.
value
The target language as a string.
control
phrases
control
value
Represents a provider that supplies tools.
Public Class SmartTextBoxAdapter
Inherits SmartAdapter
Implements IExtenderProvider
container
An IContainer that represents the container of the component.
disposing
true to release both managed and unmanaged resources; false to release only unmanaged resources.
image
The image to be processed.
stream
The stream containing the input data.
streamType
An optional string that specifies the type of data being processed. If not provided, a default value is used.
text
The text to be processed.
component
The component for which to get the field name.
component
component
The component for which to get the field rectangle.
e
The ParseValueArgs instance containing the event data.
e
The UpdateFieldArgs instance containing the event data.
control
The control to process.
component
The component for which to set the field name.
value
The name of the field to extract.
component
The component for which to set the field prompt.
value
The prompt to instruct the AI on how to extract the value for the field.
component
The component for which to set the field rectangle.
value
The rectangle that defines the area of the field to extract.
Represents a provider that supplies tools.
Public Class SmartDataEntryAdapter
Inherits SmartAdapter
Implements IExtenderProvider
Wisej.AI.Endpoints.OpenAIEndpointRealtime
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents a component that manages communication with OpenAI's real-time API endpoints.
OpenAIEndpointRealtime provides a convenient way to connect to and interact with OpenAI's real-time endpoints within a Wisej application. It extends SmartHttpEndpoint, inheriting HTTP communication capabilities and adding specialized logic for real-time data exchange with OpenAI services. This component can be used to send prompts, receive streaming responses, and manage session state with OpenAI's real-time APIs.
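A minimal configuration sketch follows. The property names (ApiKey, Model, TranscriptionModel) are assumptions inferred from the defaults listed on this page and from SmartHttpEndpoint's description; they may differ from the actual API.

```csharp
// Hypothetical sketch; property names are inferred from this page's
// descriptions and defaults, not confirmed signatures.
var endpoint = new OpenAIEndpointRealtime
{
    ApiKey = "sk-...",                        // assumed, inherited from SmartHttpEndpoint
    Model = "gpt-4o-realtime-preview",        // chat-completions model (page default)
    TranscriptionModel = "gpt-4o-transcribe"  // audio transcription model (page default)
};
```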
Initializes a new instance of the class with default settings.
: Gets or sets the model used for chat completions. (Default: "gpt-4o-realtime-preview")
: Gets or sets the model used for the audio transcription. (Default: "gpt-4o-transcribe")
: Gets or sets the base URL for the OpenAI API. (Default: "https://api.openai.com/v1")
: (Default: True)
Adds options to the message payload.
Sets default values for temperature and max tokens in the message payload.
Constructs the instruction string for the OpenAI real-time API endpoint based on the current session context.
Returns: . A string containing the fully constructed instructions for the OpenAI real-time API endpoint.
This method generates the instruction prompt by extracting the system message from the session, incorporating tool prompts, and replacing template placeholders such as {{tools}}, {{today}}, and {{language}} with their corresponding values. The resulting instruction string is then processed to replace any additional session parameters.
Override this method to customize how instructions are built for the OpenAI endpoint.
Builds the tools payload based on the session context.
Returns: . A JSON object representing the tools payload.
Throws:
Thrown when the session is null.
Creates the configuration payload for the OpenAI real-time API session.
Returns: . A dynamic object representing the configuration payload for the session.
This method builds a dynamic configuration object that includes options, tools, instructions, and modalities required by the OpenAI realtime API. If audio transcription is enabled, the configuration will also include the transcription model. The transcription_enabled property is removed from the final configuration payload.
Disables the OpenAIEndpointRealtime component, stopping it from processing or sending further requests.
This method deactivates the component, preventing any further communication with the OpenAI Realtime API endpoint until is called.
Releases the unmanaged resources used by the component and optionally releases the managed resources.
Enables the OpenAIEndpointRealtime component, allowing it to process and send requests.
This method activates the component, allowing it to resume communication with the OpenAI Realtime API endpoint after being disabled.
Constructs the API URL for the real-time API.
Returns: . The full API URL for the real-time API.
Asynchronously retrieves an ephemeral token for the specified session from the OpenAI realtime API endpoint.
Returns: . A task that represents the asynchronous operation. The task result contains a dynamic object representing the ephemeral token response from the API.
This method constructs a request to the OpenAI real-time API endpoint to obtain a temporary (ephemeral) token associated with the provided session. The token is typically used for short-lived authentication or authorization purposes during real-time communication. The method sends a POST request with the session configuration and parses the response as a dynamic object.
Mutes the OpenAIEndpointRealtime component, suppressing any audio or notification output.
Use this method to temporarily silence the component without disabling its core functionality. To restore audio or notifications, call .
Raises the event.
This method is called to notify subscribers that a function call has been detected in the AI's response. Override this method to provide custom handling for function calls.
Raises the event.
Raises the event.
Raises the event.
Reads and updates the usage statistics from the OpenAI real-time API reply.
Sends an array of messages to the OpenAI Realtime API endpoint for processing.
Use this method to transmit one or more messages to the OpenAI Realtime API endpoint. Each message should conform to the expected structure required by the endpoint.
Initiates listening for real-time events from the OpenAI endpoint using the specified session.
This method establishes a connection to the OpenAI real-time API endpoint by obtaining an ephemeral token for the provided session and starting the listening process. It also triggers the OnStart event for the session and updates the associated component in the Wisej application.
If session is null, an ArgumentNullException is thrown.
Stops listening for real-time events from the OpenAI endpoint for the specified session.
This method terminates the connection to the OpenAI real-time API endpoint for the given session. It also triggers the OnDone event for the session to indicate that listening has stopped.
If session is null, an ArgumentNullException is thrown.
Unmutes the OpenAIEndpointRealtime component, restoring audio or notification output.
This method re-enables audio or notifications that were previously suppressed by .
Occurs when a function call is detected in the response from the OpenAI Realtime API.
This event is triggered when the SmartRealtimeAdapter identifies a function call in the AI's response, allowing the application to handle or execute the function as needed.
Occurs when a new response is created by the OpenAI real-time endpoint.
Occurs when the response from the OpenAI real-time endpoint has been fully received and processing is complete.
Occurs when a transcription operation has completed.
message
The message payload to modify.
session
The current session context.
session
The SmartSession instance containing the session context and parameters used to build the instruction string.
session
The session context.
session
The SmartSession instance containing session-specific options and settings.
disposing
true to release both managed and unmanaged resources; false to release only unmanaged resources.
session
The SmartSession instance representing the current user session for which the ephemeral token is requested.
e
A FunctionCallEventArgs instance containing the event data.
e
An EventArgs instance containing the event data.
e
A ResponseDoneEventArgs instance containing the event data for the completed response.
e
A TranscriptionCompletedEventArgs instance containing the event data for the completed transcription.
message
The message object to update with usage data.
reply
The dynamic object containing the API response, expected to include input_tokens and output_tokens properties.
messages
An array of dynamic message objects to be sent to the OpenAI endpoint.
session
The SmartSession instance representing the current user session. This parameter cannot be null.
session
The SmartSession instance representing the current user session. This parameter cannot be null.
Public Class OpenAIEndpointRealtime
Inherits SmartHttpEndpoint
public class OpenAIEndpointRealtime : SmartHttpEndpoint
Wisej.AI.SmartAdapter
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents an abstract base class for creating smart adapters that interact with AI endpoints.
public class SmartAdapter : Component, IToolProvider
Public Class SmartAdapter
Inherits Component
Implements IToolProvider
Initializes a new instance of the class.
Initializes a new instance of the class attached to an implementation to allow for disposing the SmartAdapter when the service container is disposed.
: Gets the collection of agents associated with this adapter.
: Gets or sets a value indicating whether the adapter should automatically run. (Default: True)
: Gets or sets a value indicating whether the adapter should automatically update the browser when is done processing (requires a working WebSocket connection). (Default: True)
: Gets or sets a value indicating whether the adapter is busy. (Default: False)
: Gets the list of controls associated with this adapter.
: Gets a value indicating whether the adapter has any tools.
: Gets or sets the associated with this adapter.
: Gets or sets the name of the adapter.
: Gets the collection of parameters associated with this adapter.
: Gets the in use by the adapter.
: Gets or sets the system prompt used by the adapter. (Default: "")
: Gets the collection of tools associated with this adapter.
: Gets the usage metrics for the session.
Clears all tools from the adapter.
Creates a new session with an optional prompt.
Returns: . A new instance.
Creates a new session of a specified type with an optional prompt.
Returns: . A new session of type T .
Releases the unmanaged resources used by the and optionally releases the managed resources.
Returns the JSON string contained in the message by stripping the enclosing code-fence markers (```json and ```) if present.
Returns: . JSON string.
Determines whether a control is associated with the adapter.
Returns: . True if the control is associated; otherwise, false.
Notifies all agents asynchronously with a message.
Returns: . A task representing the asynchronous operation.
Raises the event.
Called when a control is created.
Called when a control is disposed.
Raises the event.
Raises the event.
Registers a control with the adapter.
Removes a tool from the adapter.
Returns: .
Removes the tools from the specified object.
Returns: . The current instance.
Runs the adapter asynchronously.
Returns: . A task representing the asynchronous operation.
Unregisters a control from the adapter.
Adds a tool to the adapter.
Returns: .
Adds multiple tools to the adapter from a target object.
Returns: .
Occurs when the busy state of the adapter changes.
Occurs when the adapter has completed processing.
Occurs when the adapter starts processing.
Adds semantic filtering to the auto-complete functionality.
Turns the control into an AI-powered assistant. It can control and navigate an application, click menu items, navigation bar items, buttons, etc. It can also invoke methods in your application as needed (see ).
Enhances all the controls in the associated container with the AI-powered capability to extract structured data from unstructured text.
Represents a document adapter that can perform AI tasks using a document as a data source and interact with the user through a ChatBox control.
Converts unstructured text into a structured .NET object.
Represents an adapter for a PictureBox that generates images based on a description using the OpenAI DALL-E endpoint.
Represents an adapter that generates a data set from a database schema and a user-provided description.
Represents a smart adapter that provides real-time data processing capabilities for use with the endpoint.
Enhances the ChatBox control to allow seamless PDF report queries using an AI provider.
Enhances a TextBox control with several AI features, including suggestions, translation, and auto-correction.
container
An IContainer that represents the container of the component.
prompt
The prompt to use for the session.
T
The type of session to create.
systemPrompt
The system prompt to use for the session.
disposing
True to release both managed and unmanaged resources; false to release only unmanaged resources.
message
Message with the response text that may be a JSON string.
control
The control to check.
message
The message to notify agents with.
e
An EventArgs that contains the event data.
control
The control that was created.
control
The control that was disposed.
e
An EventArgs that contains the event data.
e
An EventArgs that contains the event data.
control
The control to register.
tool
The tool to remove.
target
The target object containing tools.
control
The control to run the adapter on.
control
The control to unregister.
tool
The tool to add.
target
The target object containing tools.
Transcribes the audio file of the associated Audio control to its Text property in the original language of the audio (speech-to-text).
Represents a smart calendar adapter that extends the functionality of a SmartAdapter.
Represents an adapter that enhances a chart control with AI features.
Represents an adapter that enhances a ChartJS control with several AI features.
Represents a provider that supplies tools.
Wisej.AI.SmartEndpoint
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents an abstract base class for a smart endpoint component that interacts with AI models.
This class provides properties and methods to configure and interact with AI models, including handling messages, building payloads, and managing tool invocations.
Wisej.AI.SmartSession
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents a session that manages interactions with a smart hub and endpoint, handling prompts, messages, and tools within the session context.
Initializes a new instance of the SmartEndpoint class.
Int32: Gets or sets the size of the context window in tokens. (Default: 32000)
String: Gets or sets the embedding model used by the endpoint. (Default: null)
Int32: Gets or sets the maximum number of output tokens. (Default: 4096)
String: Gets or sets the model used by the endpoint. (Default: null)
Object: Gets the model options for the endpoint.
String: Gets or sets the name of the endpoint.
String: Gets or sets the system prompt for the endpoint.
Warning, setting this property entirely overrides the internal system prompt.
String: Gets or sets the tools prompt for the endpoint.
Warning, setting this property entirely overrides the internal tools prompt.
Metrics: Gets the usage metrics for the endpoint.
Boolean: Gets or sets a value indicating that the endpoint should use the native tools payload when adding tools to the request. (Default: False)
Adds messages to the payload object.
payload
The payload object to update.
session
The session context.
messages
The list of messages to add.
Throws:
ArgumentNullException Thrown when the session, payload, or messages are null.
Adds model options to the message object.
message
The message object to update.
session
The session context.
Throws:
ArgumentNullException Thrown when the message is null.
Asynchronously sends a message to the AI model and returns the response.
session
The session context.
messages
The list of messages to send.
Returns: Task<Message>. A task representing the asynchronous operation, with a Message as the result.
This method must be implemented by derived classes to handle the specific logic for sending messages to the AI model.
Asynchronously requests embeddings for the given inputs.
inputs
The array of input strings.
Returns: Task<Embedding>. A task representing the asynchronous operation, with an Embedding as the result.
Throws:
NotImplementedException Thrown when the method is not implemented.
Builds an assistant Message from the given response.
response
The response to build from.
Throws:
ArgumentNullException Thrown when the response is null.
Builds a payload for requesting embeddings.
inputs
The array of input strings.
Returns: Object. An object representing the embeddings payload.
Throws:
NotImplementedException Thrown when the method is not implemented.
Builds a message object from the given Message.
message
The message to build from.
Returns: Object. An object representing the built message.
Throws:
ArgumentNullException Thrown when the message is null.
Builds the tools namespace prompt based on the session context.
session
The session context.
Returns: String. A string representing the tools prompt.
Throws:
ArgumentNullException Thrown when the session is null.
Builds a string representation of tool parameters.
parameters
The array of parameters to build from.
Returns: String. A string representing the tool parameters.
session
The session context.
messages
The list of messages to include in the payload.
Returns: Object. An object representing the built payload.
Throws:
ArgumentNullException Thrown when the session or messages are null.
Builds the messages containing tool results.
toolResults
Builds a message containing tool results.
toolResults
The array of tools with results.
Builds the tools payload based on the session context.
session
The session context.
Returns: Object[]. A JSON object representing the tools payload.
Throws:
ArgumentNullException Thrown when the session is null.
Builds the tools prompt based on the session context.
session
The session context.
Returns: String. A string representing the tools prompt.
Throws:
ArgumentNullException Thrown when the session is null.
Gets the current language based on the session context.
session
The session context.
Returns: String. A string representing the current language.
Gets a description of the current date.
Returns: String. A string representing the current date.
Gets the tools to invoke based on the session and message context.
session
The session context.
message
The message context.
Returns: ToolContext[]. An array of ToolContext representing the tools to invoke.
Throws:
ArgumentNullException Thrown when the session or message is null.
Reads the assistant message from the response and updates the message object.
response
The response containing the assistant message.
message
The message object to update.
Reads the embeddings response and returns the embeddings data.
response
The response containing embeddings data.
Returns: Single[][]. A jagged array of floats representing the embeddings.
Throws:
NotImplementedException Thrown when the method is not implemented.
Reads the usage information from reply.
message
The message containing usage information.
reply
The reply object to update with usage information.
Represents an endpoint for connecting to Amazon Bedrock services.
Represents an endpoint for connecting to Anthropic services.
Represents an endpoint for connecting to Azure AI services, specifically designed to interact with OpenAI models.
Represents a connection to Cerebras endpoints, inheriting from OpenAIEndpoint.
Represents a connection to DeepSeek endpoints, inheriting from OpenAIEndpoint.
Represents a connection to Google AI endpoints for generating content and embeddings.
Public Class SmartEndpoint
Inherits Component
public class SmartEndpoint : Component
Public Class SmartSession
Inherits SmartObject
The SmartSession class is responsible for managing the lifecycle of a session, including initialization, message handling, and tool usage. It supports asynchronous operations for asking questions and processing responses, and it can handle context overflow through truncation or summarization strategies.
Initializes a new instance of the SmartSession class with the specified hub and optional system prompt.
hub
The smart hub associated with the session.
systemPrompt
The optional system prompt for the session. Default is null.
Throws:
ArgumentNullException Thrown when the hub is null.
Initializes a new instance of the SmartSession class with the specified endpoint and optional system prompt.
endpoint
The smart endpoint associated with the session.
systemPrompt
The optional system prompt for the session. Default is null.
Throws:
ArgumentNullException Thrown when the endpoint is null.
SmartEndpoint: Gets the smart endpoint associated with the session.
Boolean: Gets a value indicating whether the session has model options.
SmartHub: Gets the smart hub associated with the session.
Boolean: Gets a value indicating whether the session has been disposed.
MessageCollection: Gets the collection of messages in the session.
Object: Gets or sets the model options for the session.
Message: Gets the last response message from the assistant.
SmartPrompt: Gets the system prompt for the session.
Asynchronously asks a question and returns the response message.
question
The question to ask.
image
An optional image associated with the question. Default is null.
Returns: Task<Message>. A task representing the asynchronous operation, with a Message as the result.
Throws:
ObjectDisposedException Thrown when the session is disposed.
ArgumentNullException Thrown when the question is null.
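The question/response flow described above can be sketched as follows. The SmartSession constructor taking an endpoint and system prompt is documented on this page; the AskAsync method name is an assumption based on the "asynchronously asks a question" description and may differ from the actual API.

```csharp
// Hypothetical sketch; AskAsync is an assumed name for the method
// described above as "asynchronously asks a question and returns
// the response message".
var session = new SmartSession(endpoint, "You are a helpful assistant.");
var answer = await session.AskAsync("Summarize today's sales figures.");
Console.WriteLine(answer.Text);
```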
Asynchronously asks a question using a message and returns the response message.
message
The message containing the question.
Returns: Task<Message>. A task representing the asynchronous operation, with a Message as the result.
Throws:
ObjectDisposedException Thrown when the session is disposed.
ArgumentNullException Thrown when the message is null.
Asynchronously sends a question to the AI and processes the response.
question
The message containing the question to be sent to the AI.
Returns: Task<Message>. A task representing the asynchronous operation, with a Message as the result containing the AI's response.
This method handles the communication with the AI, including managing the session's message context, handling tool calls, and processing AI responses. It ensures that the session is not disposed and that the question is not null before proceeding. The method also manages the context window size and handles exceptions such as token limit exceedance.
Raises the ConvertParameter event.
value
Returns: String.
Releases all resources used by the SmartSession.
Gets the value of a parameter.
parameter
The parameter to get the value for.
Returns: String. The value of the parameter as a string.
Returns: Boolean.
Prepares session messages by replacing parameters in the system and user messages.
messages
The collection of messages to prepare.
Removes a tool from the smart session.
tool
The tool to remove.
Returns: SmartSession. The current SmartSession instance.
Throws:
ArgumentNullException Thrown when the tool is null.
Replaces parameters in the given prompt with their values.
prompt
The prompt containing parameters to replace.
Returns: String. The prompt with parameters replaced by their values.
Trims messages in the session based on the specified truncation strategy.
messages
The collection of messages to trim.
Returns: Task.
Registers a tool using a delegate.
tool
The delegate representing the tool to register.
Returns: SmartSession. The current SmartSession instance.
Throws:
ArgumentNullException Thrown when the tool is null.
Registers the tools declared on the target.
target
The target object containing tools to register.
Returns: SmartSession. The current SmartSession instance.
public class SmartSession : SmartObject
Wisej.AI.MarkupExtensions
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Provides extension methods for configuring and handling events on SmartEndpoint, SmartHttpEndpoint, SmartHub, and SmartSession objects.
public class MarkupExtensions
Public Class MarkupExtensions
The MarkupExtensions class offers a fluent API for setting properties and subscribing to events on various smart endpoint and session types. These extension methods enable concise and readable configuration of endpoints and event handlers.
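A fluent-configuration sketch illustrating the intent of these extensions follows. The method names (SetUrl, SetApiKey, SetModel, OnError) are assumptions derived from the "Sets the URL/API key/model name" and "Subscribes to the Error event" descriptions below, not confirmed signatures.

```csharp
// Hypothetical sketch; extension method names are inferred from the
// descriptions on this page and may differ from the actual API.
var endpoint = new OpenAIEndpointRealtime()
    .SetUrl("https://api.openai.com/v1")
    .SetApiKey("sk-...")
    .SetModel("gpt-4o-realtime-preview")
    .OnError(e => Console.WriteLine(e)); // assumed Error-event subscription
```

Each extension returns the configured instance, which is what makes this chained style possible.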
Sets the API key for the specified HTTP endpoint.
Returns: . The configured HTTP endpoint instance.
Sets the authentication string for the specified HTTP endpoint.
Returns: . The configured HTTP endpoint instance.
Sets the context window size for the specified endpoint.
Returns: . The configured endpoint instance.
Sets the embedding model for the specified endpoint.
Returns: . The configured endpoint instance.
Sets the HTTP headers for the specified HTTP endpoint.
Returns: . The configured HTTP endpoint instance.
Sets the maximum number of output tokens for the specified endpoint.
Returns: . The configured endpoint instance.
Sets the model name for the specified endpoint.
Returns: . The configured endpoint instance.
Sets the name for the specified endpoint.
Returns: . The configured endpoint instance.
Subscribes to the AfterInvokeTool event on the specified object.
Returns: . The configured instance.
Subscribes to the AfterResponseReceived event on the specified object.
Returns: . The configured instance.
Subscribes to the BeforeInvokeTool event on the specified object.
Returns: . The configured instance.
Subscribes to the BeforeSendRequest event on the specified object.
Returns: . The configured instance.
Subscribes to the ConvertParameter event on the specified object.
Returns: . The configured instance.
Subscribes to the Done event on the specified object.
Returns: . The configured instance.
Subscribes to the Error event on the specified object.
Returns: . The configured instance.
Subscribes to the PrepareMessage event on the specified object.
Returns: . The configured instance.
Subscribes to the Start event on the specified object.
Returns: . The configured instance.
Sets the retry delay (in milliseconds) for the specified HTTP endpoint.
Returns: . The configured HTTP endpoint instance.
Sets the system prompt for the specified endpoint.
Returns: . The configured endpoint instance.
Sets the Text for the specified SmartPrompt.
Returns: . The configured SmartPrompt instance.
Sets the URL for the specified HTTP endpoint.
Returns: . The configured HTTP endpoint instance.
Specifies whether to use native tools for the specified endpoint.
Returns: . The configured endpoint instance.
Wisej.AI.Endpoints.SmartHttpEndpoint
Namespace: Wisej.AI.Endpoints
Assembly: Wisej.AI (3.5.0.0)
Represents an abstract base class for a smart HTTP endpoint, providing common functionality for HTTP-based communication.
public class SmartHttpEndpoint : SmartEndpoint
This class is designed to facilitate communication with HTTP endpoints by managing authentication, headers, and retry logic. It provides methods for sending requests and handling responses, including error detection and retry mechanisms.
Initializes a new instance of the class.
: Gets or sets the API key used for authentication. (Default: null)
: Gets or sets the authentication scheme used for requests. (Default: "Bearer")
: Gets or sets the collection of additional headers to be included in requests.
: Gets or sets the maximum number of retry attempts for failed requests. (Default: 10)
: Gets or sets the delay between retry attempts in milliseconds. (Default: 10000)
: Gets or sets the base URL of the endpoint. (Default: null)
The URL should not end with a trailing slash. If a trailing slash is present, it will be removed.
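The properties listed above can be set together when configuring a concrete endpoint. The property names in this sketch (ApiKey, AuthenticationScheme, MaxRetries, RetryDelay, URL) and the CustomHttpEndpoint subclass are assumptions inferred from the descriptions and defaults on this page.

```csharp
// Hypothetical sketch; CustomHttpEndpoint is an assumed SmartHttpEndpoint
// subclass, and property names are inferred from the descriptions above.
var endpoint = new CustomHttpEndpoint
{
    ApiKey = "sk-...",
    AuthenticationScheme = "Bearer",     // page default
    MaxRetries = 10,                     // page default
    RetryDelay = 10000,                  // page default, in milliseconds
    URL = "https://api.example.com/v1"   // no trailing slash
};
```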
Asynchronously sends a request to the endpoint and returns the response message.
Returns: . A task representing the asynchronous operation, with a as the result.
Throws:
ArgumentNullException Thrown when session or messages is null.
Asynchronously requests embeddings for the specified inputs.
Returns: . A task representing the asynchronous operation, with an as the result.
This method sends the inputs to the embeddings endpoint and returns the resulting embeddings.
Throws:
Thrown when inputs is null.
Thrown when the embedding model is not defined.
Builds the payload for the request.
Returns: . The constructed payload object.
Throws:
Thrown when session or messages is null.
Creates the HTTP content from the given data.
Returns: . The created .
Detects if the context limit has been exceeded in the response.
Returns: . true if the context limit is exceeded; otherwise, false.
Detects if the rate limit has been exceeded in the response.
Returns: . true if the rate limit is exceeded; otherwise, false.
Gets the API key for the endpoint.
Returns: . The API key as a string.
Gets the API URL for the endpoint.
Returns: . The API URL as a string.
Gets the URL for embeddings.
Returns: . The embeddings URL as a string.
Throws:
Thrown when the method is not implemented.
Asynchronously sends a POST request to the specified URL with the given data.
Returns: . A task representing the asynchronous operation, with a as the result.
This method handles retries and error detection, including token and rate limit exceeded exceptions.
Throws:
Thrown when the token limit is exceeded.
Thrown when the rate limit is exceeded.
Thrown when the response status code is not OK.
Reads the assistant's message from the API response.
Throws:
Thrown when the response or message is null.
Reads the usage statistics from the API response.
// Example usage:
SmartEndpoint endpoint = new CustomSmartEndpoint();
var response = await endpoint.AskAsync(session, messages);
// Example usage:
var reply = await AskAsyncCore(new Message { Text = "What is the weather today?" });
Console.WriteLine(reply.Text);
T
A type derived from SmartHttpEndpoint.
endpoint
The HTTP endpoint to configure.
value
The API key.
T
A type derived from SmartHttpEndpoint.
endpoint
The HTTP endpoint to configure.
value
The authentication string.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
The context window size.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
The embedding model name.
T
A type derived from SmartHttpEndpoint.
endpoint
The HTTP endpoint to configure.
value
A NameValueCollection containing the headers.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
The maximum number of output tokens.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
The model name to assign.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
The name to assign.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised, receiving the event arguments.
T
A type derived from SmartObject.
object
The SmartObject to subscribe to.
action
The action to invoke when the event is raised.
T
A type derived from SmartHttpEndpoint.
endpoint
The HTTP endpoint to configure.
value
The retry delay in milliseconds.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
The system prompt text.
T
A type derived from SmartPrompt.
prompt
The SmartPrompt to configure.
value
The text string.
T
A type derived from SmartHttpEndpoint.
endpoint
The HTTP endpoint to configure.
value
The URL to assign.
T
A type derived from SmartEndpoint.
endpoint
The endpoint to configure.
value
True to use native tools; otherwise, false.
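The extension methods documented above all return the endpoint they configure, which allows fluent chaining. A minimal sketch, assuming hypothetical method names inferred from the parameter tables (the actual names are not visible in this extract):

```csharp
// Hedged sketch: method names (UseURL, UseApiKey, UseModel, UseMaxOutputTokens,
// UseSystemPrompt) are assumptions inferred from the parameter tables above,
// not confirmed Wisej.AI signatures.
var endpoint = new OpenAIEndpoint()
    .UseURL("https://api.openai.com/v1")                 // the URL to assign
    .UseApiKey(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .UseModel("gpt-4o-mini")                             // the model name to assign
    .UseMaxOutputTokens(1024)                            // maximum number of output tokens
    .UseSystemPrompt("You are a helpful assistant.");
```

Because each method is generic on T (the concrete endpoint type), chaining preserves the endpoint's static type.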
https://console.groq.com/docs/openai
Represents a connection to GroqCloud speech endpoints for audio transcription.
Represents a connection to HuggingFace serverless endpoints for model inference and embeddings.
Represents an endpoint that uses the transformers.js module in the user's browser to provide AI services to Wisej.AI components.
Represents a connection to LocalAI endpoints, providing access to various AI models and services.
Represents an endpoint for connecting to LocalAI Image Generation services.
Represents an endpoint for connecting to LocalAI's speech services.
Represents an endpoint for connecting to LocalAI's Whisper model for speech-to-text transcriptions.
Represents a connection to NVIDIA AI endpoints, providing access to various AI models and services.
Represents an endpoint that connects to Ollama services, providing functionalities for chat and embeddings.
Represents an endpoint for connecting to OpenAI services.
Represents an endpoint for connecting to OpenAI DallE services.
Represents a component that manages communication with OpenAI's real-time API endpoints.
Represents an endpoint for connecting to OpenAI's speech services.
Represents an endpoint for connecting to OpenAI's Whisper model for speech-to-text transcriptions.
Represents a connection to SambaNova endpoints, inheriting from OpenAIEndpoint.
Represents an abstract base class for a smart HTTP endpoint, providing common functionality for HTTP-based communication.
Represents a connection to TogetherAI endpoints, providing access to various AI models and services.
Represents a connection to X.AI endpoints, extending the functionality of OpenAIEndpoint.
https://console.groq.com/docs/openai
Represents a connection to GroqCloud speech endpoints for audio transcription.
Represents a connection to HuggingFace serverless endpoints for model inference and embeddings.
Represents a connection to LocalAI endpoints, providing access to various AI models and services.
Represents an endpoint for connecting to LocalAI Image Generation services.
Represents an endpoint for connecting to LocalAI's speech services.
Represents an endpoint for connecting to LocalAI's Whisper model for speech-to-text transcriptions.
Represents a connection to NVIDIA AI endpoints, providing access to various AI models and services.
Represents an endpoint that connects to Ollama services, providing functionalities for chat and embeddings.
Represents an endpoint for connecting to OpenAI services.
Represents an endpoint for connecting to OpenAI DallE services.
Represents a component that manages communication with OpenAI's real-time API endpoints.
Represents an endpoint for connecting to OpenAI's speech services.
Represents an endpoint for connecting to OpenAI's Whisper model for speech-to-text transcriptions.
Represents a connection to SambaNova endpoints, inheriting from OpenAIEndpoint.
Represents a connection to TogetherAI endpoints, providing access to various AI models and services.
Represents a connection to X.AI endpoints, extending the functionality of OpenAIEndpoint.
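All of the provider classes listed above ultimately derive from SmartEndpoint, so they are interchangeable wherever a SmartEndpoint is expected. A hedged sketch of switching providers (constructor shapes are assumptions):

```csharp
// Hedged sketch: because every provider class derives from SmartEndpoint,
// switching between a local and a hosted model is a one-line change.
// Constructor arguments are assumptions.
SmartEndpoint endpoint = useLocalModel
    ? new OllamaEndpoint()      // local inference via Ollama
    : new OpenAIEndpoint();     // hosted OpenAI service

var response = await endpoint.AskAsync(session, messages);
```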
session
The session managing the AI interaction.
messages
The list of messages to be sent.
inputs
The array of input strings for which embeddings are requested.
session
The session managing the AI interaction.
messages
The list of messages to be included in the payload.
data
The data to be serialized into the content.
response
The HTTP response message.
response
The HTTP response message.
url
The URL to send the request to.
data
The data to be sent in the request body.
response
The response received from the API.
message
The message object to populate with the response content.
message
The message object to update with usage data.
reply
The dynamic object containing the API response data.
Represents an endpoint for connecting to Amazon Bedrock services.
Represents an endpoint for connecting to Anthropic services.
Represents an endpoint for connecting to Azure AI services, specifically designed to interact with OpenAI models.
Represents a connection to Cerebras endpoints, inheriting from OpenAIEndpoint.
Represents a connection to DeepSeek endpoints, inheriting from OpenAIEndpoint.
Represents a connection to Google AI endpoints for generating content and embeddings.
Public Class SmartHttpEndpoint
Inherits SmartEndpoint
Wisej.AI.SmartHub
Namespace: Wisej.AI
Assembly: Wisej.AI (3.5.0.0)
Represents a SmartHub component that provides AI capabilities to controls within a container.
public class SmartHub : SmartObject, IExtenderProvider, IToolProvider
The SmartHub class is an extender provider that allows controls to be extended with AI functionalities. It manages various services and tools to facilitate AI operations, such as embedding generation, document conversion, and similarity queries.
Initializes a new instance of the class.
Initializes a new instance of the class with a specified container control.
Throws:
Thrown when the containerControl is null.
Initializes a new instance of the class attached to an implementation.
: Gets the binding context for the SmartHub.
: Returns or sets the container that provides the binding context for data binding. (Default: null)
: Gets or sets the culture information for the SmartHub. (Default: null)
: Gets or sets a data source that can be used to resolve prompt parameters. (Default: null)
: Gets or sets the default endpoint. (Default: null)
: Gets a value indicating whether the SmartHub is disposed.
Asynchronously asks a question and returns a response message.
Returns: A task representing the asynchronous operation, with the response message as the result.
Throws:
Thrown when the question is null.
Releases the unmanaged resources used by the SmartHub and optionally releases the managed resources.
Asynchronously generates an embedding for a specified text.
Returns: A task representing the asynchronous operation, with the embedding as the result.
Asynchronously generates an embedding for a specified array of text chunks.
Returns: A task representing the asynchronous operation, with the embeddings as the result.
Throws:
Thrown when the chunks array is null.
Gets the adapter associated with a specified control.
Returns: The adapter associated with the control, or null if none exists.
Throws:
Thrown when the control is null.
Gets the adapter of a specified type associated with a specified control.
Returns: The adapter of type T associated with the control, or null if none exists.
Throws:
Thrown when the control is null.
Gets the AI properties for a specified control.
Returns: The AI properties associated with the control.
Gets the value of a specified parameter.
Returns: The value of the parameter as a string.
Throws:
Thrown when the parameter is null.
Asynchronously ingests a document from a specified file path.
Returns: A task representing the asynchronous operation, with the ingestion result as the result.
This method utilizes several internal services to perform its operations.
Throws:
Thrown when the filePath or documentName is null.
Asynchronously ingests a document from a specified stream.
Returns: A task representing the asynchronous operation, with the ingestion result as the result.
This method utilizes several internal services to perform its operations.
Throws:
Thrown when the stream or documentName is null.
Removes a specified adapter from the SmartHub.
Throws:
Thrown when the adapter is null.
Removes a tool from the SmartHub.
Returns: The current instance.
Throws:
Thrown when the tool is null.
Asynchronously performs a similarity query on a specified query and text chunks.
Returns: A task representing the asynchronous operation, with an array of strings as the result.
Throws:
Thrown when the query or chunks are null.
Asynchronously performs a similarity query on a specified query, text chunks, and vectors.
Returns: A task representing the asynchronous operation, with an array of strings as the result.
Throws:
Thrown when the query, chunks, or vectors are null.
Asynchronously calculates similarity scores for a specified query and text chunks.
Returns: A task representing the asynchronous operation, with an array of floats as the result.
Throws:
Thrown when the query or chunks are null.
Asynchronously calculates similarity scores for a specified query, text chunks, and embeddings.
Returns: A task representing the asynchronous operation, with an array of floats as the result.
Throws:
Thrown when the query, chunks, or embeddings are null.
Adds a tool to the SmartHub.
Returns: The current instance.
Throws:
Thrown when the tool is null.
Uses tools from a specified target.
Returns: The current instance.
overwrite
Whether to overwrite an existing document. Default is false.
metadata
The metadata for the document. Default is null.
computeSimilarity
The function to compute similarity. Default is null.
minSimilarity
The minimum similarity threshold.
computeSimilarity
The function to compute similarity. Default is null.
containerControl
The container control to which the SmartHub is attached.
container
An IContainer that represents the container of the SmartHub extender.
question
The question to ask.
image
An optional image to include in the question. Default is null.
systemPrompt
Optional system prompt.
disposing
true to release both managed and unmanaged resources; false to release only unmanaged resources.
text
The text to embed.
chunks
The text chunks to embed.
control
The control for which to get the adapter.
T
The type of the adapter.
control
The control for which to get the adapter.
control
The control for which to get the AI properties.
parameter
The parameter for which to get the value.
filePath
The file path of the document to ingest.
documentName
The name of the document.
collectionName
The name of the collection. Default is null.
metadata
The metadata for the document. Default is null.
stream
The stream of the document to ingest.
documentName
The name of the document.
collectionName
The name of the collection. Default is null.
overwrite
Whether to overwrite an existing document. Default is false.
adapter
The adapter to remove.
tool
The tool to remove.
query
The query text.
chunks
The text chunks to compare against.
topN
The number of top results to return.
minSimilarity
The minimum similarity threshold.
query
The query text.
chunks
The text chunks to compare against.
vectors
The vectors to compare against.
topN
The number of top results to return.
query
The query text.
chunks
The text chunks to compare against.
computeSimilarity
The function to compute similarity. Default is null.
query
The query text.
chunks
The text chunks to compare against.
embeddings
The embeddings to compare against.
computeSimilarity
The function to compute similarity. Default is null.
tool
The tool to add.
target
The target from which to use tools.
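The SmartHub members documented above combine into a typical retrieval-augmented workflow: ingest a document, query for similar chunks, then ask a question. A minimal sketch, assuming hypothetical overload shapes reconstructed from the parameter tables (member names such as IngestDocumentAsync and SimilarityQueryAsync are assumptions where not shown in this extract):

```csharp
// Hedged sketch of a typical SmartHub workflow. Member names below are
// assumptions based on the parameter tables above, not confirmed signatures.
var hub = new SmartHub(this)           // attach to a container control
{
    DefaultEndpoint = endpoint         // endpoint used by AskAsync
};

// Ingest a document from a file path into the default collection.
await hub.IngestDocumentAsync("manual.pdf", documentName: "Product Manual");

// Return the top 3 chunks most similar to the query.
string[] best = await hub.SimilarityQueryAsync(
    "How do I reset the device?", chunks, topN: 3, minSimilarity: 0.5f);

// Ask a question through the default endpoint.
var answer = await hub.AskAsync("Summarize the reset procedure.");
```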
Represents a provider that supplies tools.
Public Class SmartHub
Inherits SmartObject
Implements IExtenderProvider, IToolProvider