Vector Databases
How to use vector storage and queries
Overview
Wisej.AI can integrate with any vector database through an implementation of the IEmbeddingStorageService interface. While you don't need to interact with this interface directly in your code, any Wisej.AI tools or methods that require a vector database automatically retrieve the current implementation of IEmbeddingStorageService.
Specifically, both the DocumentSearchTool and the SmartHub.IngestDocumentAsync() method use this service.
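If you ever need the active storage service yourself, for example in a custom tool, you can resolve it from the application's service container. A minimal sketch, assuming Application.Services exposes a standard GetService<T>() accessor:

var storage = Application.Services.GetService<IEmbeddingStorageService>();
// "storage" is whichever implementation is currently registered:
// the built-in default or one you registered at startup.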
Collections
An important concept in vector databases is "collections." Wisej.AI utilizes collections to organize embedded documents into logical groups, akin to how tables are used in databases. Additionally, the name of a document may include a virtual path, similar to a namespace, preceding the document's name.
For instance, to store two documents with the same name but in different "folders," you can use a naming convention like this:
await this.smartHub1.IngestDocumentAsync(
    "C:\\Files\\2024\\AAPL-10K.pdf", "10Ks\\2024\\AAPL-10K.pdf");

await this.smartHub1.IngestDocumentAsync(
    "C:\\Files\\2023\\AAPL-10K.pdf", "10Ks\\2023\\AAPL-10K.pdf");
If the code does not specify a collection name, Wisej.AI defaults to using the name "default" (in lowercase). The example below illustrates how to store documents in different collections.
await this.smartHub1.IngestDocumentAsync(
    "C:\\Files\\2024\\AAPL-10K.pdf", "10Ks\\2024\\AAPL-10K.pdf", "Apple Docs");

await this.smartHub1.IngestDocumentAsync(
    "C:\\Files\\Logs\\ServiceLogs.txt", "ServiceLogs.txt", "Logs");
Metadata
Vector databases typically manage text chunks along with their corresponding vectors, while any additional information is stored in a general metadata field. Wisej.AI automatically extracts specific values when converting documents using the IDocumentConversionService. However, you can add custom fields by passing a Metadata object to the IngestDocument method.
The conversion service automatically adds several fields, depending on the document type: "Title", "Author", "Subject", "Pages", and "Description". For more details, refer to the IDocumentConversionService page. In addition to these fields, the IngestDocument method adds: "FilePath", "CreationDate", "ModifiedDate", and "FileSize".
The following code demonstrates how to add custom metadata to an ingested document:
var metadata = new Metadata();
metadata["ServiceName"] = "W3WP-1";

await this.smartHub1.IngestDocumentAsync(
    "C:\\Files\\Logs\\ServiceLogs.txt", "ServiceLogs.txt", "Logs", true, metadata);
All metadata fields are made available to the AI as part of the RAG retrieval process when using DocumentSearchTools. If you use the IEmbeddingStorageService directly, you will find the metadata object as a property of the EmbeddedDocument instance.
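For illustration, this is roughly how you might read a custom field back when talking to the storage service directly. The GetService<T>() accessor and the lookup method shown here are assumptions; consult the actual IEmbeddingStorageService definition for its real query members:

// Hypothetical sketch: resolve the current storage service and read metadata back.
var storage = Application.Services.GetService<IEmbeddingStorageService>();

// GetDocumentAsync(collection, documentName) is an assumed method name.
EmbeddedDocument doc = await storage.GetDocumentAsync("Logs", "ServiceLogs.txt");

// Custom fields added at ingestion time are available on the Metadata property,
// alongside the fields added automatically by IngestDocumentAsync.
var serviceName = doc.Metadata["ServiceName"];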
Built-In
Unless you register a specific provider, Wisej.AI defaults to the built-in FileSystemEmbeddingStorageService. This implementation saves vectors in the file system at the location specified by FileSystemEmbeddingStorageService.StoragePath. The default path is "~\AI\Embeddings".
An easy alternative is the MemoryEmbeddingStorageService, which stores vectors in memory.
However, both implementations are intended for development purposes only and should not be used in production environments.
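For example, switching to the in-memory implementation during development follows the same registration pattern used for all the other providers:

internal static class Program
{
    static Program()
    {
        // Development only: vectors are kept in memory and lost on restart.
        Application.Services
            .AddOrReplaceService<IEmbeddingStorageService>(
                new MemoryEmbeddingStorageService());
    }
}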
Chroma DB
You can run Chroma either locally or on a virtual machine (VM) in a data center. The simplest way to run it is with the Docker image. For installation instructions, refer to the Chroma documentation.
With Chroma, you don't need to pre-create the index. Wisej.AI will automatically create the index if it doesn't already exist.
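Registering Chroma follows the same pattern as the other providers. A minimal sketch, assuming the provider class is named ChromaDBEmbeddingStorageService and takes the server URL (verify the class name and constructor in your Wisej.AI version; the URL below is Chroma's default Docker port):

internal static class Program
{
    static Program()
    {
        // Assumed class name; point the URL at your Chroma server.
        Application.Services
            .AddOrReplaceService<IEmbeddingStorageService>(
                new ChromaDBEmbeddingStorageService("http://localhost:8000"));
    }
}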
Currently, there isn't a well-established UI for Chroma, but you can try some free options available on GitHub. One we have used is fengzhichao/chromadb-admin. This tool only lets you view the collections created by Wisej.AI; for any administrative functions, you'll need tools like curl or Postman.

Pinecone
When working with Pinecone, you must first create the index Wisej.AI will use through the Pinecone dashboard. When setting up a new index, you only need to define the vector size and the metric. Always use "cosine" as the metric. The vector size is determined by the embedding model you plan to use.
Embedding models are not interchangeable. Therefore, once you create an index, it can only be used with the embedding model for which it was initially configured. In Wisej.AI, the default embedding model is text-embedding-3-small, which requires a vector with 1,536 dimensions.

To use Pinecone with Wisej.AI, you can register the service as follows:
internal static class Program
{
    static Program()
    {
        Application.Services
            .AddOrReplaceService<IEmbeddingStorageService>(
                new PineconeEmbeddingStorageService("<endpoint url>"));
    }
}
The endpoint URL is the service index host address as shown by Pinecone.
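For reference, a Pinecone index host usually looks like the hypothetical value below; copy the real value from the index page in the Pinecone dashboard:

Application.Services
    .AddOrReplaceService<IEmbeddingStorageService>(
        new PineconeEmbeddingStorageService(
            // Hypothetical host; use the actual value shown by Pinecone.
            "https://my-index-abc1234.svc.us-east-1-aws.pinecone.io"));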

Azure AI Search
When utilizing Azure AI Search with Wisej.AI, you must first create the index you'll be working with. Since Azure AI Search starts with a blank schema, you need to define all the necessary fields. Refer to the table and JSON file below for a comprehensive list of required fields.
Field            Type                      Attributes
🔑 id            Edm.String                Retrievable, Filterable
master           Edm.Boolean               Retrievable, Filterable
documentName     Edm.String                Retrievable, Filterable
⚡ vector        Collection(Edm.Single)    Retrievable, Searchable
collectionName   Edm.String                Retrievable, Filterable
metadata         Edm.String                Retrievable, Filterable
chunk            Edm.String                Retrievable, Filterable

(🔑 marks the index key; ⚡ marks the vector field.)
Download the JSON definition below to create the index.
The field that requires particular attention is vector, where embeddings are stored and searched. When defining this field, select Collection(Edm.Single) and ensure that both the Retrievable and Searchable options are enabled. Additionally, you must specify the Dimensions, which indicate the size of the array based on the embedding model used.
Embedding models are not interchangeable. Therefore, once you create an index, it can only be used with the embedding model for which it was initially configured. In Wisej.AI, the default embedding model is text-embedding-3-small, which requires a vector with 1,536 dimensions.
This is what a created index looks like:

To use Azure AI Search with Wisej.AI, you can register the service as follows:
internal static class Program
{
    static Program()
    {
        Application.Services
            .AddOrReplaceService<IEmbeddingStorageService>(
                new AzureAISearchEmbeddingStorageService("<endpoint url>"));
    }
}
The endpoint URL is the service endpoint concatenated with /indexes/<index name>. For example, our tests use `https://aisearchwisej.search.windows.net/indexes/wisejtest`.
Qdrant
Qdrant offers flexible deployment options: you can run it locally using the provided Docker image, or use Qdrant Cloud, which includes a free tier hosted on Amazon AWS or Google Cloud.
With Qdrant, you don't need to pre-create the collection. Wisej.AI will automatically create it if it doesn't already exist.
Qdrant Cloud offers an intuitive control panel that allows users to inspect their collections and execute queries directly within the interface.
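Registration again follows the common pattern. A sketch, assuming the provider class is named QdrantEmbeddingStorageService (verify the name and constructor in your Wisej.AI version; the URL below is Qdrant's default local REST port):

internal static class Program
{
    static Program()
    {
        // Assumed class name; point the URL at your local server or Qdrant Cloud endpoint.
        Application.Services
            .AddOrReplaceService<IEmbeddingStorageService>(
                new QdrantEmbeddingStorageService("http://localhost:6333"));
    }
}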

Custom Implementation
To use a different vector database with Wisej.AI, you have two options: use the database directly, or implement the IEmbeddingStorageService interface and register your vector database connection as a Wisej.NET service. When you register your database this way, it is seamlessly used by the DocumentSearchTools and SmartHub.IngestDocument() implementations.
You can use any of the following implementations as a reference or starting point for integrating additional providers and developing your own custom IEmbeddingStorageService implementation. These examples demonstrate the recommended structure and key considerations when extending Wisej.AI with custom storage service integrations.
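A minimal structural skeleton of a custom provider might look like the sketch below. The interface members are deliberately left as comments because they depend on your Wisej.AI version; the class and method bodies are hypothetical:

public class MyVectorDbEmbeddingStorageService : IEmbeddingStorageService
{
    // Implement the IEmbeddingStorageService members here: typically storing,
    // querying, and deleting EmbeddedDocument instances and their vectors
    // against your database's client SDK.
}

internal static class Program
{
    static Program()
    {
        // Once registered, DocumentSearchTools and SmartHub.IngestDocument()
        // will use this implementation automatically.
        Application.Services
            .AddOrReplaceService<IEmbeddingStorageService>(
                new MyVectorDbEmbeddingStorageService());
    }
}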
Embedding Generation
The generation of embeddings for chunks of text is handled by the IEmbeddingGenerationService. However, you have the option to generate embeddings directly using any other system or by replacing the service.
It's crucial to understand that embeddings generated with one model are not compatible with those generated by another model. Consequently, if you store documents and embeddings using a specific model and later change the IEmbeddingGenerationService or the model itself, all previously stored embeddings become unusable for queries with the new model. Changing the embedding generation approach therefore requires careful planning to ensure compatibility and continuity.
This is why Wisej.AI registers a single shared IEmbeddingGenerationService with a default implementation that uses the OpenAIEndpoint and the "text-embedding-3-small" model. We recommend installing a single service implementation at startup and consistently using the same one.
For instance, if you want to change the model used by the OpenAIEndpoint or use your own embedding server, refer to the following example:
static class Program
{
    static Program()
    {
        Application.Services
            .AddOrReplaceService<IEmbeddingGenerationService>(
                new DefaultEmbeddingGenerationService(
                    new OpenAIEndpoint { EmbeddingModel = "text-embedding-3-large" }));

        // Or
        Application.Services
            .AddOrReplaceService<IEmbeddingGenerationService>(
                new HuggingFaceEmbeddingGenerationService("http://ollama.myserver.com:8090"));

        // Or
        Application.Services
            .AddOrReplaceService<IEmbeddingGenerationService>(
                new DefaultEmbeddingGenerationService(
                    new TogetherAIEndpoint()));
    }
}
As with the IEmbeddingStorageService, Wisej.AI provides several built-in implementations of the IEmbeddingGenerationService. Additionally, you have the flexibility to create your own custom implementations to suit your specific requirements. The built-in services include:
DefaultEmbeddingGenerationService: uses the embedding endpoint of any SmartEndpoint that supports embedding functionality. By default, it is configured to use the OpenAIEndpoint. Please note, however, that not all AI providers offer embedding endpoints, so compatibility may vary depending on the provider you choose.
HuggingFaceEmbeddingGenerationService: leverages the HuggingFace Text Embedding Inference server to generate embeddings. You can easily deploy this server locally using the provided Docker container, allowing for flexible and scalable embedding generation within your own infrastructure.
Use the code below as a starting point to accelerate the development of your own custom service implementation; it demonstrates the essential structure needed to create a new service that integrates with the Wisej.AI framework. Both built-in implementations also handle parallel requests, allowing a service to process multiple embedding requests concurrently.
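A minimal sketch of such a service, illustrating the parallelization pattern. The member name and signature on IEmbeddingGenerationService are assumptions; match them to the actual interface definition in your Wisej.AI version:

using System;
using System.Linq;
using System.Threading.Tasks;

public class MyEmbeddingGenerationService : IEmbeddingGenerationService
{
    // Assumed member name and signature; verify against the real interface.
    public async Task<float[][]> GenerateEmbeddingsAsync(string[] chunks)
    {
        // Fan out one request per chunk so they run concurrently;
        // Task.WhenAll preserves the input order in the result array.
        var tasks = chunks.Select(chunk => GenerateEmbeddingAsync(chunk));
        return await Task.WhenAll(tasks);
    }

    private Task<float[]> GenerateEmbeddingAsync(string chunk)
    {
        // Call your embedding server or SDK here and return the vector.
        throw new NotImplementedException();
    }
}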