AI Providers

Supported AI providers

Overview

Wisej.AI is compatible with any LLM provider, whether it runs on a public cloud, in a private deployment, or on a local server. Most providers offer a REST API that is compatible with the OpenAI API; in such cases, if you need a new SmartEndpoint, you can either use the OpenAIEndpoint and specify a different URL or create a derived class.
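
For example, assuming a hypothetical provider that exposes an OpenAI-compatible API at https://api.example.com/v1, the two approaches might look like the following sketch (MyProviderEndpoint is an illustrative name, not a class shipped with Wisej.AI):

// Option 1: reuse OpenAIEndpoint and point it at the provider's URL.
var endpoint = new OpenAIEndpoint { URL = "https://api.example.com/v1" };

// Option 2: derive a class that encapsulates the provider's defaults.
public class MyProviderEndpoint : OpenAIEndpoint
{
    public MyProviderEndpoint()
    {
        this.URL = "https://api.example.com/v1";
    }
}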

Typically, private models are hosted exclusively by their owners. In contrast, open-source models can be hosted by various providers and can also be deployed on proprietary hardware.

The currently available implementations include:

| Name | Features |
| --- | --- |
| OpenAI | 1, 3, 4, 7 |
| OpenAITTS | 1, 5 |
| OpenAIDallE | 1, 7 |
| OpenAIWhisper | 1, 6 |
| OpenAIRealtime | 1, 5, 6 |
| Azure AI | 1, 2, 3, 4, 5, 6, 7 |
| Ollama | 2, 3, 4 |
| GoogleAi | 1, 2, 3, 4 |
| HuggingFace | 2, 3, 4 |
| SambaNova | 2, 4 |
| Together.AI | 2, 3, 4 |
| X.AI | 1, 4 |
| GroqCloudWhisper | 1, 6 |
| Amazon Bedrock | 2, 3, 4 |
| LocalAI | 2, 3, 4, 7 |
| LocalAITTS | 2, 5 |
| LocalAIWhisper | 2, 6 |
| LocalAIImageGen | 2, 7 |

Notes:

1. Proprietary models
2. Open-source models
3. Embeddings
4. Vision
5. Text-to-speech
6. Speech-to-text
7. Image generation

Local Hosting

By "Local Hosting," we refer to using a server to provide AI features outside the typical cloud services. This server could be located on-premises, housed in a data center, or hosted as a virtual machine instance with any cloud provider. This setup offers flexibility in deploying AI solutions by allowing organizations to have more control over their data and resources while still benefiting from sophisticated AI capabilities.

Wisej.AI integrates seamlessly with any non-cloud server. One of the most common servers used for this purpose is Ollama, which hosts open-source AI models efficiently. This means you can use Wisej.AI's full functionality whether your infrastructure is cloud-based or locally hosted.

To use an Ollama server, instantiate the OllamaEndpoint and provide the URL of your server:

var ollama = new OllamaEndpoint { URL = "http://localhost:8080" };
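
Note that Ollama listens on port 11434 by default; adjust the URL to match your server's configuration.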

To use other local servers, such as vLLM, LocalAI, LM Studio, and others, you can most likely use or extend the OpenAIEndpoint, since most of these servers expose an OpenAI-compatible API.
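
As a sketch, and assuming each server's default port, pointing the OpenAIEndpoint at a local LM Studio or vLLM instance might look like this (whether the /v1 path segment belongs in the URL depends on how the endpoint composes request paths, so verify against your setup):

// LM Studio serves its OpenAI-compatible API on port 1234 by default.
var lmStudio = new OpenAIEndpoint { URL = "http://localhost:1234/v1" };

// vLLM's OpenAI-compatible server listens on port 8000 by default.
var vllm = new OpenAIEndpoint { URL = "http://localhost:8000/v1" };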

Another excellent local server option is LocalAI. It offers an OpenAI-compatible API and supports a comprehensive range of features, including text completion, embeddings, image generation, text-to-speech, speech-to-text, and re-ranking.
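
A minimal sketch for LocalAI follows; the endpoint class names are inferred from the implementation names in the table above, by analogy with OllamaEndpoint, so verify them against your version of Wisej.AI (LocalAI listens on port 8080 by default):

// Text completion, embeddings, vision, and image generation.
var localAI = new LocalAIEndpoint { URL = "http://localhost:8080" };

// Text-to-speech and speech-to-text.
var tts = new LocalAITTSEndpoint { URL = "http://localhost:8080" };
var whisper = new LocalAIWhisperEndpoint { URL = "http://localhost:8080" };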
