AI Providers
Supported AI providers
Overview
Wisej.AI is compatible with any LLM provider, whether it's on a public cloud, a private deployment, or a local server. Most providers offer a REST API that is compatible with the OpenAI API. In such cases, if you need to add a new SmartEndpoint, you can either use the SmartOpenAIEndpoint and specify a different URL or create a derived class.
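For illustration, here is a minimal sketch of both approaches. The `Wisej.AI.Endpoints` namespace and the `URL` property name are assumptions, so check the SmartEndpoint API reference for the exact members:

```csharp
using Wisej.AI.Endpoints; // assumed namespace

// Option 1: point the stock OpenAI-compatible endpoint at another server.
var endpoint = new SmartOpenAIEndpoint
{
    URL = "https://llm.example.com/v1" // assumed property name; any OpenAI-compatible API
};

// Option 2: derive a class that bakes in the provider-specific defaults.
public class MyProviderEndpoint : SmartOpenAIEndpoint
{
    public MyProviderEndpoint()
    {
        this.URL = "https://llm.example.com/v1";
    }
}
```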
Typically, proprietary models are hosted exclusively by their owners. In contrast, open-source models can be hosted by various providers and can also be deployed on your own hardware.
The currently available implementations include:
[Table: the available endpoint implementations and the capabilities each supports; the numbers refer to the notes below.]
Notes:
1. Proprietary models
2. Open source models
3. Embeddings
4. Vision
5. Text to Speech
6. Speech to Text
7. Imaging
Local Hosting
By "Local Hosting," we refer to using a server to provide AI features outside the typical cloud services. This server could be located on-premises, housed in a data center, or hosted as a virtual machine instance with any cloud provider. This setup offers flexibility in deploying AI solutions by allowing organizations to have more control over their data and resources while still benefiting from sophisticated AI capabilities.
To use an Ollama server, instantiate the OllamaEndpoint and provide the URL of your server:
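A minimal sketch, assuming the endpoint exposes `URL` and `Model` properties (the exact member names may differ between Wisej.AI versions):

```csharp
using Wisej.AI.Endpoints; // assumed namespace

// 11434 is Ollama's default port; use any model already pulled on the server.
var ollama = new OllamaEndpoint
{
    URL = "http://localhost:11434", // assumed property name
    Model = "llama3.1"              // hypothetical model choice
};
```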
To use other local servers, such as vLLM, LocalAI, LM Studio, and others, you can most likely use or extend the OpenAIEndpoint. This makes it possible to integrate a wide variety of servers with little extra work.
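For example, a sketch under the same assumptions as above, pointing the endpoint at an LM Studio server (which, like vLLM and LocalAI, exposes an OpenAI-compatible REST API):

```csharp
using Wisej.AI.Endpoints; // assumed namespace

// LM Studio listens on port 1234 by default; vLLM typically uses 8000.
// Any server that speaks the OpenAI REST protocol can be targeted this way.
var local = new OpenAIEndpoint
{
    URL = "http://localhost:1234/v1" // assumed property name; adjust to your server
};
```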