AIProxy enables integration of AI into iOS apps, protecting API keys and ensuring secure usage with monitoring features.
Integrate AIProxy into your iOS app in less than 10 minutes.
Protect sensitive user data with bank-level data security measures.
Monitor your iOS app’s usage with real-time observability.
Receive alerts via email or Slack when there is an abnormal spike in usage or errors.
Get detailed analytics on user requests, including request volume and real-time error tracking.
Automatically identify and block abusive behavior with rate limiting and abuse detection features.
Handle hundreds of users effortlessly, thanks to AIProxy’s scaling capabilities.
AIProxy integrates with services such as Azure, DeepL, EachAI, and more, letting you use their APIs through a single platform.
It manages your API keys securely: each key is encrypted and its parts are stored in separate locations.
Provides access to service-specific endpoints: OpenAI for Azure, Translation for DeepL, EachAI Workflows for EachAI, and more.
Offers examples for each supported service to guide users on how to implement and use the APIs effectively.
Ensures that the app connects to the trusted server by validating the server's certificate.
Verifies that each request comes from a legitimate device running your app.
Encrypts the API key and splits it into two parts: one stored on AIProxy's server and one in your app.
Sets limits on API usage per user to avoid overuse or abuse.
Sets limits on API usage per IP address to prevent abuse from a single IP source.
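The split key and per-user limits above are enforced on AIProxy's backend; in the client they surface when you construct a service. A minimal sketch, assuming the AIProxySwift factory initializer shown below and using placeholder dashboard values:

```swift
import AIProxy

// Both values come from the AIProxy developer dashboard. The partialKey is
// only half of the encrypted secret; the other half never ships in the app
// binary. The serviceURL here is a placeholder, not a real endpoint.
let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-aiproxy-dashboard",
    serviceURL: "https://api.example.aiproxy/your-service-id"
)
```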
AIProxySwift, a Swift client library for integrating AIProxy, is available on GitHub for developers to incorporate into their projects.
Bootstrap apps that use AIProxy are available on GitHub, ready to clone and test so you can get started quickly.
A collection of helpful code snippets provided to assist developers in beginning their projects with AIProxy. Viewable in the documentation section.
A video tutorial that shows how to integrate AIProxy into applications quickly and easily. Available on YouTube.
A video guide to help users bootstrap sample apps using AIProxy, available on YouTube for step-by-step assistance.
MistralService supports chat completions: send a message, receive a response, and inspect token usage for tracking API consumption.
MistralService also supports streaming chat completions, delivering output in real time as it is generated, with the same token usage details.
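A rough sketch of both request styles. The method and member names are written from memory of AIProxySwift and should be treated as assumptions to check against the library's README; the dashboard values are placeholders.

```swift
import AIProxy

// Call from an async context (e.g. inside a Task or an async function).
let mistralService = AIProxy.mistralService(
    partialKey: "partial-key-from-your-aiproxy-dashboard",
    serviceURL: "https://api.example.aiproxy/your-service-id"
)

// Buffered chat completion with token usage details.
do {
    let response = try await mistralService.chatCompletionRequest(body: .init(
        messages: [.user(content: "Hello Mistral!")],
        model: "mistral-small-latest"
    ))
    print(response.choices.first?.message.content ?? "")
    print("Token usage: \(String(describing: response.usage))")
} catch {
    print("Could not get Mistral chat completion: \(error)")
}

// Streaming chat completion: print output as it is generated.
do {
    let stream = try await mistralService.streamingChatCompletionRequest(body: .init(
        messages: [.user(content: "Tell me a short story")],
        model: "mistral-small-latest"
    ))
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "", terminator: "")
    }
} catch {
    print("Could not stream Mistral chat completion: \(error)")
}
```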
Adds support for the Flux-Dev ControlNet model for image-to-image generation. Inference takes around 3 minutes when the model is cold and 30-40 seconds when it is warm.
Various controls for tweaking the generated image are exposed in 'ReplicateFluxDevControlNetInputSchema.swift'.
You can experiment with these parameters in AIProxySwift to tune the image processing.
Includes an example conversion that uses a template prompt.
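A sketch of the image-to-image call under stated assumptions: the ReplicateService factory exists in AIProxySwift, but the input fields and the method name wrapping this model are hypothetical here; check 'ReplicateFluxDevControlNetInputSchema.swift' and ReplicateService for the real spellings.

```swift
import AIProxy

// Call from an async context. Dashboard values are placeholders.
let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-aiproxy-dashboard",
    serviceURL: "https://api.example.aiproxy/your-service-id"
)

// Field names are illustrative; the full set of controls lives in
// ReplicateFluxDevControlNetInputSchema.swift.
let input = ReplicateFluxDevControlNetInputSchema(
    controlImage: URL(string: "https://example.com/control-image.jpg")!,
    prompt: "a watercolor rendering of the control image"
)

do {
    // The method name is an assumption for illustration. Expect roughly
    // 3 minutes on a cold model and 30-40 seconds when warm.
    let outputURL = try await replicateService.createFluxDevControlNetImage(input: input)
    print("Generated image: \(outputURL)")
} catch {
    print("Could not run Flux-Dev ControlNet: \(error)")
}
```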
Allows training a headshot model on images from your iOS or macOS app and generating images from the trained result. Training takes around 2 minutes and costs about $2; inference takes around 10 seconds.
Enables fast transcriptions using Groq's Whisper support, now integrated into AIProxySwift. A drop-in snippet lets you make transcription requests safely from your app.
Support for Groq chat completions is now available in AIProxySwift. Groq completions are fast and free, but subject to rate limits on Groq’s end.
Both chat and streaming chat completions are supported; their structure is close to OpenAI's, so handling responses is straightforward.
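A combined sketch of the Groq transcription and chat calls, assuming the GroqService method names and body shapes below (written from memory of AIProxySwift; verify against the README). 'audioFileURL' is a hypothetical local file URL supplied by your app.

```swift
import AIProxy

// Call from an async context. Dashboard values are placeholders.
let groqService = AIProxy.groqService(
    partialKey: "partial-key-from-your-aiproxy-dashboard",
    serviceURL: "https://api.example.aiproxy/your-service-id"
)

// Fast Whisper transcription. audioFileURL is a hypothetical URL pointing
// at an audio file your app recorded or bundled.
do {
    let audioData = try Data(contentsOf: audioFileURL)
    let transcript = try await groqService.createTranscriptionRequest(body: .init(
        file: audioData,
        model: "whisper-large-v3"
    ))
    print("Transcript: \(transcript)")
} catch {
    print("Could not transcribe with Groq: \(error)")
}

// Chat completion; the body mirrors OpenAI's structure.
do {
    let response = try await groqService.chatCompletionRequest(body: .init(
        messages: [.user(content: "Explain rate limiting in one sentence")],
        model: "llama-3.1-8b-instant"
    ))
    print(response.choices.first?.message.content ?? "")
} catch {
    print("Could not get Groq chat completion: \(error)")
}
```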
Allows generating headshots with the Flux PuLID model without a separate training step. The image-to-image generator works from a single image sent as part of the inference request.
Requires AIProxySwift version 0.23.0 or later to use the Flux PuLID integration.
You can copy a sample snippet from the PR that added support to configure your AIProxy service to proxy requests to the Replicate API.
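The same single-request pattern can be sketched for PuLID; every identifier below (the input schema and the method) is a hypothetical stand-in for the real snippet in the PR referenced above.

```swift
import AIProxy

// Call from an async context. All identifiers are hypothetical placeholders
// for the real names in the PR snippet.
let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-aiproxy-dashboard",
    serviceURL: "https://api.example.aiproxy/your-service-id"
)

do {
    // One reference image travels with the inference request; no training step.
    let input = ReplicateFluxPulidInputSchema(
        mainFaceImage: URL(string: "https://example.com/my-face.jpg")!,
        prompt: "a professional headshot, studio lighting"
    )
    let outputURL = try await replicateService.createFluxPulidImage(input: input)
    print("Generated headshot: \(outputURL)")
} catch {
    print("Could not generate a headshot with Flux PuLID: \(error)")
}
```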
AIProxy is compatible with Azure OpenAI deployments, enabling you to use Azure credits for OpenAI functions in your app.
Requires AIProxySwift version 0.21.0 or later to use Azure's chat completions.
Use the new 'requestFormat' argument when initializing the 'openAIService' to specify your Azure deployment.
Configure the service in the AIProxy dashboard under Services > Add New Service, and set your Azure OpenAI deployment's URL as the proxy domain.
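A sketch of the client-side Azure setup. The 'requestFormat' argument comes from the release note above; the enum case spelling, API version, and dashboard values are assumptions to check against the AIProxySwift README.

```swift
import AIProxy

// Point the openAIService at an Azure OpenAI deployment. The service was
// configured in the dashboard with the deployment URL as the proxy domain.
// Call from an async context.
let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-aiproxy-dashboard",
    serviceURL: "https://api.example.aiproxy/your-service-id",
    requestFormat: .azureDeployment(apiVersion: "2024-06-01")
)

// Requests are then made exactly as with the standard OpenAI integration.
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [.user(content: .text("Hello from Azure"))]
    ))
    print(response.choices.first?.message.content ?? "")
} catch {
    print("Could not get Azure OpenAI chat completion: \(error)")
}
```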
All new AIProxy services are required to use Apple's DeviceCheck for security. This checks if the requests come from Apple hardware running a signed version of the application.
DeviceCheck can no longer be disabled in AIProxy services, preventing users from forgetting to enable it before app deployment.
AIProxy now supports Replicate, Fal, ElevenLabs, and TogetherAI. It also provides tools for managing data in the fine-tune UI and for generating images with new models on Replicate.
Added a parameter to OpenAI's Chat Completions requests that controls how the model processes images.
New features shipped to the dashboard for monitoring requests and identifying issues like rate limiting.
New sidebar with pages for Live Charts, Request History, and Top Client Usage.
Any requests denied due to rate limits now show up with status codes to aid debugging. Improved error messages in Live Console and Request History.