We'll use different terms for the components of our product:
The core of the architecture combines a set of open-source components with our own software.
The open source components here are:
In the Enterprise Edition, you'll also need to run a set of workers on dedicated servers: this is where the Machine Learning processes will run.
Each worker in the diagram represents a dedicated server, running our in-house job scheduling agents and dedicated Machine Learning tasks.
We only cover the most common cases here. If you have questions about your own architecture, please contact us.
This is the simplest option, a standalone server that hosts all the services using Docker containers.
A single `docker-compose.yml` file is enough to deploy the whole stack.
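As a minimal illustration of what such a single-server stack could look like (the service names, image names and settings below are hypothetical placeholders, not the actual Arkindex distribution):

```yaml
# Hypothetical sketch of a single-server Compose stack.
# Actual Arkindex service names, images and settings will differ.
services:
  backend:
    image: example/arkindex-backend:latest  # placeholder image name
    ports:
      - "80:80"
    depends_on:
      - db
      - cache
  db:
    image: postgres:15
    volumes:
      - db_data:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  db_data:
```

With such a file in place, `docker compose up -d` starts every service at once on the same host.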
With a larger budget, you can deploy Arkindex across several servers, still using Docker Compose files, this time deployed on a Docker Swarm cluster with placement constraints.
A Docker Swarm cluster enables you to run Docker services instead of containers, with multiple containers per service so you can benefit from higher throughput and eliminate single points of failure.
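To illustrate, a Compose v3 fragment like the following (again with hypothetical service and label names) adds replication and placement constraints; the `deploy` section only takes effect when the file is deployed on a Swarm cluster with `docker stack deploy`:

```yaml
# Hypothetical Swarm fragment; real Arkindex service names will differ.
services:
  backend:
    image: example/arkindex-backend:latest  # placeholder image name
    deploy:
      replicas: 3                  # several containers behind one service
      placement:
        constraints:
          - node.labels.role == app   # only schedule on labeled nodes
```

After labeling the target nodes (`docker node update --label-add role=app <node>`), deploying with `docker stack deploy -c docker-compose.yml arkindex` spreads the replicas across them, removing the single point of failure.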
You can also deploy Arkindex with a cloud provider (such as Amazon AWS, Google GCP or Microsoft Azure), using their managed services to replace self-hosted databases and shared S3-compatible storage.
Most cloud providers offer managed versions of the services required by Arkindex (load balancer, PostgreSQL, S3-compatible storage, search engine and Redis cache). You then only need to run the Arkindex containers: