Elastic Compute Cloud-based Amazon Machine Image for XWiki

Last modified by Vincent Massol on 2024/02/26 17:54







The aim of the project is to increase the number of active XWiki installs by giving end users the flexibility to use cloud computing services provided by AWS. There are currently two types of XWiki installations: Demo and Production.

Also, there are two major versions available currently: 12.10.7, the stable release, and 13.4, the latest active release. Cloud computing has gained significant recognition in the past few years due to the availability of a virtually unlimited number of servers and unlimited power/memory within seconds, while maintaining a pay-per-use billing model. As companies and users move to cloud-based solutions for their server needs, it makes sense that XWiki too should integrate seamlessly with major cloud provider solutions. Currently, Amazon's AWS has the greatest share of the cloud market, greater than the combined share of Google (GCP) and Microsoft (Azure). Therefore, we need to provide AMIs and CloudFormation templates (in the context of AWS) that end users can consume directly to spin up XWiki within minutes and start testing/working. AWS provides flexible compute, storage, and database services, making it an ideal platform to run XWiki. AWS offers a complete set of services and tools for deploying XWiki on its highly reliable and secure cloud infrastructure. Coupled with AWS as the underlying infrastructure, XWiki will offer a very agile, scalable, and high-performance platform for a pay-as-you-go second-generation wiki.

AWS Services

The core AWS services used for this project are the following.

  • Amazon EC2 – The Amazon Elastic Compute Cloud (Amazon EC2) service enables you to launch virtual machine instances with a variety of operating systems. You can choose from existing Amazon Machine Images (AMIs) or import your own virtual machine images.
  • Amazon EFS - Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. With Amazon EFS, you can create a file system, mount the file system on your EC2 instances, and then read and write data from your EC2 instances to and from your file system.
  • Amazon RDS – Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database such as Amazon Aurora or Amazon RDS MySQL in the cloud. With Amazon RDS, you can deploy scalable Amazon Aurora or Amazon RDS MySQL software in minutes with cost-efficient and resizable hardware capacity.
  • Amazon VPC – The Amazon Virtual Private Cloud (Amazon VPC) service lets you provision a private, isolated section of the AWS Cloud where you can launch AWS services and other resources in a virtual network that you define. You have complete control over your virtual networking environment, including a selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
  • AWS CloudFormation – AWS CloudFormation gives you an easy way to create and manage a collection of related AWS resources, and provision and update them in an orderly and predictable way. You use a template to describe all the AWS resources (e.g., Amazon EC2 instances) that you want. You don’t have to create and configure the resources or figure out dependencies; AWS CloudFormation handles all of that.
  • Elastic Load Balancing – Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple Amazon EC2 instances. It detects unhealthy instances and reroutes traffic to healthy instances until the unhealthy instances have been restored. ELB automatically scales its request handling capacity in response to incoming traffic.
  • AWS IAM – AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. With IAM, you can manage users, security credentials such as access keys, and permissions that control which AWS resources users can access, from a central location.
  • Amazon Route 53 – Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names like www.domain.com into the numeric IP addresses that computers use to connect to one another. Amazon Route 53 is fully compliant with IPv6.
  • AWS CDK – The AWS Cloud Development Kit (CDK) is used to provision resources inside an AWS account without the hassle of creating them manually, and it helps lock down the configurations required for those resources so as to maintain consistency across various stages and installs. With CDK, we can write infrastructure as code in languages like TypeScript, Python, Java, and .NET. When built, CDK code produces CloudFormation templates, which are then used for provisioning the various resources in an AWS account.
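As an illustration, a minimal CDK app in TypeScript might look like the sketch below. This is a hedged example only; the stack and construct names (`XWikiDemoStack`, `XWikiVpc`) are hypothetical, not the project's actual code.

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

class XWikiDemoStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // A VPC spanning two Availability Zones to hold the XWiki resources.
    new ec2.Vpc(this, 'XWikiVpc', { maxAzs: 2 });
  }
}

const app = new App();
new XWikiDemoStack(app, 'XWikiDemoStack');
app.synth(); // emits the CloudFormation template
```

Running `cdk synth` against such an app prints the generated CloudFormation template, and `cdk deploy` provisions it in the account.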


We are aiming to provide four options for users to easily deploy XWiki in their AWS account. For the XWiki Demo (i.e. the standalone distribution with a prepackaged HSQLDB and a lightweight Java container), users will have two options: an Amazon Machine Image (AMI) published on the AWS Marketplace, and CDK code for a quick setup by running just a few commands. Similarly, for deploying the Production installation in their AWS account, users will have the same two options: an AMI and CDK code.

An AMI, once generated and published to the AWS Marketplace, corresponds to one version only, and later versions require updating the Marketplace listing with new AMIs. To reduce the maintenance overhead, the CDK code will be the preferred option for users, since they won't have to go to the AWS Marketplace to spin up EC2 instances, and the Packer scripts can be updated for newer versions. CDK will provide a quick solution for deploying the services we need for running XWiki in Production-based installations and Testing/Demo installations by running just a few commands, helping end users deploy even faster without visiting the AWS Marketplace. Also, an AMI can't be updated when a new version of XWiki is released; we would have to build a new AMI for every release. With CDK code, changing a few lines is enough to pick up a new release. Below, I'll explain the architecture, components, and working of the Infrastructure as Code (IaC), i.e. the CDK code, for the Demo and Production installations.
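With the CDK approach, the user-facing workflow reduces to a few standard CDK CLI commands (the stack name below is illustrative):

```shell
npm install                # install aws-cdk-lib and the app's other dependencies
cdk bootstrap              # one-time setup of CDK's deployment resources in the account
cdk synth                  # emit the CloudFormation template for review
cdk deploy XWikiDemoStack  # provision the stack in the AWS account
```

Picking up a new XWiki release then amounts to updating the image or AMI reference in the code and running `cdk deploy` again.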


1. Demo Installation

The demo (standalone) distribution provides a built-in XWiki with a portable database (HSQLDB) and a lightweight Java container (Jetty). This standalone distribution is not recommended in a production environment. This installation can be done on a single EC2 machine and can be used under the free tier for testing purposes. Users won't have to pay for resources such as RDS, Amazon EFS, a load balancer, etc., as they would for a Production installation.

Since demo/testing installations can run on a single EC2 instance, we only need to configure an EC2 instance inside a Virtual Private Cloud and install XWiki on it.


The EC2 instance provisioned by the CDK code can be accessed only by using the SSH key specified during the deployment process. AWS doesn't store these SSH keys, so if users lose their SSH key, they lose access to the instance. The other components used to manage the security of the CDK app are Identity and Access Management (IAM) and security groups. This solution uses an IAM role with least-privileged access; for this we will create a new IAM role trusting the Amazon EC2 service. A security group acts as a firewall that controls the traffic for one or more instances. AWS security groups only allow permissive rules: if no rules are set for an instance, all inbound and outbound traffic is blocked. Here we will open only the required ports, since large open port ranges could expose vulnerabilities. The security group's inbound rules will allow HTTP, HTTPS, SSH, and port 8080 on IPv4 and IPv6; for outbound rules, all traffic will be allowed.
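A sketch of how the demo instance and its security group could be expressed in CDK TypeScript, inside the stack's constructor (the construct IDs, instance size, and key name are illustrative assumptions, not the project's actual code):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const vpc: ec2.Vpc; // the VPC created elsewhere in the stack

const sg = new ec2.SecurityGroup(this, 'XWikiDemoSg', {
  vpc,
  allowAllOutbound: true, // all outbound traffic allowed
});
// Open only the required inbound ports, on both IPv4 and IPv6.
for (const port of [22, 80, 443, 8080]) {
  sg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(port));
  sg.addIngressRule(ec2.Peer.anyIpv6(), ec2.Port.tcp(port));
}

new ec2.Instance(this, 'XWikiDemo', {
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  securityGroup: sg,
  keyName: 'user-supplied-key', // AWS does not store the private key
});
```

A t2.micro instance keeps the demo within the free tier; the user's own SSH key pair name is passed in at deployment time.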

Design for XWiki Demo

2. Production Installation

The goal is to provide a production-ready XWiki in the user's AWS account by running just a few commands. The CDK code we provide can bootstrap new AWS infrastructure: a VPC, two public subnets, two private subnets, a NAT gateway, AWS EFS, AWS Aurora DB, and the other infrastructure needed to deploy a production-ready XWiki instance. The Production installation runs the XWiki server on an ECS cluster fronted by a load balancer, with the database server running on RDS instances. This separates the database from the XWiki server, so the database server does not compete with the XWiki server for resources and database failover can be handled gracefully. It also lets us gracefully handle XWiki server failover without affecting the database server, and scale seamlessly as traffic increases. The ECS Fargate instances scale in response to higher load on the server, and RDS scales in response to the large amount of data processing required for storing all XWiki pages.

We can divide the Infrastructure as Code into these three layers:

  • Ingress/Connection Layer: AWS DNS with Amazon Route 53 and load balancing with AWS Elastic Load Balancing
  • Compute Layer: containers running on AWS ECS Fargate, a container run engine that manages all needed infrastructure
  • Storage/Database Layer: a serverless MySQL DB powered by Amazon Aurora Serverless for MySQL and a managed NFS file system powered by Amazon Elastic File System

Components of the IaC:

We will be using CDK code to deploy XWiki. The following list describes how AWS services and components are used.

  • Amazon VPC, which creates a logically isolated networking environment that we can connect to our on-premises data centers or use as a standalone environment. The VPC is configured across two Availability Zones. For each Availability Zone, the CDK code will provision one public subnet and one private subnet. To ensure high availability, this architecture deploys the XWiki servers across two Availability Zones within a region. The Multi-AZ feature is enabled for the Amazon Aurora database, which creates a primary database instance and a replica database instance in different Availability Zones for high availability. The XWiki instances and Amazon RDS database instances are in private subnets, exposing only the Application Load Balancer (ALB) listener and the NAT gateways to the internet.
  • NAT gateways deployed into the public subnets and configured with an IP address for outbound internet connectivity. They provide internet access for resources such as Amazon EFS and AWS Aurora DB launched within the private subnets of the VPC.
  • An IAM role with fine-grained permissions for access to AWS services necessary for the deployment process.
  • Appropriate security groups for each instance or function, restricting access to only the necessary protocols and ports. The security groups also restrict access to the Amazon Aurora DB instances to the web server instances.
  • Amazon Aurora for the shared database. Amazon RDS is a managed database service, so AWS handles all the administrative tasks of managing the database. By default, the database is deployed across multiple Availability Zones for high availability and automatically backed up on a schedule. A few features of Aurora DB:
    • Both MySQL and PostgreSQL are supported as Aurora engines.
    • Aurora gives up to 5x the performance of MySQL on RDS and 3x the performance of PostgreSQL on RDS.
  • Amazon Elastic File System (Amazon EFS) as the shared file system. For a serverless infrastructure hosting XWiki, one of the most important parts is having a place to store files. For that, we will use Amazon EFS. Amazon EFS manages all the infrastructure and automatically scales the file system storage capacity up or down as users add or remove files. With Amazon EFS, users pay only for the space their files and directories consume; there is no minimum fee and no setup cost. The EFS file system will be located in the private subnets of our VPC.
  • Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It is designed for developers and businesses to route end users to internet applications by translating human-readable names like www.xyz.com into the numeric IP addresses that computers use to connect to each other. Highlights of Route 53:
    • Scalable − Route 53 is designed in such a way that it automatically handles large volume queries without the user’s interaction.
    • Can be used with other AWS Services − Route 53 also works with other AWS services. It can be used to map domain names to our Amazon EC2 instances, Amazon S3 buckets, and other AWS resources.
    • Cost-Effective − Pay only for the domain service and the number of queries that the service answers for each domain.
  • Application load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, ECS clusters, etc. in multiple Availability Zones. This increases the availability of our application.

  • AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. To use EFS with ECS or Fargate, customers can add one or more volume definitions to a task definition. There are many task-definition parameters that must be specified, but with reference to the architecture of this project, the main ones are:

    • vCPUs: Amazon ECS task definitions for Fargate require us to specify CPU and memory at the task level. According to the XWiki performance guide, 2 vCPUs will be enough for XWiki to function properly.
    • RAM: with 2 vCPUs, even the minimum Fargate memory configuration will be more than enough, going by the memory needs given in XWiki's performance guide.
    • Available volume: for Amazon ECS tasks hosted on Fargate, Amazon EFS volumes are one of the supported storage types, so we will mount the volume from AWS EFS (described above) into the XWiki container.
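Under these assumptions, the task definition might be sketched in CDK TypeScript as follows (inside the stack's constructor; the construct IDs and volume name are illustrative, and `fileSystem` is assumed to be the EFS file system created elsewhere):

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as efs from 'aws-cdk-lib/aws-efs';

declare const fileSystem: efs.FileSystem;

const taskDef = new ecs.FargateTaskDefinition(this, 'XWikiTask', {
  cpu: 2048,            // 2 vCPUs, per the XWiki performance guide
  memoryLimitMiB: 4096, // the smallest memory setting Fargate allows for 2 vCPUs
});
// Attach the shared EFS file system as a task volume.
taskDef.addVolume({
  name: 'xwiki-data',
  efsVolumeConfiguration: { fileSystemId: fileSystem.fileSystemId },
});
```

On Fargate, 2 vCPUs (2048 CPU units) can be paired with 4 to 16 GB of memory, so 4096 MiB is the floor for this CPU setting.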

Working and Architecture of the IaC


As shown in the above diagram, the architecture is composed of three layers. The first is the ingress (or connection) layer, consisting of the components responsible for connecting the XWiki instance to the internet: AWS Route 53 and the Application Load Balancer. The second is the compute layer, consisting of an ECS cluster with a Fargate instance running XWiki in a container. The third layer, which resides inside the private subnets of the VPC, is the database/storage layer, consisting of Amazon EFS for static files and Amazon Aurora on RDS. Details of the configuration of the different layers are as follows:

Ingress Layer:

The ingress or connection layer consists of managed AWS DNS with Amazon Route 53 and an AWS Application Load Balancer. With reference to the traffic flow, the main configurations of this layer's components are the following:

  • The Application Load Balancer will be configured to be internet-facing, i.e. externally available to the world. The user-chosen domain name will be mapped to it using Route 53.
  • The ECS cluster with the Fargate instance will be added as the target of the load balancer, making the XWiki instance available to the load balancer and hence to incoming traffic.
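A sketch of this layer in CDK TypeScript (the construct IDs are illustrative, and `vpc`, `hostedZone`, and `service` are assumed to be created elsewhere in the stack):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

declare const vpc: ec2.Vpc;
declare const hostedZone: route53.IHostedZone; // the user's Route 53 zone
declare const service: ecs.FargateService;     // the XWiki service (compute layer)

// An internet-facing Application Load Balancer in the public subnets.
const alb = new elbv2.ApplicationLoadBalancer(this, 'XWikiAlb', {
  vpc,
  internetFacing: true,
});
// Forward incoming HTTP traffic to the XWiki container's port 8080.
const listener = alb.addListener('Http', { port: 80 });
listener.addTargets('XWiki', { port: 8080, targets: [service] });
// Map the user-chosen domain name to the load balancer.
new route53.ARecord(this, 'XWikiAlias', {
  zone: hostedZone,
  target: route53.RecordTarget.fromAlias(new targets.LoadBalancerTarget(alb)),
});
```

Registering the Fargate service as a listener target is what exposes the XWiki instance to incoming traffic through the load balancer.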

Compute Layer:

The compute layer consists of an ECS cluster with a Fargate instance, a managed container run engine that manages all needed infrastructure. We will turn on Container Insights for the cluster. A few of the main configurations for ECS Fargate will be:

  • Mounting the AWS EFS file system into the container with the specified path and read-write permissions, and wiring in the AWS RDS (Aurora) database, to host all the files and data created by XWiki
  • Using the environment variables for connecting to the database, as given in the XWiki container documentation, and adding the XWiki Docker image from Docker Hub
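A sketch of the container definition in CDK TypeScript. The `DB_*` variables are the ones documented for the XWiki Docker image; the mount path, database name, and the Secrets Manager secret are illustrative assumptions:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

declare const taskDef: ecs.FargateTaskDefinition;
declare const db: rds.ServerlessCluster;
declare const dbSecret: secretsmanager.ISecret;

const container = taskDef.addContainer('xwiki', {
  image: ecs.ContainerImage.fromRegistry('xwiki'), // official image on Docker Hub
  environment: {
    DB_HOST: db.clusterEndpoint.hostname,
    DB_DATABASE: 'xwiki',
    DB_USER: 'xwiki',
  },
  secrets: {
    // The DB password is injected from Secrets Manager, not stored in the image.
    DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret),
  },
});
container.addPortMappings({ containerPort: 8080 });
// Mount the EFS-backed task volume (named 'xwiki-data' here) read-write
// at XWiki's permanent directory.
container.addMountPoints({
  containerPath: '/usr/local/xwiki',
  sourceVolume: 'xwiki-data',
  readOnly: false,
});
```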

Storage Layer:

The storage or database layer is hosted inside the private subnets of the VPC. This layer consists of a managed serverless MySQL DB powered by AWS Aurora and a file system powered by AWS EFS.

  • Storage will be encrypted using created encryption keys
  • Automatic backups will be enabled for the file system, and all outbound traffic to EFS will be allowed.
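The storage layer could be sketched in CDK TypeScript as follows (construct IDs and the database name are illustrative; `vpc` is assumed to be created elsewhere in the stack):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as efs from 'aws-cdk-lib/aws-efs';
import * as rds from 'aws-cdk-lib/aws-rds';

declare const vpc: ec2.Vpc;

// Encrypted NFS file system with automatic backups, in the private subnets.
const fileSystem = new efs.FileSystem(this, 'XWikiFs', {
  vpc,
  encrypted: true,
  enableAutomaticBackups: true,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
});

// Serverless Aurora MySQL cluster in the private subnets.
const db = new rds.ServerlessCluster(this, 'XWikiDb', {
  vpc,
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  defaultDatabaseName: 'xwiki',
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
});
```

Both resources live in the private subnets, so only the compute layer (and the NAT gateways for outbound traffic) can reach them.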


The AWS Cloud provides a scalable, highly reliable platform that helps customers deploy applications and data quickly and securely. When we build systems on AWS infrastructure, security responsibilities are shared between us and AWS. This shared model reduces our operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. In turn, we assume responsibility for managing the guest operating system (including updates and security patches) and other associated applications, as well as the configuration of the AWS-provided security group firewall.

AWS IAM and Security Groups

Here we will use an IAM role with least-privileged access. We will not store SSH keys, secret keys, or access keys on the provisioned instances. A security group acts as a firewall that controls the traffic for one or more instances. When we launch an instance, we associate one or more security groups with it, and we add rules to each security group that allow traffic to or from its associated instances. Here we will configure the security group with inbound access on TCP 22 (SSH from the internet), TCP 80 (HTTP from the internet), and TCP 443 (HTTPS from the internet), and we will also allow access on port 8080.
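As a sketch, the least-privilege role could be created in CDK TypeScript like this (the role's construct ID and the granted permission are illustrative; `logGroup` stands in for whatever resource the instance actually needs):

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';
import * as logs from 'aws-cdk-lib/aws-logs';

declare const logGroup: logs.LogGroup; // a log group the instance writes to

// A role trusting the EC2 service, with no permissions by default.
const role = new iam.Role(this, 'XWikiInstanceRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
});
// Grant only the specific access the instance needs, instead of
// attaching broad managed policies.
logGroup.grantWrite(role);
```

Starting from an empty role and granting narrow permissions resource by resource is what keeps the role least-privileged.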


