DuploCloud enables platform-as-a-service on AWS for building and deploying applications via a simple declarative interface, without having to deal with low-level infrastructure details like VPCs, security groups, IAM, Jenkins setup, etc. These constructs are still in play, but the Duplo software abstracts them away by auto-generating the configuration based on application needs.

There are two variants of the product - DuploEnterprise and DuploLive. DuploEnterprise is installed within your AWS account, while DuploLive is a hosted service running in our AWS accounts.

Pre-requisites

  • Google Account: Duplo works off Google OAuth. To log in to Duplo you need a Google account. We will add more providers like O365, Okta, etc. in the future.

  • Docker Knowledge: If you are deploying a Docker-based microservice using Duplo, it is assumed that you are familiar with Docker.

    • You should have a Docker image for your application that has been tested locally on your computer. Make sure it runs in detached mode (i.e., with the docker run -d option).
    • The image should have been pushed to your repository. If the repository is private, you will have to set the credentials for Docker Hub in your Duplo account.
  • AWS SDK Familiarity: If you are using AWS services like S3, DynamoDB, SQS, and SNS, you should have a basic knowledge of how these services are consumed. The interface to create these services via Duplo is declarative and self-explanatory. You do not need any access keys in your code to access these services. Use the AWS constructor that takes no credentials, only a region, which must be us-west-2 unless specified otherwise by your enterprise administrator or during signup in DuploLive.
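
For example, with the Node.js SDK the client is constructed with only a region; access is granted through the IAM instance profile that Duplo wires up on the host. A minimal sketch (the bucket name is a placeholder):

    // Sketch: no access keys anywhere; only the region is passed.
    // This works from a Duplo host because credentials come from the instance profile.
    const AWS = require('aws-sdk');

    const s3 = new AWS.S3({ region: 'us-west-2' });

    // List objects in a bucket created through Duplo (placeholder name).
    s3.listObjectsV2({ Bucket: 'my-duplo-bucket' }, (err, data) => {
      if (err) console.error(err);
      else (data.Contents || []).forEach((obj) => console.log(obj.Key));
    });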

Terminologies

  • Tenant or Project: A tenant or project is a sandbox or unit of deployment. All resources in a given tenant are isolated from other tenants via security groups and IAM policies (optionally VPC). For applications to be reachable outside the sandbox, a port mapping or ELB must be created.
  • User: This is an individual with a user ID. Each user could have access to one or more tenants\projects.
  • Host: This is an EC2 instance or VM. This is where your application will run.
  • Service: A Service is your application code packaged as a single Docker image and running as a set of one or more containers. It is specified as: image name; replicas; ENV variables; volume mappings, if any. DuploEnterprise also allows running applications that are not packaged as Docker images.
  • ELB: A Service can be exposed outside of the tenant\project via an ELB and DNS name. An ELB is defined as: service name + container port + external port + internal-or-internet-facing. Optionally, a wildcard certificate can be chosen for SSL termination. In DuploLive all ELBs are publicly exposed. In DuploEnterprise you can choose to make one internal, which will expose it only within your VPC to other applications.
  • DNS Name: By default, when a Service is exposed via an ELB, Duplo will create a friendly DNS name. A user can choose to edit this name. In DuploEnterprise the domain name must have been configured in the system by the admin. In DuploLive the domain name is duplocloud.net.
  • Docker Host or Fleet Host: If a host is marked as part of the fleet, DuploCloud will use it to deploy containers. If the user needs a host for development purposes, say a test machine, he would mark it as not part of the pool or fleet.

Quick Start - A Docker Microservice

Deployment is a three-step process. Watch this short video that shows how to package your code in a Docker image and deploy it.

  • Create a Host from the menu Deployment --> All Hosts --> + sign above the table. Choose the desired instance type. The available instance types are set by your administrator or the plan you choose in DuploLive. If you are not using this host for hosting containers, mark Fleet Host as false.
  • Deploy a Service (application) from the menu Deployment --> Services --> + sign above the table. Give a name for your service (no spaces); number of replicas; Docker image; volumes (if any). The number of replicas must be less than or equal to the number of hosts in the fleet. The syntax for ENV variables is JSON key-value pairs, i.e. { "VAR1" : "VAL1", "VAR2" : "VAL2" }. The syntax for volume mappings is "/home/ubuntu:/somedir".
  • Create an ELB from the menu Deployment --> Services --> + sign above the table for Load Balancer Configuration. The URL suffix you specify under Health Check will be used by Duplo during rolling upgrades: when a service image is changed, Duplo takes down one container replica at a time and brings up a new one; once it is running, the Duplo agent on the host calls the URL and expects a 200 OK. If it does not get a 200 OK, the upgrade is paused and the user needs to update the service with an image that fixes the issue (a sketch of such a health-check endpoint appears after this list).
  • The DNS name for the service will be present in the Services Table. It takes about 5 to 10 minutes for the DNS name to start resolving.
  • Update a Service from the menu Deployment --> Services --> select the service from the table and click on Edit.
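
The health check is just an HTTP endpoint in your application that returns a 200 OK when the service is healthy. A minimal sketch using Express (Express and the /health path are assumptions; use whatever URL suffix you configure in the Load Balancer form):

    // Minimal health-check sketch; the Duplo agent expects a 200 OK
    // from the URL suffix configured under Health Check.
    const express = require('express');
    const app = express();

    app.get('/health', (req, res) => res.sendStatus(200)); // path is illustrative

    app.listen(3000);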

Example NodeJS App

In this demo, we will deploy a simple Hello World NodeJS web app. Duplo pulls Docker images from Docker Hub. You can choose a public image or provide credentials to access your private repository. For the sake of this demo, we will use a ready-made image available in Duplo's repository on Docker Hub.

  • Login to your Duplo console.
  • Select "Deployments" from the tab on the top left corner.
  • Select "Hosts" from the tabs. A Host is the instance in which your docker container will run. You can should choose a host with appropriate processing capacity for your application
  • Click on the plus icon to choose your host. Fill out the advanced configuration form if required and click submit.
  • You should now see your Host present in the table. Please give it a moment to instantiate.
  • Next, let's create a Service. A Service is nothing but a container with a user-specified image and environment variables. Let's go ahead and click the plus icon to create a new service.
  • Name the service "Test-service". For this demo we will use the latest nodejs-hello image from Duplo's public Docker Hub repository. Fill in "duplocloud/nodejs-hello:latest" in the Docker Image field.
  • Enter the desired number of replicas you want in the swarm. Please note that each replica runs on an individual Host, so the number of replicas must be less than or equal to the number of Hosts. For the sake of this demo we will choose 1.
  • Fill in the desired environment variables; this is ideal for credentials or application-specific configurations.
  • Volume mapping is super easy: simply give the host path and container path. Please note that we highly recommend keeping the Hosts stateless and using S3 for static assets. We will keep this field empty for this demo.
  • Hit Submit! Please wait a moment for the service to initialize.
  • Almost there. Since the nodejs-hello image serves on port 3000, we need to create a load balancer (LB) configuration to map the external port (LB) to the internal port (container).
  • Click the plus icon on the load balancer configuration table. The form will be pre-filled with some information from the service we just created. Fill out the form and click Submit. This will also create a DNS name that we can use.
  • Please wait ~5 minutes, as it can take a while for the DNS record changes to propagate.
  • Head to the URL shown in the Services table. You should see the Hello World app serving our welcome message.
  • Congratulations! You have just launched your first web service on Duplo!
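
For reference, the nodejs-hello image is in the spirit of a minimal Node.js server like the sketch below (the actual source lives in the repo linked in the next tutorial; this is just an equivalent illustration):

    // Minimal Hello World server listening on port 3000,
    // the container port we mapped through the load balancer.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello World\n');
    }).listen(3000);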

Deploying an S3-Backed NodeJS Webapp

In the last tutorial we deployed a NodeJS-based web server and accessed it using the DNS name that Duplo created for us. In this tutorial we will take it a step further. We will create a text file with a message and upload it to S3. Next, we will modify our NodeJS application to access this file and display the message to every visitor. The purpose of this tutorial is to show how easy it is to manage resources on AWS using Duplo.

  • The code in the Docker container that we used in Part 1 can be found on this repo. It's a simple NodeJS web server. Clone this repository and make the following changes.
  • The current app.js is a plain Hello World web server.
  • Let's make the following changes: add a link variable that will point to the S3 bucket we will create shortly, and a message variable for the message that the text file in the S3 bucket will contain (a sketch of the modified app.js appears after this list).
  • Let's go ahead and create a text file with the message "My Name is Pranay".
  • Now that we have this file, let's upload it to S3. Log in to your Duplo Console and navigate to the AWS tab.
  • Click on the plus icon to create a new resource, select S3 from the dropdown, and hit Submit.
  • This will create an S3 bucket and generate a name for us.
  • Now click on the AWS Console button; this will take us to the AWS console, where we can upload the file to S3 and also modify permissions.
  • From the list of services shown, select S3 under the Storage options.
  • From the list of buckets, search for the bucket name that matches the name in your Duplo console and click on it. You should now see the bucket options. Select Upload and choose the text file that we created earlier. Hit Next till the end and the file should be uploaded.
  • Click on the file name and copy the link for the file in the bucket.
  • Now let's assign this link to the link variable in app.js.
  • Almost there; all we need to do now is create a Dockerfile and push the image to Docker Hub so that Duplo can access it.
  • I have created a simple Dockerfile inspired by this blog post from NodeJS.
  • We will now run the docker build command to build an image from this Dockerfile, e.g. docker build -t <your-dockerhub-user>/nodejs-hello-s3 . (the image name is a placeholder).
  • And then docker push (e.g. docker push <your-dockerhub-user>/nodejs-hello-s3) to push it to Docker Hub.
  • All set! Let's create a new service as we did in the last tutorial to run this new Docker container.
  • In the Services tab, click on the plus sign and fill out the options.
  • It's important to note that this service, along with the previous one, will run on the single host that we created in the previous tutorial! All we need to do is create a new load balancer for this service so that we can access it.
  • Click on the load balancer plus symbol and fill out the selections.
  • As before, give it a minute and hit the freshly generated link in the browser. Voila!
  • So now we have two services which we can access via two different load balancers, running on a single host (EC2 instance)! With this easy S3 integration we can do all sorts of fun stuff, like storing static assets and JavaScript in S3.
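
For reference, a sketch of the modified app.js described in this tutorial (the actual repo code may differ; the link URL is a placeholder for the S3 object link you copied):

    // Sketch: fetch the message file from S3 and serve it to every visitor.
    const http = require('http');
    const https = require('https');

    const link = 'https://<your-bucket>.s3.amazonaws.com/message.txt'; // placeholder

    http.createServer((req, res) => {
      https.get(link, (s3res) => {
        let message = '';
        s3res.on('data', (chunk) => { message += chunk; });
        s3res.on('end', () => {
          res.writeHead(200, { 'Content-Type': 'text/plain' });
          res.end(message); // e.g. "My Name is Pranay"
        });
      }).on('error', () => {
        res.writeHead(502);
        res.end('Could not fetch the message from S3');
      });
    }).listen(3000);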

Configuring AWS Resources

Duplo provides a declarative interface to create and delete AWS resources. All resources created in a given tenant are accessible from the hosts in the same tenant. You do not need any access keys in your code to access these services. Use the AWS constructor that takes no credentials, only a region, which must be us-west-2 unless specified otherwise by your admin.

  • RDS Database: Create a MySQL or PostgreSQL DB from the menu Deployment --> Relational Databases --> + sign above the table. Once the DB is created, which takes about 5 to 10 minutes, the endpoint will be shown in the table. Use this endpoint, DB identifier, and credentials to connect to the DB from your application running in the EC2 instance. Note that the DB is accessible only from inside the EC2 instances (including the containers running in them) in the current tenant. The best way to pass the DB endpoint, DB name, and credentials to your application is through ENV variables of the Service (see the sketch after this list).
  • ElastiCache: Create a Redis or Memcached instance from Deployment --> Redis and Memcache --> + sign above the table. The best way to pass the cache endpoint to your application is through ENV variables of the Service.
  • S3, DynamoDB, SQS and SNS: Create these resources from Deployment --> Other AWS resources --> + sign above the table. Duplo will auto-generate a name for the resource and display it in the UI. The best way to pass the resource name to your application is through ENV variables of the Service. Note that SQS takes a few minutes to appear in the UI after triggering a deployment.
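
For example, if you set the Service's ENV variables to { "DB_HOST" : "<rds-endpoint>", "DB_USER" : "<user>", "DB_PASSWORD" : "<password>" }, the application simply reads them from its environment; a sketch (the variable names are your choice, not fixed by Duplo):

    // Sketch: read the endpoint and credentials that the Duplo Service
    // injected as ENV variables (names here are illustrative).
    const dbHost = process.env.DB_HOST;
    const dbUser = process.env.DB_USER;
    const dbPass = process.env.DB_PASSWORD;

    console.log(`Connecting to ${dbHost} as ${dbUser}...`);
    // ...hand these to your MySQL/PostgreSQL client of choice.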

Using AWS Console

Duplo users get AWS console access for advanced configuration of the S3, DynamoDB, SQS, and SNS resources that were created through Duplo. The ELB and EC2 parts of the console are not supported yet. Under the menu "S3, Dynamo..." there is a link to the AWS console. This will launch the AWS UI with permissions scoped to the current tenant\project. You don't have permissions to create new resources directly from the AWS console; you need to do that in Duplo. You can do all operations on resources already created via Duplo. For example, you can create an S3 bucket from the Duplo UI and go to the AWS console to add or remove files, set up static web hosting, etc. Similarly, create a DynamoDB table in Duplo and use the AWS console to add or remove entries in the table.

You will notice that in the console you can see the names of *all* S3, DynamoDB, SQS, and SNS resources, even the ones that don't belong to you. Don't be alarmed: only the names are visible; no one can access another person's resources. This is a painful limitation of the AWS console, where one cannot see one's own resources unless we grant the List:* permission. This is why we anonymize the resource names: to prevent leaking any private information through them. We are eagerly waiting for AWS to fix it.

Advanced Functions

  • Allocation tags: By default, Duplo spreads the container replicas across the hosts. Using an allocation tag, the user can pin a container to a set of hosts that have a specific tag. First, mark a given set of hosts with a metadata key-value pair (menu Deployment --> Fleet Host --> MetaData) with "AllocationTags" as the key and a value of your choice, for example "highmemory;highcpu" or "serviceA". Then, when creating a service, set the allocation tag value to be a substring of the above tag, for example highmemory or serviceA. The allocation algorithm tries to pick a host where the AllocationTag specified in the service is a substring of the AllocationTags of the host, as in the sketch below.
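
In JavaScript terms, the matching rule looks roughly like this (illustrative only):

    // Sketch of the allocation rule: a host is eligible when the service's
    // AllocationTag is a substring of the host's AllocationTags metadata.
    function hostIsEligible(hostAllocationTags, serviceAllocationTag) {
      return hostAllocationTags.includes(serviceAllocationTag);
    }

    hostIsEligible('highmemory;highcpu', 'highmemory'); // true
    hostIsEligible('highmemory;highcpu', 'serviceA');   // false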

FAQ

  • How do I SSH into the host?
    Under the "All Hosts" menu, click on the black icon below the table. It will provide the key file and instructions to SSH.
  • My host is Windows; how do I RDP?
    Under the "All Hosts" menu, click on the black icon below the table. It will provide the password and instructions to RDP.
  • How do I get into the container where my code is running?
    Under the Services Status tab, find the host where the container is running. Then SSH into the host (see instructions above), run "sudo docker ps", and get the ID of your container; you can tell which one is yours by the image ID. Then run "sudo docker exec -it <containerid> bash". Don't forget the sudo in docker commands.
  • I cannot connect to my service URL; how do I debug?
    Make sure the DNS name is resolving by running "ping <dns-name>" on your computer. Then check whether the application is running by testing it from within the container: SSH into the host, then connect to your Docker container using the docker exec command (see above). From inside the container, curl the application URL using the IP 127.0.0.1 and the port where the application is running. Confirm that this works. Then curl the same URL using the IP address of the container instead of 127.0.0.1; the IP address can be obtained by running the ifconfig command in the container.
    If the connection from within the container is fine, exit the container into the host. Now curl the same endpoint from the host, i.e. using the container IP and port. If this works, then under the ELB UI in Duplo note down the host port that Duplo created for the given container endpoint; this will be in the 10xxx range or the same as the container port. Now try connecting to the HostIP and the DuploMappedHostPort just obtained. If this works as well but the service URL is still failing, contact your enterprise admin or duplolive-support@duplocloud.net.
  • What keys should I use in my application to connect to the AWS resources I have created in Duplo, like S3, DynamoDB, SQS, etc.?
    You don't need any. Use the AWS constructor that takes only the region as the argument (us-west-2). Duplo links your instance profile and the resources, so a host in Duplo already has access to the resources within the same tenant\project. Duplo AWS resources are reachable only from Duplo hosts in the same account. You CANNOT CONNECT TO ANY DUPLO AWS RESOURCE FROM YOUR LAPTOP.
  • What is a rolling upgrade and how do I enable it?
    If you have multiple replicas in your service, i.e. multiple containers, then when you update your service, say by changing an image or an ENV variable, Duplo will make the change one container at a time: it first brings down one container, then brings up a new one on that host with the updated config. If the new container fails to start, or the health-check URL set by the user does not return a 200 OK, Duplo will pause the upgrade of the remaining containers. A user has to intervene by fixing the issue, typically by updating the service with a newer image that has a fix. If no health-check URL is specified, Duplo only checks that the new container is running. To specify a health check, go to the ELB menu, where you will find the health-check URL suffix.
  • I want to have multiple replicas of my MVC service; how do I make sure that only one of them runs migrations?
    Enable a health check for your service and make sure that the API does not return a 200 OK until the migration is done. Since Duplo waits for the health check to pass before upgrading the next replica, it is guaranteed that only one instance runs migrations (see the sketch after this FAQ).
  • One or more of my containers is showing a Pending state; how can I debug?
    If it is Pending when the desired state is Running, the image may still be downloading, so wait a few minutes. If it has been more than five minutes, check the faults from the button below the table. Check that your image name is correct and does not have spaces. Image names are case-sensitive and should be all lower case, i.e. the image name in Docker Hub should also be lower case.
    If the current state is Pending when the desired state is Delete, this container is the old version of the service. It is still running because the system is in a rolling upgrade and the previous replica has not been successfully upgraded yet. Check the faults in the other containers of this service for which the current state is Pending with desired state Running.
  • Some of my containers' status says "pending delete"; what does it mean?
    This means Duplo wants to remove these containers. The most common reason is that another replica of the same service was upgraded and is now not operational, so Duplo has blocked the upgrade. You might even see the other replicas in a "Running" state, but it is possible that the health check is failing, which is why the rolling upgrade is blocked. To unblock it, fix the service configuration (image, ENV, etc.) to an error-free state.
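
Here is a sketch of the migration-gated health check mentioned in the FAQ above (Express and the /health path are assumptions; wire in your own migration logic):

    // Sketch: hold back 200 OK on the health-check URL until migrations
    // finish, so the rolling upgrade does not advance to the next replica.
    const express = require('express');
    const app = express();

    let migrated = false;

    async function runMigrations() { /* your migration logic here */ }

    app.get('/health', (req, res) => res.sendStatus(migrated ? 200 : 503));

    runMigrations().then(() => { migrated = true; });
    app.listen(3000);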

Serverless Applications with Lambda and Api Gateway

Duplo makes deploying serverless applications a breeze. A step-by-step tutorial on how to deploy a Lambda application, along with a video version, is at https://www.linkedin.com/pulse/deploy-existing-web-applications-serverless-using-aws-thiruvengadam/


Deploying from UI

The previous section showed the deployment using the CLI tool; the same can be performed from the UI. Deployment is a three-step process.

  • Create a Zip file: Generate a Zip package of your Lambda code. The Lambda function code should be at the root of the package (see the sketch after this list). If you are using virtualenv, all dependencies should be packaged. See the AWS documentation at http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html for detailed instructions on how to generate a package. We personally prefer using tools like Zappa and Serverless.
  • Create an S3 bucket and upload the Zip file: Deployment --> Aws --> + sign above the table called SQS, S3,.... Choose S3 from the drop-down. Give a name for your bucket, or leave it blank and a name will be auto-generated. Then click on the AWS Console button to get into the AWS console for this S3 bucket and upload the zip file we just created.
  • Create the Lambda function: Deployment --> Aws --> + sign above the second table (Lambda Functions). Give a name for the Lambda function and the other values. This will create the Lambda function. Click on AWS Console to go to the AWS console for this function and test it. You can look at the tutorial above for the same, or consult the AWS documentation.
  • Updating the Lambda function and configuration: To update the code for the Lambda function, create a new package with a different name and upload it to the S3 bucket again. Then select the Lambda function in the table and click Edit Function. Make sure the right S3 bucket has been selected and provide the name of the function. To update function configuration like timeout, memory, etc., use the Edit Configuration button.
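
For instance, a minimal Node.js Lambda handler placed at the root of the zip package (a sketch; any runtime supported by Lambda works the same way):

    // index.js at the root of the zip package (illustrative).
    exports.handler = async (event) => {
      return { statusCode: 200, body: 'Hello from Duplo Lambda' };
    };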

Integrating with other resources

Duplo enables you to create a classic microservices-based architecture where your Lambda function can integrate with any other resource within your tenant, like S3, DynamoDB, RDS, or other Docker-based microservices. Duplo will implicitly enable the Lambda function to communicate with other resources, but block any communication outside the tenant (except via the ELB).

Triggers and Event Sources

To set up a trigger or event source, the resource needs to be created via the Duplo portal. Subsequently, one can configure a trigger directly from that resource to the Lambda function in the AWS console menu of your Lambda function. Resources could be S3 buckets, API Gateway, DynamoDB, SNS, etc. An example trigger via API Gateway is described in the tutorial above.

Passing Secrets

Passing secrets to a Lambda function can be done in much the same way as passing secrets to your Docker-based service, i.e. using environment variables. For example, you can create a SQL database from the RDS menu in Duplo and provide a username and password; then, in the Lambda menu, give the same username and password. No secrets need to be stored anywhere outside, like Vault or a Git repo.
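
A sketch of a Lambda handler reading such secrets (the variable names are whatever you configured in the Duplo Lambda menu; these are illustrative):

    // Sketch: read DB credentials from the Lambda's ENV variables
    // (names are illustrative, set in the Duplo Lambda menu).
    exports.handler = async (event) => {
      const dbHost = process.env.DB_HOST;
      const dbUser = process.env.DB_USER;
      const dbPass = process.env.DB_PASSWORD;
      // ...connect to the RDS instance created from the Duplo RDS menu.
      return { statusCode: 200, body: `Configured for ${dbHost}` };
    };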

Api Gateway

The AWS API Gateway RestApi is created from the Duplo portal, which takes care of creating the security policies to make the API Gateway accessible to other resources (like Lambda functions) within the tenant. From the Duplo portal only create the REST API; all other configuration of the API (like defining methods and resources and pointing them to Lambda functions) should be done in the AWS console. The console for the API can be reached by selecting the API in the Deployment --> Aws table and then clicking on the AWS Console button. The above tutorial shows the integration of API Gateway and a Lambda function.

Administrator's Guide

This section applies only to the DuploCloud Enterprise version, not DuploLive. Setup is done using a CloudFormation template.

Configuration Scopes

  • Base Configuration: This must be set up by the user outside Duplo and provided as an input to DuploCloud. This is a one-time configuration. It includes one VPC and subnets with appropriate routes. It is recommended that you set up at least 2 Availability Zones, with one public and one private subnet in each availability zone. All resources (EC2 instances, RDS, ElastiCache, etc.) are placed in a private subnet by default. The ELB is placed in a public or private subnet based on the user's (tenant's) choice. DuploCloud provides a default CloudFormation template that can be used if desired. Further details are provided below.
  • Plan Configuration: Every tenant is part of one and only one plan. Configuration applied at the plan level applies to all tenants in the plan. A plan can be used to denote, say, a dev environment, or a class of tenants (private\public facing). Except for the VPC and subnets, the rest of the parameters can be changed at will. Plan parameters include:
    {
    			 "Name": "devplan", /* Name of the plan */
    			 "Images": [ /* AMIs that are made available for the tenants */
    			  {
    			   "Name": "Duplo-Host-Docker-v1.17", /* User friendly Name of the AMI displayed to the user */
    			   "ImageId": "ami-0861ea68", /* AWS AMI ID */
    			   "OS": "Linux"
    			  },
    			  {
    			   "Name": "ubuntu-dev",
    			   "ImageId": "ami-d83af7b8",
    			   "OS": "Linux"
    			  },
    			  {
    			   "Name": "Windows-Docker-Fleet",
    			   "ImageId": "ami-1977d979",
    			   "OS": "Windows"
    			  },
    			  {
    			   "Name": "Unmanaged Ubuntu-16.04",
    			   "ImageId": "ami-5e63d13e",
    			   "OS": "Linux"
    			  }
    			 ],
    			 "Quotas": [ /* Limit of on the number of resources that be used by each tenant. If none is set for a given resource type, then there is no limit. */
    			  {
    			   "ResourceType": "ec2",
    			   "CumulativeCount": 2,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "t2.medium",
    			     "MetaData": "(2CPU, 4GB)",
    			     "Count": 2
    			    },
    			    {
    			     "InstanceType": "t2.small",
    			     "MetaData": "(1CPU, 2GB)",
    			     "Count": 2
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "rds",
    			   "CumulativeCount": 1,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "db.t2.small",
    			     "MetaData": "(1CPU, 1.7GB)",
    			     "Count": 1
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "ecache",
    			   "CumulativeCount": 1,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "cache.t2.small",
    			     "MetaData": "(1CPU, 1.7GB)",
    			     "Count": 1
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "s3",
    			   "CumulativeCount": 1,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "bucket",
    			     "MetaData": "bucket",
    			     "Count": 1
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "sqs",
    			   "CumulativeCount": 1,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "queue",
    			     "MetaData": "queue",
    			     "Count": 1
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "dynamodb",
    			   "CumulativeCount": 1,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "table",
    			     "MetaData": "table",
    			     "Count": 1
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "elb",
    			   "CumulativeCount": 2,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "elb",
    			     "MetaData": "elb",
    			     "Count": 1
    			    }
    			   ]
    			  },
    			  {
    			   "ResourceType": "sns",
    			   "CumulativeCount": 1,
    			   "InstanceQuotas": [
    			    {
    			     "InstanceType": "topic",
    			     "MetaData": "topic",
    			     "Count": 1
    			    }
    			   ]
    			  }
    			 ],
    			 "AwsConfig": {
    			  "VpcId": "vpc-1eafce79", /* VPC for this tenant */
    			  "AwsHostSg": "sg-c2d899ba", /* list of security groups separated by ;. All hosts will be placed in this security groups. */
    			  "AwsElbSg": "sg-b9d899c1", /* Security group in which the ELB will be placed. This applies to traffic into the extenal (VIP) of the elb. Internal Sg between hosts and ELB will be auo setup.  */
    			  "AwsPublicSubnet": "subnet-0066b449;subnet-24e25943", /* Subnets in which hpublic facing ELBs must be placed. Each subnet corresponds to an AZ. */
    			  "AwsInternalElbSubnet": "subnet-0e66b447;subnet-23e25944", /* Subnets in which internal resources like Ec2 hosts, rds ecaache etc will be placed. If you like hosts to be public facing then put the same public subnets here. */
    			  "AwsElastiCacheSubnetGrp": "duplo-cache-aww6gk5hsy4n",
    			  "AwsRdsSubnetGrp": "duplo-vpc-21-resources-dbsubnetduplo-z9fh3g59362z",
    			  "CommonPolicyARNs": "", /*Set of IAM policy ARNs that will be applied to all tenants. */
    			  "Domain name": "" /* Name of route 53  domain that will be used for tenant custom dns names*/
    			  "CertMgrResourceArns": "" /* List of certificate arns that wil be available to the tenant for SSL termination on the ELB. */
    			 },
    			 "UnrestrictedExtLB": false,
    			 "Capabilities": {
    			  "DisableNativeApps": false,
    			  "DisablePublicEps": false,
    			  "DisablePrivateElb": false,
    			  "AssignInstanceElasticIp": false,
    			  "BlockEbsOptimization": false,
    			  "DisableSumoIntegration": false,
    			  "DisableSignalFxIntegration": false,
    			  "EnableTenantExpiry": false
    			 }
    			}
    			
  • Tenant Configuration: These are the configurations that we covered in the deployment guide.

Access Control

There are two types of roles: user and administrator. Each user can have access to one or more tenants. Each tenant can be accessed by one or more users. Administrators have access to all tenants, plus administrative functions like plan configuration, the system dashboard, system faults, etc.


Katkit - Duplo's CI/CD Component

Duplo provides a CI/CD framework that allows you to build, test, and deploy your application from GitHub commits and PRs. We call it Katkit. Katkit is an arbitrary code execution engine which allows the user to run arbitrary code before and after deployment. Katkit follows the same notion of a "Tenant" or environment, thus tying together CI and CD. In other words, the tests are run against the application in the same underlying AWS topology where one's code is running, as opposed to running them in a separate fleet of servers which would not capture the interactions of the application with the AWS infrastructure like IAM, security groups, ELB, etc.

At a high level, Katkit functions as follows:

  • A repository is linked to a Tenant.
  • The user chooses a Git commit to test and deploy.
  • Katkit deploys a service in the same tenant with a Docker image provided by DuploCloud, which is essentially like a Jenkins worker and has the Katkit agent in it.
  • The Katkit agent in the CI container checks out the code at that commit inside the container. It then executes ci.sh from the checked-out code. Essentially, each build is a short-lived service that is removed once the ci.sh execution is over.
  • The user can put any arbitrary code in ci.sh.
  • For a given run of a commit, Katkit allows the user to execute code in "phases", where in each phase Katkit repeats the above steps with a difference in the ENV variables that are set. The code inside ci.sh should read the ENV variables and perform different actions corresponding to each phase.
  • Katkit has a special phase called "deployment" where it does not run ci.sh; instead it looks for the servicedescription.js file (details below) and replaces the Docker image tag with the Git commit SHA. It is assumed that the user, before invoking the deployment phase, has gone through a prior phase in which a Docker image tagged with the Git commit SHA was built. The SHA is available as an ENV variable in every phase.

First Deployment

Before using CI/CD, the first deployment of the application needs to be done via the Duplo menus described above. Make sure that the application works as expected. Katkit is used only for upgrades of container images and for running tests before and after deployment.

Environments

In DuploEnterprise, a standard practice is to have a separate tenant for a given logical application for each deployment environment. For example, an application called taskrunner would be created as three tenants called d-taskrunner, b-taskrunner, and p-taskrunner to represent the dev, beta, and prod environments. In each tenant one can specify an arbitrary name for the environment, say "DEV", in the menu Dashboard --> ENV. This string will be set by Katkit as an ENV variable when it runs the tests during CI/CD, and thus your test code can use this value to determine what tests should be run in each environment, or for that matter take any other action.

Export Service Description

A Service Description represents the topology of a service. It is a JSON file that is used by Katkit to upgrade the running service. Go to Deployment --> Services --> Export. This will give you a JSON file. Save it as servicedescription.js under the servicedescription folder, which must exist at the root of your repository. In this file, search for "DockerImage" and change the image tag to the word <hubtag>, for example change "DockerImage": "nginx:latest" to "DockerImage": "nginx:<hubtag>". Remove the ExtraConfig and Replicas fields from the file; these hold the ENV variables and replica count, which vary from one environment to another, so during deployment Katkit will retain what is already present in the current running service. A fragment sketch is shown below.
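
A fragment sketch of the edited file (other fields stay as exported; only the tag is replaced and ExtraConfig/Replicas are removed; the service name is a placeholder):

    {
     "Name": "my-service", /* as exported */
     "DockerImage": "nginx:<hubtag>" /* was "nginx:latest" */
     /* ExtraConfig and Replicas removed; Katkit retains the running values */
    }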

Link Repository

Once the above steps have been performed, you can link your GitHub repository to your tenant. In addition to the repository name, you also need to specify the "Home Branch", which is the branch whose PRs will be monitored by Katkit for the user to run deployments. The same repository and branch combination can be linked in several tenants. If your repository has several services for different tenants, each service can be represented by a separate folder at the root; this is the Folder Path field. Katkit looks for the service description file under /servicedescription/servicedescription.js. The same repository but different folders can be used in different tenants, and the same tenant can also have different repositories.

Phases

Each CI/CD run comprises one or more phases. There are two types of phases: execution and deployment. In an execution phase, Katkit invokes the ci.sh file from the repository. The difference between two execution phases is in the ENV variables, based on which the user code in ci.sh can operate differently. There can be only one deployment phase in a given run. Katkit does not run ci.sh in the deployment phase; instead it looks for the servicedescription.js file (details above) and replaces the Docker image tag <hubtag> with the Git commit SHA. It is assumed that the user, before invoking the deployment phase, has gone through a prior phase in which a Docker image tagged with the Git commit SHA was built. The SHA is available as an ENV variable in every phase.

Katkit Config

The above configuration customizations, like phases and ENV variables, can be saved in the repository in a config file called katkitconfig.js.

Advanced Functions

  • Bring-your-own-image: By default, all tenant CI/CD runs are executed in a Docker image specified by the administrator. This image would typically have the most common packages for your organization. But a user can bring his own builder image and specify the same. The image should include the Katkit agent, which can be copied from the default builder image.
  • Bring-your-own-fleet or Local Fleet: By default, Katkit will run the builder containers in a separate set of hosts, but the user can also choose to run the build containers on the same tenant hosts that are being tested.