In this article we will demonstrate how to create a docker image and use it.

We have taken a full-blown enterprise application (Liferay 7.0 GA5) as the candidate for the docker image. This is my first attempt at docker image creation.

We assume that the reader of this article has some idea about what Docker is. If you are new to Docker then please go through the following link – What is Docker?

If you want to try out the liferay application docker image (that we are going to create in this article), then you can directly go to the following docker hub url – https://hub.docker.com/r/chatterjeesunit/liferay/

The files used for creating this docker image are uploaded to the following GitHub repository – https://github.com/chatterjeesunit/docker-liferay-mysql

1.0 Introduction

Liferay is an application that is primarily used for building portals, intranets, and websites (although it is much more than that). You can find more information at Liferay’s official website

We will be using Liferay Community 7.0 GA5 Release, as the base application for building our docker image.

1.1 Why are we creating this docker image?

The first question that may come to your mind is: why are we making this docker image of the Liferay application, and how does it help us?

  • Setting up Liferay requires a lot of time (especially for new developers/QA)
    • A lot of software needs to be downloaded and installed
      • Liferay Community
      • MySQL
      • Elastic Search Server (if we are setting up a remote search server)
    • After the installation, the first-time setup/configuration and startup of liferay also take time (during the first startup liferay creates all tables and loads sample data)
    • In general, new developers in our team used to spend 1-2 days to complete the full setup.
  • The second challenge was refreshing an environment
    • With time, most Demo/QA liferay setups become overloaded with test/junk data and need to be refreshed.
    • Refreshing the application also takes time for Demo/QA environments.

Our aim is to create a docker image that is pre-setup and pre-configured, with all the base tables and sample data already populated.

If we can create such an image, then creating a new environment or refreshing an existing one will take just 10-15 minutes (as simple as creating a new docker container from the docker image).

1.2 Is this the right approach going forward?

Probably not. The reasons are listed below

  • Generally we do not recommend having more than one application within the same container
  • The reason being that this approach is not scalable (if we want to have a clustered deployment of liferay, as in production environments)
  • The right approach would be to create 3 separate docker images – liferay / mysql / elastic search – and then use Docker Compose to build the application

But why are we building this single container image of liferay application?

  • Since we are building for Dev/QA/Demo environments rather than Production/UAT, this approach is simpler in terms of how to run/deploy a single image.
  • Secondly, I believe that Docker Compose is better suited to a microservices architecture where
    • You have multiple services running and each service would have its own image
    • Docker compose would then simply link them together and run.
    • Adding a new service is also as simple as adding a new entry in docker compose.

1.3 What will this docker image contain?

  • All three servers (Liferay / MySQL / Elastic Search) installed, set up, and pre-configured.
    • MySql Server Login Credential
      • User: root
      • Password: root
      • Database: testdb
  • Default tables and sample data are pre-loaded.
    • Default Admin Login in Liferay
      • User: test@liferay.com
      • Password : test
  • The following ports are exposed for connections from the host machine
    • 3306 – MySQL Server
    • 8080 – Liferay Application
    • 9200 – Elastic Search
    • 9300 – Elastic Search
    • 8000 – Tomcat application debugging (from IntelliJ IDEA or Eclipse)
  • User auto-created within docker
    • Username: user
    • Password: welcome

2.0 Creating the Docker Image

To create a docker image, we have to create a file named Dockerfile and add docker commands to it. We can then build the image using this Dockerfile.
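As a quick illustration of the overall shape (a minimal sketch, not the Liferay image we build later), a Dockerfile can be as small as:

```dockerfile
# minimal illustrative Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl
ENTRYPOINT ["/bin/bash"]
```

Running docker build -t myimage . in the folder containing this file builds an image tagged myimage; the individual instructions are explained in the next section.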

2.1 Basic Commands Reference for Image Creation

We will make use of the following commands for creating the docker image.
You can skip this section if you are already aware of these docker commands.

For full reference of all commands please refer to Dockerfile reference

2.1.1 FROM

  • A Dockerfile generally starts with a FROM command.
  • The FROM command is used to initialize the docker build stage.
  • The argument to the FROM command is generally a valid image that already exists.

FROM <image>[:<tag>] [AS <name>]

e.g. if we want our base image to be the Ubuntu 18.04 version, then we can use the following command.
FROM ubuntu:18.04

2.1.2 RUN

  • This is used to run commands within the image.
  • Each RUN command will create a new image layer on top of current image layer, and then execute the command.
    • A minor side effect of this is that sometimes the image size becomes too big due to multiple image layers being created because of multiple RUN commands.
    • However creating multiple image layers is beneficial
      • If any command in the Dockerfile is modified and needs to be re-run, the image will rebuild from the top of the last layer created before the modified command. It will not start the full build from scratch.
      • We can also create containers from any particular layer in the image history
  • The most common form of the RUN command is RUN <command>
  • The command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows
    e.g. RUN mkdir -p /home/user/Tools/scripts
    This command will create the ‘/home/user/Tools/scripts‘ folder within the image.
  • Multiple commands can be run in the same RUN instruction by putting \ at the end of each line and concatenating the commands using &&
    e.g. The following three RUN commands

    RUN wget https://elastic.co/../elasticsearch-2.4.6.tar.gz
    RUN tar -xvf elasticsearch-2.4.6.tar.gz
    RUN rm -f elasticsearch-2.4.6.tar.gz

    can also be executed as a single RUN command

    RUN \
    wget https://elastic.co/../elasticsearch-2.4.6.tar.gz \
    && tar -xvf elasticsearch-2.4.6.tar.gz \
    && rm -f elasticsearch-2.4.6.tar.gz

2.1.3 COPY

  • The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.

e.g. The following command will copy the file portal-ext.properties from the current directory of the host machine to the /home/user/Tools folder of the container.

COPY portal-ext.properties /home/user/Tools/

2.1.4 ENV

  • The ENV instruction sets the environment variable <key> to the value <value>.
  • This value will be in the environment for all subsequent instructions in the build stage.
  • The environment variable can be referenced using $<key>

e.g. The following will create an environment variable with the name TOOL_HOME

ENV TOOL_HOME=/home/user/Tools

We can access it later. e.g.

RUN cp java.zip $TOOL_HOME

2.1.5 USER

  • The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image, and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.

e.g. USER myuser:root

2.1.6 WORKDIR

  • The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
  • If the WORKDIR doesn’t exist, it will be created.
  • The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.

e.g. WORKDIR /home/user/Tools
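For example, a relative path composes with the previous WORKDIR:

```dockerfile
WORKDIR /home/user
WORKDIR Tools
# the working directory is now /home/user/Tools
```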

2.1.7 ENTRYPOINT

  • An ENTRYPOINT allows us to configure a container that will run as an executable.
  • We can specify a script that runs when our container starts. e.g. script to start the tomcat application within the container.

e.g. ENTRYPOINT /home/user/tomcat/bin/startup.sh
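The example above uses the shell form, which runs the script via /bin/sh -c. Docker also supports an exec form that runs the executable directly, without an intermediate shell:

```dockerfile
ENTRYPOINT ["/home/user/tomcat/bin/startup.sh"]
```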

2.1.8 EXPOSE

  • The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. Default Protocol is TCP, unless specified.
    e.g.
    EXPOSE 8080
    EXPOSE 80/udp
  • The EXPOSE instruction does not actually publish the port.
  • It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
  • To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports.
    e.g docker run -d -p 3306:3306 -p 8080:8080 chatterjeesunit/liferay

2.2 Creating Liferay Docker Image

The entire code for the Dockerfile and the other associated files required to create this image is located at https://github.com/chatterjeesunit/docker-liferay-mysql

Creating this image will involve the following steps

2.2.1 Specifying the base image

We will use Ubuntu as our base image.

#set the base image
FROM ubuntu

2.2.2 Add environment variables

We will add the environment variables to set path for Liferay home folder, JDK home, etc.

ENV TOOL_HOME=/home/user/Tools
ENV LIFERAY_HOME=$TOOL_HOME/liferay-ce-portal-7.0-ga5
ENV JAVA_HOME=$TOOL_HOME/jdk1.8.0_171
ENV JDK_HOME=$JAVA_HOME
ENV JRE_HOME=$JAVA_HOME
ENV ELASTIC_HOME=$TOOL_HOME/elasticsearch-2.4.6
ENV PATH=$PATH:$JAVA_HOME/bin:$TOOL_HOME

The last line adds Java and the Tools home folder to the PATH variable. The ENTRYPOINT script is picked up from the PATH environment variable, hence we added the TOOL_HOME folder, where we will copy our container startup script.

2.2.3 Update and Install packages

  • Update Ubuntu packages
  • Install required utilities like vim, sudo, unzip, curl, wget
  • Install MySQL Server
RUN \ 
   apt-get update \ 
  && apt-get install -y sudo unzip curl vim wget mysql-server

2.2.4 Create a user

  • By default the container runs as the root user.
  • Generally this works fine, as the root user has all the privileges
  • However we cannot run this container as the root user.
    • Because we are installing an Elastic Search server, and Elastic Search does not allow running as the root user.
  • Hence we need to create a new user and perform the remaining installations as the newly created user.
  • We will create a new user with
    • name: user
    • password: welcome
    • default group: root
    • additional group: sudo
  • We will also add the user to the sudoers file, so that we can run a few commands with sudo
RUN \
   useradd -d /home/user -ms /bin/bash -g root -G sudo -p welcome user \
 && echo "user ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

2.2.5 Change logged in user and current working directory

Change the logged-in user to the newly created user, and also change the working directory to /home/user/Tools, where we will install all the applications

#Change user (don't work as root)
USER user

#Change working directory to Tools folder
WORKDIR $TOOL_HOME

2.2.6 Install JDK1.8 / Liferay / Elastic Search

  • Download JDK 1.8 / Liferay Community 7.0 GA5 / Elastic Search 2.4.6
  • Extract them to the /home/user/Tools folder
RUN \ 
   sudo chown -R user:root $TOOL_HOME \ 
   && wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.6/elasticsearch-2.4.6.tar.gz \ 
   && wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u171-b11/512cd62ec5174c3487ac17c61aaa89e8/jdk-8u171-linux-x64.tar.gz \ 
   && wget https://excellmedia.dl.sourceforge.net/project/lportal/Liferay%20Portal/7.0.4%20GA5/liferay-ce-portal-tomcat-7.0-ga5-20171018150113838.zip \ 
   && tar -xvf jdk-8u171-linux-x64.tar.gz \ 
   && unzip liferay-ce-portal-tomcat-7.0-ga5-20171018150113838.zip \ 
   && tar -xvf elasticsearch-2.4.6.tar.gz \ 
   && rm -f liferay-ce-portal-tomcat-7.0-ga5-20171018150113838.zip \ 
   && rm -f jdk-8u171-linux-x64.tar.gz \ 
   && rm -f elasticsearch-2.4.6.tar.gz

2.2.7 Download and load Elastic Search plugins.

Elastic Search for liferay requires a few additional plugins to work, so we will install them.

RUN \ 
   mkdir -p $TOOL_HOME/elastic-plugins \ 
   && cd $TOOL_HOME/elastic-plugins \ 
   && wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/analysis-icu/2.4.6/analysis-icu-2.4.6.zip \ 
   && wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/analysis-kuromoji/2.4.6/analysis-kuromoji-2.4.6.zip \ 
   && wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/analysis-smartcn/2.4.6/analysis-smartcn-2.4.6.zip \ 
   && wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/analysis-stempel/2.4.6/analysis-stempel-2.4.6.zip \ 
   && cd $ELASTIC_HOME/bin/ \ 
   && ./plugin install file:///$TOOL_HOME/elastic-plugins/analysis-icu-2.4.6.zip \ 
   && ./plugin install file:///$TOOL_HOME/elastic-plugins/analysis-kuromoji-2.4.6.zip \ 
   && ./plugin install file:///$TOOL_HOME/elastic-plugins/analysis-smartcn-2.4.6.zip \ 
   && ./plugin install file:///$TOOL_HOME/elastic-plugins/analysis-stempel-2.4.6.zip

2.2.8 Copy Scripts and Default Configs

  • Create a container startup script, that will
    • Start the MySQL Server
    • Start the Elastic Search Server
    • Start the Liferay application server
    • Tail the liferay tomcat logs (so that the container does not stop after the liferay application server starts)
  • Copy this script to the Tools folder (this folder is already added to the PATH environment variable)
  • Make this startup script executable
#Copy start script to Tools folder
COPY start $TOOL_HOME/
RUN sudo chmod +x start
  • Liferay also requires a few configuration files
    • ElasticSearch configuration file
      • For specifying the address of the elastic search server
      • Elastic search cluster name, etc
    • Portal properties with details of database connections, etc
  • Create all these files, and COPY them to the proper locations in the liferay folder
#Copy Default Liferay config and properties
COPY liferay-configs/com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config $LIFERAY_HOME/osgi/configs/
COPY liferay-configs/portal-ext.properties $LIFERAY_HOME/
COPY liferay-configs/portal-setup-wizard.properties $LIFERAY_HOME/
  • Create two more scripts
    • MySQL init script – mysql-init.sql
      • This script will set the password for the root user in the MySQL Server.
      • Grant access to the root user from any machine/host.
      • Create a default database with the name ‘testdb’.
    • A liferay first-time startup script – liferay-first-startup.sh
      • This script will start the liferay application server.
      • Wait/sleep till the application server fully starts (and creates tables, sample data, etc.)
      • Then it will exit
  • Place both these scripts in a folder and copy the scripts folder to the container
#Copy scripts
COPY scripts $TOOL_HOME/scripts
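The actual scripts are in the GitHub repository linked earlier. As a rough sketch of what the start script does (the exact commands here are assumptions; see the repository for the real version):

```shell
#!/bin/sh
# sketch of the container startup script ("start")

sudo service mysql start                       # 1. start MySQL Server
$ELASTIC_HOME/bin/elasticsearch -d             # 2. start Elastic Search as a daemon
$LIFERAY_HOME/tomcat-8.0.32/bin/startup.sh     # 3. start the Liferay application server

# 4. tail the tomcat log so the container's main process keeps running
tail -f $LIFERAY_HOME/tomcat-8.0.32/logs/catalina.out
```

The final tail is what keeps the container alive: when the process started by ENTRYPOINT exits, the container stops.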

2.2.9 Default MySQL Configurations

  • Comment out the bind-address configuration
  • Add a configuration to keep table names in lower case
  • Run the mysql-init script (added in the previous step)
RUN \ 
  sudo chmod 664 /etc/mysql/mysql.conf.d/mysqld.cnf \ 
  && sudo sed -i s/bind-address/#bind-address/g /etc/mysql/mysql.conf.d/mysqld.cnf \ 
  && sudo echo lower-case-table-names=1 >> /etc/mysql/mysql.conf.d/mysqld.cnf \ 
  && sudo service mysql start \  
  && sleep 10 \ 
  && sudo mysql < $TOOL_HOME/scripts/mysql-init.sql \ 
  && sudo service mysql stop
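To see what the sed and echo edits do, here is the same transformation applied to a stand-in config file:

```shell
#!/bin/sh
# demonstrate the config edits on a stand-in for mysqld.cnf
printf 'bind-address = 127.0.0.1\n' > mysqld.cnf

sed -i 's/bind-address/#bind-address/g' mysqld.cnf   # comment out bind-address
echo 'lower-case-table-names=1' >> mysqld.cnf        # keep table names lower case

cat mysqld.cnf
```

After this runs, the file contains #bind-address = 127.0.0.1 followed by lower-case-table-names=1, so MySQL listens on all interfaces and stores table names in lower case.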

2.2.10 Liferay Startup and Creation of Default Tables and Sample Data

  • Set memory configurations for liferay
  • Run the liferay first time startup script added in Step 2.2.8
RUN \  
   sed -i 's/Xmx1024m/Xms2048m -Xmx2048m/g' $LIFERAY_HOME/tomcat-8.0.32/bin/setenv.sh \ 
   && sh $TOOL_HOME/scripts/liferay-first-startup.sh
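The “wait till the application server fully starts” step in liferay-first-startup.sh can be implemented by polling the tomcat log for its startup marker. Here is a self-contained demonstration against a stand-in log file (the real script would point at catalina.out under $LIFERAY_HOME/tomcat-8.0.32/logs; the marker text is tomcat's standard startup message):

```shell
#!/bin/sh
: > catalina.out      # stand-in for tomcat's catalina.out

# simulate tomcat writing its startup marker after a couple of seconds
( sleep 2; echo "INFO: Server startup in 52718 ms" >> catalina.out ) &

# poll until the marker appears (the real script would then call
# shutdown.sh and stop mysql, so the image is committed in a clean state)
until grep -q "Server startup in" catalina.out; do
  sleep 1
done
echo "liferay started"
```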

3.0 Create and Upload Image to Docker Hub

3.1 Build the Image

docker build -t liferay:7.0GA5 .
  • liferay:7.0GA5 is the image name and tag
  • the trailing . tells docker to pick the Dockerfile from the current directory

3.2 Upload the Image to Docker hub

  • Go to Docker Hub, and create an account (if you don’t have one)
  • Go to the folder where you have placed your Dockerfile, and run the commands below
    • Create a public repository in docker hub. e.g with name as liferay
    • Login to docker
      docker login --username=chatterjeesunit
    • Tag your image
      docker tag 4f6c72e141e9 chatterjeesunit/liferay:7.0GA5

      where

      • 4f6c72e141e9 is the docker image id
      • chatterjeesunit, is the username
      • liferay, is the image name
      • 7.0GA5, is the new tag name
    • Push docker image to docker hub repository
      docker push chatterjeesunit/liferay:7.0GA5

4.0 Running Liferay application using the docker image : Commands Reference

All the commands given below assume that liferay is the name of the container.

You can use the image you have built, or you can download it from the docker repository using the command below

docker pull chatterjeesunit/liferay:7.0GA5

4.1 Create a new container

docker run -d --name liferay chatterjeesunit/liferay:7.0GA5

where

  • liferay:7.0GA5  – is the image name and version
  • liferay – is the name of the new container
  • -d option is specified to run the docker container in detached mode.

We can also expose the ports, so that the application server, mysql server, elastic search server can also be accessed from localhost (the host machine). [Recommended Option]

docker run -d --name liferay -p 8080:8080 -p 3306:3306 -p 9200:9200 -p 9300:9300 -p 8000:8000 chatterjeesunit/liferay:7.0GA5

4.2 Starting and Stopping the container

docker start liferay

docker stop liferay

4.3 Other commands

4.3.1 Finding IP Address of the docker container

docker inspect --format '{{ .NetworkSettings.IPAddress }}' liferay

4.3.2 Printing application server logs

docker exec -it liferay tail -f /home/user/Tools/liferay-ce-portal-7.0-ga5/tomcat-8.0.32/logs/catalina.out

OR

docker logs --follow liferay

4.3.3 Get access to bash shell of the container

This is useful if you need to access the container directly and explore it.

docker exec -it liferay bash

4.4 Connect to MySql Server 

First option is to connect to MySQL Server using the IP Address of the docker container.

mysql -h <container-ip> -u root -p

e.g. mysql -h 172.17.0.2 -u root -p

The second option is to connect via the published port on localhost (only if you specified the -p 3306:3306 option while creating the container). Pass the host explicitly so the mysql client connects over TCP rather than the local socket:

mysql -h 127.0.0.1 -u root -p

4.5 Accessing server URLs from the host machine

If you created the container with the -p options shown in section 4.1, the servers can be accessed directly from the host machine – Liferay at http://localhost:8080, and Elastic Search at http://localhost:9200. Otherwise, replace localhost with the container’s IP address (see section 4.3.1).

5.0 Customizing the liferay application running on docker container

All the commands given below assume that liferay is the name of the container.

There are situations where we may need to deploy OSGi jars, portal properties, or WAR files to the liferay application.

5.1 Deploying custom OSGI Jars to Liferay container

Deploying OSGI Jars directly

  • Make sure your liferay container is running
  • Run below command
    docker cp XYZ.jar liferay:/home/user/Tools/liferay-ce-portal-7.0-ga5/deploy
  • OSGI jars will be deployed automatically

Deploying OSGI Jars (zip file option)

  • Create a zip file containing all OSGI Jars.
    • The name of the zip file should be osgi_jars.zip
    • Make sure the zip file only has files and no folder structure within it
  • Run following command to copy the zip file to container’s liferay base folder
    docker cp osgi_jars.zip liferay:/home/user/Tools/liferay-ce-portal-7.0-ga5/
  • Restart docker container
  • When you start it, the startup script automatically unzips the OSGI jar files in the deploy folder for automatic deployment
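One way to build such a flat zip is zip’s -j (“junk paths”) option, which strips folder prefixes from the entries (the jar names below are placeholders; assumes the zip utility is installed):

```shell
#!/bin/sh
# build a flat osgi_jars.zip from jars in a build folder
mkdir -p build/libs
: > build/libs/module-a.jar     # placeholder jars; use your real OSGi jars
: > build/libs/module-b.jar

# -j drops the build/libs/ prefix so the zip contains files only
zip -j osgi_jars.zip build/libs/*.jar
```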

5.2 Deploying properties and config files

Deploying custom property files

  • There are scenarios where we want to modify the default properties, e.g. portal-ext.properties
  • Copy the property file to the docker container with below command
    docker cp portal-ext.properties liferay:/home/user/Tools/liferay-ce-portal-7.0-ga5/
  • Restart the docker container

Deploying multiple custom property / config files (zip file option)

  • There could be a scenario to deploy multiple property/ configuration files. e.g
    • Portal Properties
    • OSGI Config files
  • Create a zip file of all properties and config files
    • The name of the file should be config.zip
    • Keep the folder structure in the zip file relative to the liferay base folder
  • Sample zip file content would be like
    |_ portal-ext.properties
    |_ portal-setup-wizard.properties
    |_ osgi
      |_ configs
        |_ com.liferay....BundleBlacklistConfiguration
        |_ com.liferay...ElasticsearchConfiguration.config
    |_ tomcat-8.0.32
      |_ ROOT
        |_ WEB-INF
           |_ classes
              |_ log4j.properties
  • Copy this zip file to container’s base liferay folder
    docker cp config.zip liferay:/home/user/Tools/liferay-ce-portal-7.0-ga5/
  • Restart docker container
  • When you start it, the startup script automatically unzips and deploys the config files

6.0 Troubleshooting

6.1 Debugging Application

As a developer, we need to debug our application sometimes.
We were able to debug our liferay application from IntelliJ IDEA, but not from Eclipse (as of now)

  • To do so, you first need to expose port 8000
  • Instead of running the startup.sh file, run the following commands to start liferay in debug mode
    export JPDA_TRANSPORT="dt_socket"
    export JPDA_ADDRESS="8000"
    export JPDA_SUSPEND="n"
    export JPDA_OPTS="-agentlib:jdwp=transport=$JPDA_TRANSPORT,address=$JPDA_ADDRESS,server=y,suspend=$JPDA_SUSPEND"
    echo $JPDA_OPTS
    sh catalina.sh jpda start

Create a new debug configuration and attach your debugger to

  • Host = Docker IP Address e.g. 172.17.0.2
  • Port = 8000

Check this section to know how to find out the docker IP Address –  4.3.1 Finding IP Address of the docker container

Note: We have modified the default docker image uploaded in Docker Hub, to always start Liferay in Debug mode.

6.2 Docx4J document import fails

We were using Docx4J for document import.
However the document import was failing due to the following error in the Docx4J code

Caused by: java.lang.NullPointerException 
       at org.docx4j.openpackaging.parts.WordprocessingML.ObfuscatedFontPart.deleteEmbeddedFontTempFiles(ObfuscatedFontPart.java:263) 
       at org.docx4j.openpackaging.parts.WordprocessingML.FontTablePart.deleteEmbeddedFontTempFiles(FontTablePart.java:161) 
       at org.docx4j.convert.out.common.AbstractExporter.export(AbstractExporter.java:91)

After some debugging we found that this issue is due to some missing folders that Docx4J expects.

Run the command below (where liferay is the container name)

docker exec -it liferay sudo mkdir -p "/home/user/.docx4all/temporary embedded fonts"

 

Note: We have modified the default docker image uploaded in Docker Hub, to always create this directory by default

 


With this we conclude this article on how to create and use a docker image of the Liferay application.