Saturday, January 15. 2022
Using test-containers for Java with podman
A very quick entry for today. In my daily job we sometimes use the test-containers project (in its Java version) to add tests that involve docker. The idea is to use containers for complicated tests that cannot be easily mocked up and therefore need the full software image. But I now have a fedora laptop, and it comes with podman instead of docker. They are two different container engines and, although podman was implemented with docker in mind, they are similar but not exactly the same. Therefore the test-containers project needs some tweaks to work in my box. Here I am going to summarize the steps, which are more or less explained in this issue report.
Start the podman service. One of the main differences of podman is that it is daemonless, so we need to start the service in order to have something the test-containers project can talk to.
podman system service -t 0 &
After that just export some environment variables for the java project. The unix socket depends on our user (it is not root based) and some test-containers features should be disabled because they do not work under podman.
export DOCKER_HOST=unix:///run/user/${UID}/podman/podman.sock
export TESTCONTAINERS_CHECKS_DISABLE=true
export TESTCONTAINERS_RYUK_DISABLED=true
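Note that $UID is set by bash (and zsh) but it is not a POSIX variable, so it may be empty in other shells. A small portable sketch that derives the same socket path explicitly (assuming the default rootless location under /run/user):

```shell
# Derive the rootless podman socket path from the current user id.
# Assumes the default rootless location under /run/user; adjust if
# XDG_RUNTIME_DIR points elsewhere on your system.
uid=$(id -u)
export DOCKER_HOST="unix:///run/user/${uid}/podman/podman.sock"
echo "$DOCKER_HOST"
```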
Usually the previous two steps are enough, but remember you can configure several things under the ~/.config/containers directory. For example, the registries you want to search images from by default.
cat ~/.config/containers/registries.conf
unqualified-search-registries=["docker.io", "quay.io"]
That is all. A very short post that I wanted to have in the blog. I am going to need this from time to time, and my memory is very bad for these random things. This way I can always look at this entry and just copy the commands without thinking about what they do or why I need them. Sadly the entry will become obsolete soon, but until then it will save me a lot of time.
Best regards!
Saturday, December 11. 2021
Wildfly bootable jar inside the OpenShift sandbox
Today's entry continues the bootable jar series that was started some time ago. Red Hat currently offers the Developer Sandbox for OpenShift, a preconfigured environment of its cloud product which you can use for training or learning purposes. The sandbox needs a free Red Hat login and also requests your phone number to avoid resource overuse. Throughout the entry the same application that was initially used for the bootable jar solution will be adapted to the sandbox or, more generically, to OpenShift. My starting point was this good article about the maven OpenShift plugin, which explains how to use maven to deploy a wildfly application to OpenShift. As usual, the full process is going to be detailed step by step.
Preparation for the sandbox
Once the sandbox is ready, log into the OpenShift console. The oc command tool needs to be downloaded and added to the system path. Click the question mark icon next to your login name and then the option Command line tools.
Download the command for your architecture, in my case the linux x86_64 option. Uncompress the file and put the oc binary in the path.
tar xvf oc.tar
mv oc ~/bin/
oc help
Now, again inside the console, click the option Copy login command and, in the new page, click the DevSandbox button and then the Display Token link. An oc login command is displayed that can be used to start working from the terminal (more my style), and now the command tool can also be used by the maven plugin.
oc login --token=xxx --server=https://api.sandbox-m2.xxxx.p1.openshiftapps.com:6443
oc whoami
rickyepoderi
oc project
Using project "rickyepoderi-dev" on server "https://api.sandbox-m2.xxxx.p1.openshiftapps.com:6443".
Keycloak/RH-SSO installation
The sample application was a jax-rs web services endpoint that used the keycloak plugin for security and swagger to document the API. So a keycloak server is needed in the deployment. As the sandbox is already provisioned with the Red Hat products, Red Hat Single Sign-On (RH-SSO, the productized version of the upstream keycloak project) can be installed directly.
There is a template called sso74-ocp4-x509-https which performs re-encryption to the https port and therefore no special certificates are needed for the configuration.
oc get templates -n openshift -o name | grep sso74-ocp4-x509-https
template.template.openshift.io/sso74-ocp4-x509-https
The server can be installed by just passing the realm name and the admin username and password.
oc new-app --template=sso74-ocp4-x509-https -p SSO_REALM="ocp" -p SSO_ADMIN_USERNAME="admin" -p SSO_ADMIN_PASSWORD="xxxxx"
Wait for the app to start and ask for the route used by the application. You can access the keycloak console at that https URL.
oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
sso sso-rickyepoderi-dev.apps.sandbox-m2.xxxx.p1.openshiftapps.com sso <all> reencrypt None
Preparing keycloak
The application will need several elements inside keycloak.
Once you are logged in the keycloak console, select the ocp realm and go to the roles tab.
Click Add to create the Users role.
That role is required to call the hello application endpoint because an authorization filter was developed to check for it.
Now time to create the sample user. Go to the users tab.
Fill the data for your user, in my case user ricky is created.
Set a password.
And assign the Users role to the created user.
Finally go to the clients tab and create one for the application.
Just a public client is needed, named bootable-jar-sample, with the correct URL in the sandbox (you can change it later if the URL is not correct).
Application changes
The application is already implemented, but it needs to be modified to be deployed in the cloud.
First the keycloak configuration files should be modified to point to the new realm and URL in the sandbox. If you remember, there are two keycloak.json files (one for the JavaScript part and another for the Java/REST application; I used that trick in the app before).
- src/main/webapp/WEB-INF/keycloak.json
- src/main/webapp/keycloak.json
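Both files end up pointing to the ocp realm and the sso route in the sandbox. A sketch of what the public-client keycloak.json could look like (the auth-server-url host is the sso route obtained from oc get route; the exact values are illustrative):

```json
{
  "realm": "ocp",
  "auth-server-url": "https://sso-rickyepoderi-dev.apps.sandbox-m2.xxxx.p1.openshiftapps.com/auth",
  "ssl-required": "external",
  "resource": "bootable-jar-sample",
  "public-client": true
}
```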
In the index.html the keycloak adapter is updated to download the JS source file from the sandbox URL.
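The change is just the script tag that loads the adapter; a sketch, assuming the same sso route as before (keycloak serves the adapter under the standard /auth/js/keycloak.js path):

```html
<!-- Load the keycloak JS adapter from the RH-SSO route in the sandbox -->
<script src="https://sso-rickyepoderi-dev.apps.sandbox-m2.xxxx.p1.openshiftapps.com/auth/js/keycloak.js"></script>
```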
And finally the interesting part, the pom.xml. All the versions were updated to the latest ones available, among them wildfly to 25.0.1 and the keycloak plugin to 15.0.2. In order to deploy on OpenShift, adding the <cloud/> tag to the wildfly-jar-maven-plugin is a must. That tag prepares the deployment to be cloud aware (it adds the microprofile health layer for checks, configures KUBE_PING if clustering is used, assigns the pod name to jboss.node.name,...).
<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-jar-maven-plugin</artifactId>
  <version>6.1.1.Final</version>
  <configuration>
    <feature-packs>
      <feature-pack>
        <location>wildfly@maven(org.jboss.universe:community-universe)#25.0.1.Final</location>
      </feature-pack>
      <feature-pack>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-adapter-galleon-pack</artifactId>
        <version>15.0.2</version>
      </feature-pack>
    </feature-packs>
    <layers>
      <layer>base-server</layer>
      <layer>logging</layer>
      <layer>jaxrs</layer>
      <layer>keycloak-client-oidc</layer>
    </layers>
    <cloud/> <!-- Remember this!!! -->
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Now the openshift-maven-plugin is needed to create the build image and all the required kubernetes resources, and to deploy everything to the sandbox. This plugin is the other important piece that needs to be added to your application. The following configuration uses a NodePort service, and the route is configured with edge TLS termination (OpenShift exposes https externally while the pods internally use the plain 8080 port) and a Redirect policy (external http is redirected to https).
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.5.1</version>
  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <enricher>
      <config>
        <jkube-service>
          <type>NodePort</type>
        </jkube-service>
        <jkube-openshift-route>
          <generateRoute>true</generateRoute>
          <tlsInsecureEdgeTerminationPolicy>Redirect</tlsInsecureEdgeTerminationPolicy>
          <tlsTermination>edge</tlsTermination>
        </jkube-openshift-route>
      </config>
    </enricher>
  </configuration>
</plugin>
The only missing part is the image we start from, and that information is added using a global configuration property.
<properties>
  <jkube.generator.from>registry.redhat.io/ubi8/openjdk-11:latest</jkube.generator.from>
</properties>
In the end the real work is performed by this plugin. Once the bootable jar is ready, the openshift-maven-plugin adds it on top of an openjdk-11 base and creates the final image for the application. Besides, it creates the service, the deployment-config and the route resources for kubernetes: all the elements needed to incorporate that image into the cloud.
At this point we are one maven command away from running the application inside the sandbox. Just execute the following.
mvn oc:deploy
Demo
It is just one command, but it does a lot of things: first it compiles the sources, bundles the app into a war and creates the bootable jar file; then the new OpenShift plugin constructs the image with the application and all the resources; finally everything is uploaded and deployed into the sandbox. The following video shows exactly that. OpenShift starts out running only the keycloak server. The maven command is executed and all the steps are performed. After waiting for the deployment to finish and the pod to fully start, the route is accessed. The browser is redirected to log in to the keycloak realm. After that the hello endpoint is executed successfully.
Summary
Today's entry explains how to adapt the bootable jar idea and use it in an OpenShift environment. The two main points are adding the cloud tag to the wildfly-jar-maven-plugin (it prepares the bootable jar for OpenShift) and incorporating the openshift-maven-plugin (it performs all the interaction with the sandbox, creating the image and resources and deploying them). The first part is simple; the second one is not so easy, because the OpenShift plugin has lots of options that can be tweaked. The post used the most common ones: the application is deployed and exposed using https in the new route. The complete maven project for the demo app can be downloaded from here. It can be used and/or extended to explore more complex scenarios.
Regards from the cloud!
Sunday, November 21. 2021
Configuring nginx as reverse proxy for wildfly
Another quick entry, this time about nginx and wildfly. Some days ago I needed to configure an nginx web server as a reverse proxy for wildfly, but with the special requirement of being the TLS terminator. So nginx is the one that offers https and, behind it, the wildfly application server works in plain http. Today's entry is going to show that setup and, to make the demo complete, the application server will be configured to understand all the information from the reverse proxy, client certificate and SSL data included. I personally do not use nginx a lot, so I prefer to have this recorded in the blog.
There is a lot of information about this web server and how to configure it for SSL and/or as a reverse proxy. In this case this useful entry was my starting point.
A debian 11.1 was installed and the nginx server was just added via its distribution package.
apt-get install nginx
As https is a requirement, some certificates are needed: one for the nginx server (debian11 files) and one for the client (client1 files). For this point remember that I always follow this old entry with minor modifications. In the file /etc/nginx/sites-enabled/default the certificate and key files are added to the configuration.
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /etc/ssl/debian11.chain.pem;
ssl_certificate_key /etc/ssl/private/debian11.key;
ssl_client_certificate /etc/ssl/cacert.pem;
ssl_verify_client optional;
Note that the file debian11.chain.pem contains the full chain with the final server and CA certificates (cat debian11.pem cacert.pem > debian11.chain.pem). The setup asks for a client certificate but it is optional.
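The order inside the chain file matters: the server certificate must come first, followed by the CA certificate. A minimal sketch with stub files just to show the ordering (with the real PEM files the command is exactly the cat above):

```shell
# Simulate building the chain with stub contents; the server
# certificate goes first, then the CA certificate.
printf 'SERVER-CERT\n' > debian11.pem
printf 'CA-CERT\n' > cacert.pem
cat debian11.pem cacert.pem > debian11.chain.pem
cat debian11.chain.pem
```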
Time to configure nginx as a reverse proxy. For this part it is important to add all the headers that wildfly is going to use to know that it is behind another server. The final configuration for the root location is the following.
location / {
    proxy_pass http://192.168.100.1:8080;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header SSL_CIPHER $ssl_cipher;
    proxy_set_header SSL_SESSION_ID $ssl_session_id;
    proxy_set_header SSL_CLIENT_CERT $ssl_client_cert;
}
The server is configured to proxy everything to the backend server (http://192.168.100.1:8080) and several headers are set up to pass the needed X-Forwarded and SSL information. The wildfly project is mainly developed with apache in mind, therefore the headers mimic those used by the apache web server. Note the SSL_CLIENT_CERT header uses the deprecated ssl_client_cert variable (instead of the recommended ssl_client_escaped_cert) because wildfly understands that format (which is also the apache format). There is more information about the nginx SSL configuration in the project documentation.
At this point the configuration is checked and the service restarted.
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
systemctl restart nginx
On the wildfly side the configuration is easier. Just download the current 25.0.1 zip file and add an admin user.
wget https://github.com/wildfly/wildfly/releases/download/25.0.1.Final/wildfly-25.0.1.Final.zip
unzip wildfly-25.0.1.Final.zip
cd wildfly-25.0.1.Final/bin
./add-user.sh -u admin -p admin
./standalone.sh
The following CLI commands configure the application server to be behind a reverse proxy (it will use the headers to obtain the final addresses and certificates).
./jboss-cli.sh --connect
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=certificate-forwarding, value=true)
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding, value=true)
reload
For testing a simple index.jsp is added as an application. The file displays some interesting information from the JavaEE request.
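The exact index.jsp is not listed in the post; a minimal sketch of the idea, printing the values the proxy headers feed into the Servlet API (the client certificate is exposed under the standard javax.servlet.request.X509Certificate request attribute):

```jsp
<%@ page import="java.security.cert.X509Certificate" %>
<html><body>
  <p>Scheme: <%= request.getScheme() %></p>
  <p>Secure: <%= request.isSecure() %></p>
  <p>Server name: <%= request.getServerName() %></p>
  <p>Remote host: <%= request.getRemoteHost() %></p>
  <%
    // Standard Servlet spec attribute populated by certificate-forwarding.
    X509Certificate[] certs = (X509Certificate[])
        request.getAttribute("javax.servlet.request.X509Certificate");
    if (certs != null && certs.length > 0) {
  %>
  <p>Client certificate: <%= certs[0].getSubjectX500Principal() %></p>
  <% } %>
</body></html>
```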
jar cvf info.war index.jsp
${JBOSS_HOME}/bin/jboss-cli.sh --connect -c "deploy --force info.war"
And that is all. Send a request with the client certificate and check that the information retrieved by wildfly (client certificate included) is displayed correctly by the JSP file.
The client certificate was also imported into my firefox, and the following video shows that the wildfly server is set up in plain http but, when accessing the debian virtual machine, the certificate is requested. Note that all the information is now correct (protocol, server name, remote host, client certificate,...) because it was retrieved from the headers.
curl -v --cacert cacert.pem --cert client1.pem --key private/client1.key https://debian11.demo.kvm/info/index.jsp
Today's entry is a quick setup to use an nginx server as a reverse proxy and TLS terminator for wildfly. Theoretically the wildfly application server is designed to be used with apache and mod-cluster, but any other web server can usually be configured to mimic the same behavior. The important point is using the proper headers to feed the server with the expected values (X-Forwarded and SSL headers).
Proxied regards!