Saturday, November 28. 2020
Can wildfly use BCFIPS provider for SSL running with jdk-11?
This time I am going to talk about Bouncy Castle FIPS (BCFIPS), wildfly and JDK 11. It is known that BCFIPS can be configured with wildfly and JDK 8 (this link points to the EAP documentation but it also works for upstream wildfly). One of the steps requires adding the bouncy castle jar files inside the ext directory of the JDK, but this has not been possible since version 9, because the extension mechanism was removed. On the other hand, the certification status of BCFIPS for JDK 11 is not absolutely clear to me: the faq says that only JDK 7 and 8 are certified, but the roadmap states that version 1.0.2 certifies JRE 11. So I assume that JDK 11 is valid for 1.0.2 and the faq page was simply not updated.
In general there is a big problem with wildfly when the extension mechanism is not available. The JDK needs access to the BC classes in order to configure the provider, and in wildfly the jboss-modules project disallows adding jars globally. But let's go step by step; first, try a simple standalone java class. This is the sample.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpURLConnectionExample {

    public static void main(String[] args) throws Exception {
        URL obj = new URL(args[0]);
        HttpURLConnection con = (HttpURLConnection) obj.openConnection();
        con.setRequestMethod("GET");
        int responseCode = con.getResponseCode();
        System.out.println("GET Response Code: " + responseCode);
        if (responseCode == HttpURLConnection.HTTP_OK) {
            BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
            StringBuffer response = new StringBuffer();
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                response.append(inputLine).append(System.getProperty("line.separator"));
            }
            in.close();
            System.out.println(response.toString());
        } else {
            System.out.println("GET request did not work");
        }
    }
}
The idea is to configure a JDK 11 to use the BCFIPS provider at JVM level. So just download a JDK 11 from AdoptOpenJDK and install it.
wget https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.9.1%2B1/OpenJDK11U-jdk_x64_linux_hotspot_11.0.9.1_1.tar.gz
tar zxvf OpenJDK11U-jdk_x64_linux_hotspot_11.0.9.1_1.tar.gz
After that the JDK needs to be configured to use the BCFIPS provider. For that, and following the documentation, the file ${JAVA_HOME}/conf/security/java.security is modified to add the new providers.
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=com.sun.net.ssl.internal.ssl.Provider BCFIPS
security.provider.3=SUN
security.provider.4=SunRsaSign
security.provider.5=SunEC
security.provider.6=SunJCE
security.provider.7=SunJGSS
security.provider.8=SunSASL
security.provider.9=XMLDSig
security.provider.10=SunPCSC
security.provider.11=JdkLDAP
security.provider.12=JdkSASL
security.provider.13=SunPKCS11
The BCFIPS provider is added in the first position and the ssl one is configured in the second position to use it. The rest of the providers present in the default configuration are just moved after them (the original SunJSSE entry is removed because it is now at position 2 and configured to use BCFIPS). In order to execute the previous java file we need the default cacerts file converted to PKCS12 format, as BCFIPS only accepts this format.
cp ${JAVA_HOME}/lib/security/cacerts .
keytool -importkeystore -srckeystore cacerts -destkeystore cacerts.p12 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass changeit -deststorepass changeit
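To sanity-check the converted store before wiring it into anything, it can be opened programmatically as PKCS12. A quick throwaway class (my own helper, nothing official) like this works:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class CheckStore {
    public static void main(String[] args) throws Exception {
        // args[0] = path to the converted store (e.g. cacerts.p12),
        // args[1] = store password (changeit in the example above)
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(args[0])) {
            ks.load(in, args[1].toCharArray());
        }
        System.out.println("entries: " + ks.size());
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias);
        }
    }
}
```

If the load succeeds and the CA aliases are listed, the conversion was correct and the store can be used as a trust store.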
Now compile and execute the sample class just adding the BC jar inside the classpath.
${JAVA_HOME}/bin/javac HttpURLConnectionExample.java
${JAVA_HOME}/bin/java -cp bc-fips-1.0.2.jar:. -Djavax.net.debug=all -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=cacerts.p12 HttpURLConnectionExample https://blogs.nologin.es/rickyepoderi/
And it works. BCFIPS can be used successfully as a client and we can connect and retrieve the HTTPS page with it. But, as commented before, this will not work with wildfly, because the Java EE server uses jboss-modules, which prevents access from the normal java.base module to the BC jar. There is no way of adding the file to the JDK module classpath. With version 8 this was solved using the ext directory, but now I do not know how to overcome it. My only idea was adding the jar to the boot classpath with the following option.
-Xbootclasspath/a:/path/to/bc-fips-1.0.2.jar
Sadly this idea does not work. The BC provider code uses in several places a call similar to XXX.class.getClassLoader().loadClass("a.internal.jdk.class") to load classes from the SUN provider. When using the boot classpath the getClassLoader method returns null, which breaks the provider. I did a quick hack just replacing that call with Class.forName. More or less I changed the following classes.
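The difference between the two loading styles can be reproduced with plain JDK classes, no BC involved. Classes loaded by the bootstrap loader (which is what -Xbootclasspath/a gives you) report a null class loader, so the XXX.class.getClassLoader().loadClass(...) pattern blows up with a NullPointerException, while Class.forName resolves the class fine:

```java
public class BootClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // String lives in java.base, loaded by the bootstrap loader,
        // so getClassLoader() returns null -- calling loadClass() on
        // that result is exactly what breaks the provider.
        System.out.println("String loader: " + String.class.getClassLoader());

        // Class.forName resolves through the caller and keeps working.
        Class<?> c = Class.forName("java.lang.String");
        System.out.println("Resolved: " + c.getName());
    }
}
```

This is only an illustration of the mechanism; the actual BC classes load sun.security internal classes, but the failure mode is the same.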
find . -type f -exec grep "Class.forName(" {} \; -print
Class def = Class.forName(className);
Class provClass = Class.forName("sun.security.jca.Providers");
./org/bouncycastle/jcajce/provider/BouncyCastleFipsProvider.java
Class def = Class.forName("sun.security.internal.spec.TlsPrfParameterSpec");
Class def = Class.forName("sun.security.internal.spec.TlsRsaPremasterSecretParameterSpec");
Class def = Class.forName("sun.security.internal.spec.TlsPrfParameterSpec");
Class def = Class.forName("sun.security.internal.spec.TlsRsaPremasterSecretParameterSpec");
./org/bouncycastle/jcajce/provider/ProvSunTLSKDF.java
Class def = Class.forName(className);
./org/bouncycastle/jcajce/provider/ClassUtil.java
return Class.forName(className);
./org/bouncycastle/jcajce/provider/GcmSpecUtil.java
Class def = Class.forName(className);
./org/bouncycastle/jcajce/provider/BaseSingleBlockCipher.java
Class.forName(className);
With the hack in place the previous java class works successfully using the BCFIPS provider configured inside the boot classpath. Note that the command now uses the sources directory with the modified classes.
${JAVA_HOME}/bin/java -Xbootclasspath/a:/path/to/bc-fips-1.0.2-sources -Djavax.net.debug=all -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=cacerts.p12 HttpURLConnectionExample https://blogs.nologin.es/rickyepoderi/
Moving the same hack to wildfly also works. These are the configuration steps:
Modify the standalone.conf to include the modified provider:
JAVA_OPTS="$JAVA_OPTS -Xbootclasspath/a:/path/to/bc-fips-1.0.2-sources"
Create a certificate in the ${WILDFLY_HOME}/standalone/configuration folder and copy the previous cacerts.p12 to it:
keytool -genkeypair -alias localhost -keyalg RSA -keysize 2048 -validity 365 -keystore ${WILDFLY_HOME}/standalone/configuration/keystore.p12 -storetype PKCS12 -dname "CN=localhost" -storepass changeit -ext SAN=dns:localhost
cp cacerts.p12 ${WILDFLY_HOME}/standalone/configuration/
Start the server using the configured JAVA_HOME.
export JAVA_HOME="/path/to/jdk-11.0.9.1+1"
./standalone.sh
Under the CLI command line, configure elytron to use all the previous stuff.
/subsystem=elytron/key-store=localhost:add(type=pkcs12, relative-to=jboss.server.config.dir, path=keystore.p12, credential-reference={clear-text=changeit})
/subsystem=elytron/key-store=ca:add(type=pkcs12, relative-to=jboss.server.config.dir, path=cacerts.p12, credential-reference={clear-text=changeit})
/subsystem=elytron/key-manager=localhost-manager:add(key-store=localhost, credential-reference={clear-text=changeit})
/subsystem=elytron/trust-manager=ca-manager:add(key-store=ca)
/subsystem=elytron/server-ssl-context=localhost-context:add(key-manager=localhost-manager, trust-manager=ca-manager)
batch
/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)
/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=localhost-context)
run-batch
Change the ApplicationRealm to also use the p12 certificates:
/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=keystore-provider, value=PKCS12)
/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=alias, value=localhost)
/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=keystore-path, value=keystore.p12)
/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=keystore-password, value=changeit)
/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=key-password, value=changeit)
And it works. Now the wildfly server is using BCFIPS. But obviously the solution is extremely hacky and unmanageable.
So the summary is that I could not add Bouncy Castle FIPS to wildfly using JDK 11 at the moment (I am talking about adding BCFIPS for SSL; using the provider programmatically works). BCFIPS seems to be certified with JRE 11, but the necessary jars cannot be added to the classpath when using wildfly and jboss-modules. My idea of using the boot classpath failed because of how BCFIPS loads the sun internal classes. So no luck; sometimes you get the bear and sometimes the bear gets you, and today it was the second option. Anyway I wanted to back up my tests in detail here. If I do not summarize them in the blog I will forget the steps that were tried and, more importantly, what the root problem was.
Regards!
Saturday, September 19. 2020
New DNIe cards need different data for the secure channel
Sometimes I think that the DNIe situation is just recurring; I feel like Bill Murray in the film Groundhog Day. During the last years the Spanish electronic ID (DNIe) has been working well inside the OpenSC project (since the DNIe v3.0 was integrated around 2017). I personally had not received any report of problems with it. But it happens that my personal card expired in August (the previous renewal was because the chip stopped working, and the expiration date was not extended that time). I went to the police station and, after coming back home, my fresh new smartcard was not working with OpenSC. Again. The worst thing was that the official RPM package for linux distributed by the government failed with my card too.
Looking into the issue inside the OpenSC code, the initial problem was that a certificate used to establish the secure channel (the secure communication with the card, similar to what https is to http) failed validation. The certificate for the intermediate CA is placed in the card itself and, in order to check that it was not altered, it is verified against a public key (the public key of the CA that issues those certificates for the Spanish ID cards). That key (among other things needed for the secure channel, like some certificates, other public and private keys, key references inside the card,...) is just hardcoded inside the code. Therefore the verification failure could only mean that the Spanish Police Department (DGP) had changed that CA. The first thing I tried was just commenting out that verification, but the secure channel creation failed in a later step. So it was crystal clear that the whole CA structure needed for the secure channel had been replaced. I searched the internet for the new data. Previously that information was published by the Spanish institutions inside the sources of some public projects, for example jmulticard or the FNMT MultiPKCS11 sources (code for the official PKCS11 package; the zip file can be downloaded at the end of the page). But nothing: I found absolutely nothing, only the previous data, the one that now failed with my new card.
The official code stores the configuration in ASN.1 format inside a fixed global byte array variable. So, doing some readelf/objdump magic over the binary library, I got the full bytes and started a slow and manual parsing. I detected that there were two configurations there. I was very skeptical about this (in the end the official package was not working for me either, so why were there two configurations?). I continued parsing the ASN.1 and obtained, from both configurations, the modulus of the public key whose verification was failing. One was the same value used in the current OpenSC code, but the other key was a different array. So I backed up the certificate from the card (the one being validated) and created a simple program to verify it with the new key found in the second configuration. And surprisingly it worked. It was the valid public key whose private counterpart had signed the intermediate CA certificate in the card. That convinced me to finish the whole parsing and replace all the data for the secure channel creation with the new values. In two days I could finish the task and OpenSC was working again with my new DNIe. Luckily, changing the data used to establish the secure channel was enough; no more changes were needed.
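The verification that was failing is, at its core, an RSA signature check against a hardcoded public key. A self-contained illustration of that operation, using a freshly generated key pair as a stand-in for the real DGP keys (which are obviously not reproduced here):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class VerifyDemo {

    // Sign some bytes with a private key and verify them with the public
    // one -- the same kind of check OpenSC performs on the card certificate.
    static boolean signAndVerify() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        byte[] data = "intermediate CA certificate bytes".getBytes();

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(data);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        // With the matching key this prints true; with a mismatched key
        // (a new DNIe against the old hardcoded modulus) it would be false.
        System.out.println("valid: " + signAndVerify());
    }
}
```

When the hardcoded modulus does not match the key that actually signed the card certificate, this check returns false, which is exactly the failure I was seeing.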
More or less at that point I was notified that there were some issues reported in the OpenSC project about new DNIe cards not working. The main one is issue 2105 and I started to share what I had at that moment. The final conclusion is that if you have a DNIe with an IDESP equal to or greater than BMP100001, the new CA structure is used and current OpenSC does not work with it. The command dnie-tool -i displays the IDESP information for your card. A PR was sent to the project to manage both configurations (the old data should be maintained for previous cards) and it seems to be working for testers. Let's see if the maintainers accept it promptly.
In summary, if you have a new DNIe card and you want to use OpenSC to access some governmental sites with it, it will not work for you. The situation is the same with the official package. Only the Windows distribution seems to be working at this moment. If you are in a hurry you can try out the PR and compile it by yourself. As a personal comment I am going to say that maintaining the DNIe inside OpenSC this way is really complicated. The DGP (the Spanish institution in charge of the DNIe) continues to handle things on linux in utterly the wrong way. This time I was lucky: first, I renewed my DNIe at the perfect time to discover the issue and, second, finding the new data in the binary distribution was just a long shot. In a normal situation this would have meant waiting for the publication of the new data (a direct DGP announcement or some code added to the previously mentioned projects) and trying to fix it without a new card, depending on the community for the testing. As always, remember that I am not related at all to the DGP or the FNMT. I am just a frustrated linux user who needed to use the DNIe some time ago. And I am starting to get really tired of all this.
Regards!
Saturday, November 30. 2019
SAML assertion replay in keycloak
Today's entry is again about keycloak, but this time I am going to use the SAML protocol. SAML is a quite old web Single Sign On (SSO) protocol in which XML information is signed and exchanged between the peers. The entry is motivated by the just released version 8.0.0, in which the SAML assertion can be retrieved from the logged principal and replayed. My idea was to test this feature using the assertion to call a CXF endpoint protected with Web Services Security (WSS). The endpoint will be configured to use the SAML assertion to validate the user. If you remember, a previous series about CXF/WSS was presented in the blog, but using certificates instead of SAML.
As usual the entry summarizes the steps I followed to perform this PoC (Proof of Concept) in detail.
Download and install the keycloak server.
wget https://downloads.jboss.org/keycloak/8.0.0/keycloak-8.0.0.zip
unzip keycloak-8.0.0.zip
cd keycloak-8.0.0/bin
./standalone.sh
Go to the default location (http://localhost:8080) and create the initial admin user.
Now the server will be configured to use a self-signed certificate (secure https is a must for SAML). Create the server and trusted key-stores.
cd ../standalone/configuration
keytool -genkeypair -keystore keystore.jks -dname "CN=localhost, OU=test, O=test, L=test, C=test" -keypass XXXX -storepass XXXX -keyalg RSA -alias localhost -validity 10000 -ext SAN=dns:localhost,ip:127.0.0.1
keytool -export -keystore keystore.jks -alias localhost -file localhost.cer
keytool -import -trustcacerts -alias localhost -file localhost.cer -keystore cacerts -storepass changeit
Configure the server to use the previous certificate using the CLI interface:
cd ../../bin
./jboss-cli.sh --connect
/subsystem=elytron/key-store=localhost:add(type=jks, relative-to=jboss.server.config.dir, path=keystore.jks, credential-reference={clear-text=XXXX})
/subsystem=elytron/key-manager=localhost-manager:add(key-store=localhost, alias-filter=localhost, credential-reference={clear-text=XXXX})
/subsystem=elytron/server-ssl-context=localhost-context:add(key-manager=localhost-manager, protocols=["TLSv1.2"])
batch
/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)
/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=localhost-context)
run-batch
We also configure the self-signed certificate as trusted for the whole JVM using a system property.
/system-property=javax.net.ssl.trustStore:add(value="${jboss.server.config.dir}/cacerts")
Perform the same exact steps for wildfly (installation and certificate, use the same keystores, because both servers are going to run in the same localhost hostname). The wildfly server will be started with an offset of 10000.
./standalone.sh -Djboss.socket.binding.port-offset=10000
At this point we have a keycloak server on port 8443 and a wildfly server on port 18443. Both use https and the same certificate, and they trust each other. So now the keycloak adapters for SAML should be installed in the wildfly server.
wget https://downloads.jboss.org/keycloak/8.0.0/adapters/saml/keycloak-saml-wildfly-adapter-dist-8.0.0.zip
cd ${WILDFLY_HOME}
unzip /path/to/keycloak-saml-wildfly-adapter-dist-8.0.0.zip
cd bin
./standalone.sh -Djboss.socket.binding.port-offset=10000
./jboss-cli.sh --connect controller=localhost:19990 --file=adapter-elytron-install-saml.cli
Now it is time to configure the SAML client. The idea is simple: we will have a SAML protected application using the keycloak adapter. That application will call a CXF endpoint that will be configured to process the SAML assertion and validate the user. For simplicity I am going to use the same application (the web service endpoint will be located in the same app). Go to the keycloak console, select clients and create a new SAML client. The client ID should be the endpoint location https://localhost:18443/keycloak-cxf-saml/echo-service/echo (later I will explain this limitation). Set the option Sign Assertions to ON; this way the assertion is also signed and it can be replayed in a secure way. My client settings are presented below.
For the configuration of the CXF/wss4j endpoint, the realm certificate will be needed. So go to Realm Settings, select Keys tab and click on the Certificate button of the RSA key. Copy the certificate value and create a file server.cer with the typical certificate header and footer.
-----BEGIN CERTIFICATE-----
<copied certificate from keycloak console>
-----END CERTIFICATE-----
And finally import it into a JKS as a trusted certificate. This will be the store that should be configured later to validate SAML signatures by the web service endpoint.
keytool -import -trustcacerts -alias saml -file server.cer -keystore server.jks -storepass YYYY
Let's start with the development. The first thing to do is configuring the keycloak SAML SSO. For that just obtain the initial template from the console. Go again to clients, select our client and click on tab Installation. Choose the option Keycloak SAML Adapter keycloak-saml.xml and a template configuration can be downloaded. This configuration should be placed in the file WEB-INF/keycloak-saml.xml inside the WAR application bundle. I customized the configuration file like below.
<keycloak-saml-adapter>
    <SP entityID="https://localhost:18443/keycloak-cxf-saml/echo-service/echo"
        sslPolicy="ALL"
        keepDOMAssertion="true"
        nameIDPolicyFormat="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
        logoutPage="/logout.jsp">
        <Keys>
            <Key signing="true">
                <PrivateKeyPem>
                    MII...
                </PrivateKeyPem>
                <CertificatePem>
                    MII...
                </CertificatePem>
            </Key>
        </Keys>
        <IDP entityID="idp"
             signatureAlgorithm="RSA_SHA256"
             signatureCanonicalizationMethod="http://www.w3.org/2001/10/xml-exc-c14n#">
            <SingleSignOnService signRequest="true"
                                 validateResponseSignature="true"
                                 validateAssertionSignature="false"
                                 requestBinding="POST"
                                 bindingUrl="https://localhost:8443/auth/realms/master/protocol/saml"/>
            <SingleLogoutService signRequest="true"
                                 signResponse="true"
                                 validateRequestSignature="true"
                                 validateResponseSignature="true"
                                 requestBinding="POST"
                                 responseBinding="POST"
                                 postBindingUrl="https://localhost:8443/auth/realms/master/protocol/saml"
                                 redirectBindingUrl="https://localhost:8443/auth/realms/master/protocol/saml"/>
        </IDP>
    </SP>
</keycloak-saml-adapter>
The web.xml is configured to protect the application but not the endpoint. The CXF web service is secured via wss4j, so no web security should be applied to it.
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Protect all application</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>*</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>The WS endpoint is public</web-resource-name>
            <url-pattern>/echo-service/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>KEYCLOAK-SAML</auth-method>
        <realm-name>this is ignored currently</realm-name>
    </login-config>
    <security-role>
        <description>Role required to log in to the Application</description>
        <role-name>*</role-name>
    </security-role>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>
The application is configuring the KEYCLOAK login to use the SSO. The full application (/*) is protected, but the WS endpoint (/echo-service/*) is excluded (everyone can access the endpoint at web level). Besides any authenticated user can access the application (role *) and secure communication (https) is compulsory (transport is defined as confidential).
Time to create the web service endpoint. This part is very similar to the previous entry about WSS and certificates that I commented at the beginning. So you can review it for more information about this subject, because it is really a bit complicated. The simple echo web service is developed like this.
@Stateless
@WebService(name = "echo", targetNamespace = "http://es.rickyepoderi.sample/ws", serviceName = "echo-service")
@Policy(placement = Policy.Placement.BINDING, uri = "WssSamlV20Token11.xml")
@SOAPBinding(style = SOAPBinding.Style.RPC)
@EndpointConfig(configFile = "WEB-INF/jaxws-endpoint-config.xml", configName = "Custom WS-Security Endpoint")
public class Echo {

    @WebMethod
    public String echo(String input) {
        Message message = PhaseInterceptorChain.getCurrentMessage();
        SecurityContext context = message.get(SecurityContext.class);
        Principal caller = null;
        if (context != null) {
            caller = context.getUserPrincipal();
        }
        return (caller == null ? "null" : caller.getName()) + " -> " + input;
    }
}
The endpoint is just an echo service but it obtains the user from CXF. If you check it, the web service is configured with a specific WSS policy WssSamlV20Token11.xml file and a configuration file WEB-INF/jaxws-endpoint-config.xml. The next points deal with those files.
- The most complicated part is creating the policy to request a SAML assertion for the web service. For that I checked some of the CXF tests in order to obtain samples of SAML policies and, finally, the following WssSamlV20Token11.xml file is used.
<?xml version="1.0" encoding="UTF-8" ?>
<wsp:Policy wsu:Id="SecurityPolicy"
            xmlns:wsp="http://www.w3.org/ns/ws-policy"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
            xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:TransportBinding>
                <wsp:Policy>
                    <sp:TransportToken>
                        <wsp:Policy>
                            <sp:HttpsToken>
                                <wsp:Policy/>
                            </sp:HttpsToken>
                        </wsp:Policy>
                    </sp:TransportToken>
                    <sp:Layout>
                        <wsp:Policy>
                            <sp:Lax/>
                        </wsp:Policy>
                    </sp:Layout>
                    <sp:AlgorithmSuite>
                        <wsp:Policy>
                            <sp:Basic256/>
                        </wsp:Policy>
                    </sp:AlgorithmSuite>
                </wsp:Policy>
            </sp:TransportBinding>
            <sp:SupportingTokens>
                <wsp:Policy>
                    <sp:SamlToken sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
                        <wsp:Policy>
                            <sp:WssSamlV20Token11/>
                        </wsp:Policy>
                    </sp:SamlToken>
                </wsp:Policy>
            </sp:SupportingTokens>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>
The policy is requesting https protocol and a SAML version 2.0 token. I know this part is horribly complicated but this is WSS security, not an easy world.
In order to configure the validation of the assertion the file jaxws-endpoint-config.xml is provided.
<?xml version="1.0" encoding="UTF-8"?>
<jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:javaee="http://java.sun.com/xml/ns/javaee"
              xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd">
    <endpoint-config>
        <config-name>Custom WS-Security Endpoint</config-name>
        <property>
            <property-name>ws-security.signature.properties</property-name>
            <property-value>server.properties</property-value>
        </property>
    </endpoint-config>
</jaxws-config>
The server.properties contains the properties to access the keystore to validate the signature of the SAML assertion.
org.apache.wss4j.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.wss4j.crypto.merlin.keystore.type=jks
org.apache.wss4j.crypto.merlin.keystore.password=YYYY
org.apache.wss4j.crypto.merlin.keystore.file=server.jks
And that server.jks is just the previous keystore created in step 8 using the keycloak certificate in the realm. So, in summary, the endpoint is configured to request a SAML token and the certificate used by keycloak is configured as trusted for the validation. This way the wss4j implementation can check the SAML assertion received and validate its signature. If everything is OK the user will be recovered by the echo service and returned.
- And here comes the final part: how is the SAML assertion retrieved and used to call the endpoint? For that I created a simple EchoServlet that gets the assertion from the special keycloak principal and calls the endpoint.
WSClient client = new WSClient(request);
out.println(client.callEcho(((SamlPrincipal) request.getUserPrincipal()).getAssertionDocument(), input));
The actual code in the servlet is a bit more complicated because I decided to check the validity of the assertion. The SAML assertion usually has some time constraints so that the same assertion cannot be used forever. If the assertion is expired the application forces a re-login of the user. But I decided not to include the details here, to avoid complicating the explanation even more.
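Such a validity check can be sketched with plain DOM, reading the NotOnOrAfter attribute from the Conditions element and comparing it with the current time. The sample assertion and the class below are hypothetical (not the servlet's actual code), just an illustration of the idea using the SAML 2.0 schema names:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class AssertionExpiryCheck {

    static final String SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion";

    // Hypothetical sample assertion whose NotOnOrAfter is long past
    static final String SAMPLE =
            "<saml:Assertion xmlns:saml=\"" + SAML_NS + "\">"
            + "<saml:Conditions NotBefore=\"2019-01-01T00:00:00Z\""
            + " NotOnOrAfter=\"2019-01-01T00:01:00Z\"/>"
            + "</saml:Assertion>";

    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        return dbf.newDocumentBuilder().parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    // true when NotOnOrAfter is now or in the past
    static boolean isExpired(Document assertion) {
        Element conditions = (Element) assertion
                .getElementsByTagNameNS(SAML_NS, "Conditions").item(0);
        if (conditions == null) {
            return false; // no time restrictions at all
        }
        Instant notOnOrAfter = Instant.parse(conditions.getAttribute("NotOnOrAfter"));
        return !Instant.now().isBefore(notOnOrAfter);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("expired: " + isExpired(parse(SAMPLE)));
    }
}
```

When isExpired returns true, the servlet would invalidate the session and redirect the user so that the SAML dance produces a fresh assertion.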
The CXF implementation for SAML uses a callback handler that should provide the assertion to be sent by the client (the handler fills a SAMLCallback with the assertion). In this case it is extremely easy because the assertion is just there inside the principal. So I created a KeycloakSamlCallbackHandler that just wraps the assertion and gives it to the CXF system in order to attach it to the SOAP message.
public class KeycloakSamlCallbackHandler implements CallbackHandler {

    Document assertion;

    public KeycloakSamlCallbackHandler(Document assertion) {
        this.assertion = assertion;
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        if (callbacks != null) {
            for (Callback callback1 : callbacks) {
                if (callback1 instanceof SAMLCallback) {
                    SAMLCallback callback = (SAMLCallback) callback1;
                    callback.setAssertionElement(assertion.getDocumentElement());
                }
            }
        }
    }
}
And the WSClient just puts the callback to the call context. This way the CXF implementation can retrieve the SAML assertion and add it to the SOAP message.
public String callEcho(Document assertion, String input) {
    EchoService service = new EchoService(url);
    Echo echo = service.getEchoPort();
    // Properties for WS-Security configuration
    ((BindingProvider) echo).getRequestContext().put(SecurityConstants.SAML_CALLBACK_HANDLER,
            new KeycloakSamlCallbackHandler(assertion));
    // call the endpoint
    return echo.echo(input);
}
Here it is important to set the option keepDOMAssertion to true, because this way the DOM document of the original assertion is stored in the SAML principal and can be recovered by the application to replay it. More information about the SAML configuration for adapters can be found in the keycloak documentation.
And that is all. A very long and complicated setup, but it shows that you can replay a SAML assertion. I decided to use CXF/wss4j because it is a completely different SAML implementation (it uses opensaml internally). Here is a video that shows that it really works. When I access the application the browser is redirected to the keycloak login page. The typical SAML dance is performed and finally the browser accesses the application index. The remote user, roles and even the assertion are presented. Check that the assertion is signed and that it has some restrictions (time and audience constraints). When the web service is called, the echo works and the message is returned with the user correctly identified by the CXF implementation.
But there are some issues here. At least two new features are needed in order to have a proper assertion replay. The first problem is the time restrictions that I commented on before. In keycloak the different times are obtained from the Realm Settings, inside the Tokens tab. The lifespans used are Access Token Lifespan and Client login timeout (the SSO Session Max is also used, but this one is very long by default and therefore it is not problematic). Those two times are usually very short (one minute) because of OIDC, and they are too short for SAML. So if you really need to use assertion replay those values need to be increased to cover your needs. The real problem is that SAML clients cannot override the realm settings (OIDC clients can define a specific access token lifespan).
The second issue is the audience. A SAML assertion can also define which endpoints are allowed to use it. This is done with the audience tag (a list of URLs that are allowed to consume the assertion). By default the keycloak server constructs the assertion with the audience limited to the client ID (only that client can use the assertion). This severely limits assertion replay. If you remember, in step 7 the client was created with a specific ID, which is exactly the URL of the echo endpoint. That was a very nasty trick: it makes both (the app and the CXF endpoint) use the same ID, so both pass the audience validation. But obviously if you wanted to send the assertion to a second endpoint it would fail; the implementation would check the audience constraint and complain that its own URL is not in the list. Maybe CXF/wss4j can be configured to not check the audience, but that is weird; the audience is there for a reason.
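The audience validation can be pictured as a simple membership check: collect the Audience URIs under AudienceRestriction and test whether the endpoint's own URL is among them. A hedged sketch with plain DOM (the sample data and class name are mine, not the actual opensaml code):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class AudienceCheck {

    static final String SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion";

    // Hypothetical assertion restricted to the echo endpoint URL
    static final String SAMPLE =
            "<saml:Assertion xmlns:saml=\"" + SAML_NS + "\">"
            + "<saml:Conditions><saml:AudienceRestriction>"
            + "<saml:Audience>https://localhost:18443/keycloak-cxf-saml/echo-service/echo</saml:Audience>"
            + "</saml:AudienceRestriction></saml:Conditions>"
            + "</saml:Assertion>";

    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        return dbf.newDocumentBuilder().parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    // true when the endpoint URL appears in the Audience list
    // (or when the assertion carries no audience restriction at all)
    static boolean isAllowed(Document assertion, String endpointUrl) {
        NodeList audiences = assertion.getElementsByTagNameNS(SAML_NS, "Audience");
        if (audiences.getLength() == 0) {
            return true;
        }
        List<String> allowed = new ArrayList<>();
        for (int i = 0; i < audiences.getLength(); i++) {
            allowed.add(audiences.item(i).getTextContent().trim());
        }
        return allowed.contains(endpointUrl);
    }

    public static void main(String[] args) throws Exception {
        Document doc = parse(SAMPLE);
        System.out.println(isAllowed(doc,
                "https://localhost:18443/keycloak-cxf-saml/echo-service/echo"));
        System.out.println(isAllowed(doc, "https://other-host/other-endpoint"));
    }
}
```

With the nasty trick from step 7, both the app and the endpoint share the same URL, so the first check passes; a second endpoint with a different URL would fail like the second check.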
Therefore I filed two new feature requests for keycloak (JIRA 12000 and 12001) and I am working on them. There is room for other improvements here but, at least, with those two new settings the assertion replay can be used. You can download the full maven project for the PoC application from here.
Best regards!