Saturday, September 19. 2020
New DNIe cards need different data for the secure channel

Sometimes I think that the DNIe situation just keeps recurring; I feel like Bill Murray in the film Groundhog Day. Over the last years the Spanish electronic ID (DNIe) has been working well inside the OpenSC project (since DNIe v3.0 was integrated around 2017). I personally had not received any report of problems with it. But it happens that my personal card expired in August (the previous renewal was because the chip stopped working, and the expiration date was not extended that time). I went to the police station and, after coming back home, my fresh new smartcard was not working with OpenSC. Again. The worst thing was that the official RPM package for Linux distributed by the government failed with my card too.
Looking into the issue inside the OpenSC code, the initial problem was that a certificate used to establish the secure channel (secure communication with the card, similar to what https is to http) failed when it was validated. The certificate for the intermediate CA is placed in the card itself and, in order to validate that the certificate was not altered, it is verified against a public key (the public key of the CA that issues those certificates for the Spanish ID cards). That key (among other things needed for the secure channel, like some certificates, other public and private keys, key references inside the card,...) is just hardcoded inside the code. Therefore that verification failure only meant that the Spanish Police Department (DGP) had changed that CA. The first thing I tried was just commenting out that verification, but the secure channel creation failed in a later step. So it was crystal clear that the whole CA structure needed for the secure channel had been replaced. I searched over the internet for the data. Previously that information was published by the Spanish institutions inside the sources of some public projects, for example jmulticard or the FNMT MultiPKCS11 sources (code for the official PKCS11 package, the zip file can be downloaded at the end of the page). But nothing: I found absolutely nothing, only the previous data was there, the data that now failed with my new card.
The official code stores the configuration in ASN.1 format inside a fixed global byte array variable. So, doing some readelf/objdump magic over the binary library, I got the full bytes and started a slow and manual parsing. I detected that there were two configurations there. I was very skeptical about this (after all, the official package was not working for me either), but why were there two configurations? I continued parsing the ASN.1 and obtained, from both confs, the modulus for the public key whose verification was failing. One was the same value used in the current OpenSC code, but the other key was a different array. So I backed up the certificate from the card (the one being validated) and created a simple program to verify it with the new key found in the second configuration. And surprisingly it worked. It was the valid public key whose private counterpart signed the intermediate CA certificate in the card. That convinced me to finish the whole parsing and replace all the data for the secure channel creation with this new one. Within two days I had finished the task and OpenSC was working again with my new DNIe. At least, changing the data used to establish the secure channel was enough; no more changes were needed.
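At its core, the check that was failing is a plain RSA signature verification: the value signed by the CA's private key is recovered by raising the signature to the public exponent modulo the CA modulus, and compared against the expected digest. A minimal sketch of that math, using a tiny textbook key instead of the real DGP modulus (all numbers here are toys for illustration, not the actual card data):

```python
# Toy illustration of the RSA check behind the certificate validation:
# recovered = signature^e mod n must match the expected (padded) digest.
# The real code uses the hardcoded DGP public key; these numbers are toys.

def rsa_recover(signature: int, e: int, n: int) -> int:
    """Recover the signed value from an RSA signature with public key (e, n)."""
    return pow(signature, e, n)

# Tiny textbook RSA key pair (p=61, q=53 -> n=3233, e=17, d=2753).
n, e, d = 3233, 17, 2753

digest = 65                    # stands in for the certificate digest
signature = pow(digest, d, n)  # what the CA's private key would produce

# Verification succeeds only when the public key matches the signer.
print(rsa_recover(signature, e, n) == digest)   # True with the right key
print(rsa_recover(signature, 19, n) == digest)  # False with a wrong key
```

When the DGP swapped the CA, the hardcoded key in OpenSC became the "wrong key" case above, so the recovered value no longer matched and the validation failed.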
More or less at that point I was notified that there were some issues reported in the OpenSC project about new DNIe cards not working. The main one is issue 2105, and I started to share what I had at that moment. The final conclusion is that if you have a DNI with an IDESP equal to or greater than BMP100001, the new CA structure is used and current OpenSC does not work with it. The command dnie-tool -i displays the IDESP information for your card. A PR was sent to the project to manage both configurations (the old data should be maintained for previous cards) and it seems to be working for testers. Let's see if the maintainers accept it promptly.
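The cutoff itself is just a comparison of the card's IDESP serial against BMP100001. Assuming the IDESP is a fixed-length alphanumeric serial (which makes plain lexicographic comparison equivalent to ordering by issue sequence), the selection logic can be sketched like this; the function name is mine, not the actual OpenSC code:

```python
# Sketch of the configuration switch between old and new secure-channel
# data, assuming a lexicographic comparison of the fixed-length IDESP.

NEW_CA_CUTOFF = "BMP100001"

def uses_new_ca(idesp: str) -> bool:
    """Return True when the card needs the new secure-channel CA data."""
    return idesp >= NEW_CA_CUTOFF

print(uses_new_ca("BMP100001"))  # True: first serial with the new CA
print(uses_new_ca("BMP099999"))  # False: old CA structure
```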
In summary, if you have a new DNIe card and you want to use OpenSC to access some governmental sites with it, it will not work for you. The situation is the same with the official package; only the Windows distribution seems to be working at this moment. If you are in a hurry you can try out the PR and compile it yourself. As a personal comment, maintaining the DNIe inside OpenSC this way is really complicated. The DGP (the Spanish institution in charge of the DNIe) continues doing things on Linux utterly in the wrong direction. This time I was lucky: first, I renewed my DNIe at the perfect time to discover the issue and, second, finding the new data in the binary distribution was just a long shot. In a normal situation this would have meant waiting for the publication of the new data (a direct DGP announcement or some code added to the previously mentioned projects), trying to fix it without a new card, and relying on the community for testing. As always, remember that I am not related at all to the DGP or the FNMT. I am just a frustrated Linux user who needed to use the DNIe some time ago. And I am starting to be really tired of all this.
Regards!
Saturday, November 30. 2019
SAML assertion replay in keycloak


Today's entry is again about keycloak, but this time I am going to use the SAML protocol. SAML is a veteran web Single Sign-On (SSO) protocol in which signed XML information is exchanged between the peers. The entry is motivated by the just released version 8.0.0, in which the SAML assertion can be retrieved from the logged-in principal and replayed. My idea was to test this feature by using the assertion to call a CXF endpoint protected with Web Services Security (WSS). The endpoint will be configured to use the SAML assertion to validate the user. If you remember, a previous series about CXF/WSS was presented in the blog, but using certificates instead of SAML.
As usual the entry summarizes the steps I followed to perform this PoC (Proof of Concept) in detail.
Download and install the keycloak server.
wget https://downloads.jboss.org/keycloak/8.0.0/keycloak-8.0.0.zip
unzip keycloak-8.0.0.zip
cd keycloak-8.0.0/bin
./standalone.sh
Go to the default location (http://localhost:8080) and create the initial admin user.
Now the server will be configured to use a self-signed certificate (secure https is a must for SAML). Create the server and trusted key-stores.
cd ../standalone/configuration
keytool -genkeypair -keystore keystore.jks -dname "CN=localhost, OU=test, O=test, L=test, C=test" -keypass XXXX -storepass XXXX -keyalg RSA -alias localhost -validity 10000 -ext SAN=dns:localhost,ip:127.0.0.1
keytool -export -keystore keystore.jks -alias localhost -file localhost.cer
keytool -import -trustcacerts -alias localhost -file localhost.cer -keystore cacerts -storepass changeit
Configure the server to use the previous certificate using the CLI interface:
cd ../../bin
./jboss-cli.sh --connect
/subsystem=elytron/key-store=localhost:add(type=jks, relative-to=jboss.server.config.dir, path=keystore.jks, credential-reference={clear-text=XXXX})
/subsystem=elytron/key-manager=localhost-manager:add(key-store=localhost, alias-filter=localhost, credential-reference={clear-text=XXXX})
/subsystem=elytron/server-ssl-context=localhost-context:add(key-manager=localhost-manager, protocols=["TLSv1.2"])
batch
/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)
/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=localhost-context)
run-batch
We also configure the self-signed certificate as trusted for the whole JVM using a system property.
/system-property=javax.net.ssl.trustStore:add(value="${jboss.server.config.dir}/cacerts")
Perform the exact same steps for wildfly (installation and certificate; use the same keystores, because both servers are going to run on the same localhost hostname). The wildfly server will be started with an offset of 10000.
./standalone.sh -Djboss.socket.binding.port-offset=10000
At this point we have a keycloak server on port 8443 and a wildfly server on port 18443. Both use https and the same certificate, and they trust each other. So now the keycloak SAML adapters should be installed in the wildfly server.
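As a quick sanity check, the port offset used for wildfly is simple arithmetic over the default socket bindings; the dictionary below only lists the three bindings relevant here:

```python
# The -Djboss.socket.binding.port-offset option shifts every default
# socket binding by a fixed amount; with 10000 the wildfly ports land
# at 18080/18443/19990 next to keycloak's defaults.

OFFSET = 10000
DEFAULTS = {"http": 8080, "https": 8443, "management": 9990}

effective = {name: port + OFFSET for name, port in DEFAULTS.items()}
print(effective["https"])       # 18443, the wildfly https port
print(effective["management"])  # 19990, used by jboss-cli.sh below
```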
wget https://downloads.jboss.org/keycloak/8.0.0/adapters/saml/keycloak-saml-wildfly-adapter-dist-8.0.0.zip
cd ${WILDFLY_HOME}
unzip /path/to/keycloak-saml-wildfly-adapter-dist-8.0.0.zip
cd bin
./standalone.sh -Djboss.socket.binding.port-offset=10000
./jboss-cli.sh --connect controller=localhost:19990 --file=adapter-elytron-install-saml.cli
Now it is time to configure the SAML client. The idea is simple: we will have a SAML-protected application using the keycloak adapter. That application will call a CXF endpoint configured to process the SAML assertion and validate the user. For simplicity I am going to use the same application (the web service endpoint will be located in the same app). Go to the keycloak console, select Clients and create a new SAML client. The client ID should be the endpoint location https://localhost:18443/keycloak-cxf-saml/echo-service/echo (later I will explain this limitation). Set the option Sign Assertions to ON; this way the assertion is also signed and can be replayed in a secure way. My client settings are presented below.
For the configuration of the CXF/wss4j endpoint, the realm certificate will be needed. So go to Realm Settings, select Keys tab and click on the Certificate button of the RSA key. Copy the certificate value and create a file server.cer with the typical certificate header and footer.
-----BEGIN CERTIFICATE-----
<copied certificate from keycloak console>
-----END CERTIFICATE-----
And finally import it into a JKS as a trusted certificate. This will be the store that should be configured later to validate SAML signatures by the web service endpoint.
keytool -import -trustcacerts -alias saml -file server.cer -keystore server.jks -storepass YYYY
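The wrapping step above is mechanical: PEM is just the base64 body split into lines of at most 64 characters between the header and footer. A small sketch (the helper name is mine, and the payload is a dummy stand-in for the real base64 copied from the console):

```python
# Wrap raw base64 copied from the keycloak console into PEM format.
# to_pem is a hypothetical helper for illustration, not part of any tool.
import textwrap

def to_pem(b64: str) -> str:
    """Return a PEM certificate block with 64-character body lines."""
    body = "\n".join(textwrap.wrap(b64.replace("\n", ""), 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

pem = to_pem("MIIC" + "A" * 100)  # dummy base64 payload
print(pem.splitlines()[0])        # -----BEGIN CERTIFICATE-----
```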
Let's start with the development. The first thing to do is to configure the keycloak SAML SSO. For that, just obtain the initial template from the console. Go again to Clients, select our client and click on the Installation tab. Choose the option Keycloak SAML Adapter keycloak-saml.xml and a template configuration can be downloaded. This configuration should be placed in the file WEB-INF/keycloak-saml.xml inside the WAR application bundle. I customized the configuration file like below.
<keycloak-saml-adapter>
    <SP entityID="https://localhost:18443/keycloak-cxf-saml/echo-service/echo"
        sslPolicy="ALL"
        keepDOMAssertion="true"
        nameIDPolicyFormat="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
        logoutPage="/logout.jsp">
        <Keys>
            <Key signing="true">
                <PrivateKeyPem>
                    MII...
                </PrivateKeyPem>
                <CertificatePem>
                    MII...
                </CertificatePem>
            </Key>
        </Keys>
        <IDP entityID="idp"
             signatureAlgorithm="RSA_SHA256"
             signatureCanonicalizationMethod="http://www.w3.org/2001/10/xml-exc-c14n#">
            <SingleSignOnService signRequest="true"
                                 validateResponseSignature="true"
                                 validateAssertionSignature="false"
                                 requestBinding="POST"
                                 bindingUrl="https://localhost:8443/auth/realms/master/protocol/saml"/>
            <SingleLogoutService signRequest="true"
                                 signResponse="true"
                                 validateRequestSignature="true"
                                 validateResponseSignature="true"
                                 requestBinding="POST"
                                 responseBinding="POST"
                                 postBindingUrl="https://localhost:8443/auth/realms/master/protocol/saml"
                                 redirectBindingUrl="https://localhost:8443/auth/realms/master/protocol/saml"/>
        </IDP>
    </SP>
</keycloak-saml-adapter>
The web.xml is configured to protect the application but not the endpoint. The CXF web service is secured via wss4j, so no web security should be applied to it.
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Protect all application</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>*</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>The WS endpoint is public</web-resource-name>
            <url-pattern>/echo-service/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>KEYCLOAK-SAML</auth-method>
        <realm-name>this is ignored currently</realm-name>
    </login-config>
    <security-role>
        <description>Role required to log in to the Application</description>
        <role-name>*</role-name>
    </security-role>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>
The application is configuring the KEYCLOAK login to use the SSO. The full application (/*) is protected, but the WS endpoint (/echo-service/*) is excluded (everyone can access the endpoint at the web level). Besides, any authenticated user can access the application (role *) and secure communication (https) is compulsory (transport is defined as confidential).
Time to create the web service endpoint. This part is very similar to the previous entry about WSS and certificates that I mentioned at the beginning, so you can review it for more information; this subject really is a bit complicated. The simple echo web service is developed like this.
@Stateless
@WebService(name = "echo", targetNamespace = "http://es.rickyepoderi.sample/ws", serviceName = "echo-service")
@Policy(placement = Policy.Placement.BINDING, uri = "WssSamlV20Token11.xml")
@SOAPBinding(style = SOAPBinding.Style.RPC)
@EndpointConfig(configFile = "WEB-INF/jaxws-endpoint-config.xml", configName = "Custom WS-Security Endpoint")
public class Echo {

    @WebMethod
    public String echo(String input) {
        Message message = PhaseInterceptorChain.getCurrentMessage();
        SecurityContext context = message.get(SecurityContext.class);
        Principal caller = null;
        if (context != null) {
            caller = context.getUserPrincipal();
        }
        return (caller == null? "null" : caller.getName()) + " -> " + input;
    }
}
The endpoint is just an echo service but it obtains the user from CXF. If you check it, the web service is configured with a specific WSS policy WssSamlV20Token11.xml file and a configuration file WEB-INF/jaxws-endpoint-config.xml. The next points deal with those files.
- The most complicated part is creating the policy to request a SAML assertion for the web service. For that I checked some of the CXF tests in order to obtain samples of SAML policies and, finally, the following WssSamlV20Token11.xml file is used.
<?xml version="1.0" encoding="UTF-8" ?>
<wsp:Policy wsu:Id="SecurityPolicy"
            xmlns:wsp="http://www.w3.org/ns/ws-policy"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
            xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
    <wsp:ExactlyOne>
        <wsp:All>
            <sp:TransportBinding>
                <wsp:Policy>
                    <sp:TransportToken>
                        <wsp:Policy>
                            <sp:HttpsToken>
                                <wsp:Policy/>
                            </sp:HttpsToken>
                        </wsp:Policy>
                    </sp:TransportToken>
                    <sp:Layout>
                        <wsp:Policy>
                            <sp:Lax/>
                        </wsp:Policy>
                    </sp:Layout>
                    <sp:AlgorithmSuite>
                        <wsp:Policy>
                            <sp:Basic256/>
                        </wsp:Policy>
                    </sp:AlgorithmSuite>
                </wsp:Policy>
            </sp:TransportBinding>
            <sp:SupportingTokens>
                <wsp:Policy>
                    <sp:SamlToken sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
                        <wsp:Policy>
                            <sp:WssSamlV20Token11/>
                        </wsp:Policy>
                    </sp:SamlToken>
                </wsp:Policy>
            </sp:SupportingTokens>
        </wsp:All>
    </wsp:ExactlyOne>
</wsp:Policy>
The policy requests the https protocol and a SAML version 2.0 token. I know this part is horribly complicated, but this is WSS security, not an easy world.
In order to configure the validation of the assertion the file jaxws-endpoint-config.xml is provided.
<?xml version="1.0" encoding="UTF-8"?>
<jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:javaee="http://java.sun.com/xml/ns/javaee"
              xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd">
    <endpoint-config>
        <config-name>Custom WS-Security Endpoint</config-name>
        <property>
            <property-name>ws-security.signature.properties</property-name>
            <property-value>server.properties</property-value>
        </property>
    </endpoint-config>
</jaxws-config>
The server.properties contains the properties to access the keystore to validate the signature of the SAML assertion.
org.apache.wss4j.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.wss4j.crypto.merlin.keystore.type=jks
org.apache.wss4j.crypto.merlin.keystore.password=YYYY
org.apache.wss4j.crypto.merlin.keystore.file=server.jks
And that server.jks is just the keystore created previously in step 8 using the keycloak realm certificate. So, in summary, the endpoint is configured to request a SAML token, and the certificate used by keycloak is configured as trusted for the validation. This way the wss4j implementation can check the SAML assertion received and validate its signature. If everything is OK, the user is recovered by the echo service and returned.
- And here comes the final part: how is the SAML assertion retrieved and used to call the endpoint? For that I created a simple EchoServlet that gets the assertion from the special keycloak principal and calls the endpoint.
WSClient client = new WSClient(request);
out.println(client.callEcho(((SamlPrincipal) request.getUserPrincipal()).getAssertionDocument(), input));
The actual code in the servlet is a bit more complicated because I decided to check the validity of the assertion. A SAML assertion usually carries some time constraints so that the same assertion cannot be used forever. If the assertion is expired, the application forces a re-login of the user. But I decided not to include those details here, to avoid complicating the explanation even more.
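The time check the servlet performs boils down to comparing the current time against the NotBefore/NotOnOrAfter attributes of the assertion's Conditions element. A minimal stdlib sketch of that idea (the element and attribute names follow the SAML 2.0 schema; the helper function itself is mine, not the servlet code):

```python
# Sketch of the time-constraint check on a SAML assertion's Conditions
# element. Only the stdlib is used; assertion_is_valid is a hypothetical
# helper, not part of keycloak or CXF.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def assertion_is_valid(assertion_xml: str, now: datetime) -> bool:
    root = ET.fromstring(assertion_xml)
    cond = root.find("saml:Conditions", NS)
    if cond is None:
        return True  # no time constraints present

    def parse(ts: str) -> datetime:
        # SAML timestamps use a trailing Z for UTC.
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))

    not_before = cond.get("NotBefore")
    not_on_or_after = cond.get("NotOnOrAfter")
    if not_before is not None and now < parse(not_before):
        return False
    if not_on_or_after is not None and now >= parse(not_on_or_after):
        return False
    return True

sample = (
    '<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
    '<saml:Conditions NotBefore="2019-11-30T10:00:00Z"'
    ' NotOnOrAfter="2019-11-30T10:05:00Z"/></saml:Assertion>'
)
print(assertion_is_valid(sample, datetime(2019, 11, 30, 10, 2, tzinfo=timezone.utc)))  # True
print(assertion_is_valid(sample, datetime(2019, 11, 30, 10, 6, tzinfo=timezone.utc)))  # False
```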
The CXF implementation for SAML uses a callback handler that should provide the assertion to be sent by the client (the handler fills a SAMLCallback with the assertion). In this case it is extremely easy because the assertion is just there inside the principal. So I created a KeycloakSamlCallbackHandler that just wraps the assertion and hands it to CXF so that it can be attached to the SOAP message.
public class KeycloakSamlCallbackHandler implements CallbackHandler {

    Document assertion;

    public KeycloakSamlCallbackHandler(Document assertion) {
        this.assertion = assertion;
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        if (callbacks != null) {
            for (Callback callback1 : callbacks) {
                if (callback1 instanceof SAMLCallback) {
                    SAMLCallback callback = (SAMLCallback) callback1;
                    callback.setAssertionElement(assertion.getDocumentElement());
                }
            }
        }
    }
}
And the WSClient just puts the callback handler into the call context. This way the CXF implementation can retrieve the SAML assertion and add it to the SOAP message.
public String callEcho(Document assertion, String input) {
    EchoService service = new EchoService(url);
    Echo echo = service.getEchoPort();
    // Properties for WS-Security configuration
    ((BindingProvider) echo).getRequestContext().put(SecurityConstants.SAML_CALLBACK_HANDLER,
            new KeycloakSamlCallbackHandler(assertion));
    // call the endpoint
    return echo.echo(input);
}
Here it is important to set the option keepDOMAssertion to true, because this way the DOM document of the original assertion is stored in the SAML principal and can be recovered by the application to replay it. More information about the SAML configuration for adapters can be found in the keycloak documentation.
And that is all. A very long and complicated setup, but it shows that you can replay a SAML assertion. I decided to use CXF/wss4j because it is a completely different SAML implementation (it uses opensaml internally). Here is a video that shows that it really works. When I access the application, the browser is redirected to the keycloak login page. The typical SAML dance is performed and finally the browser accesses the application index. The remote user, roles and even the assertion are presented. Note that the assertion is signed and has some restrictions (time and audience constraints). When the web service is called, the echo works and the message is returned with the user correctly identified by the CXF implementation.
But there are some issues here. At least two new features are needed in order to have a proper assertion replay. The first problem is the time restrictions I mentioned before. In keycloak the different times are obtained from the Realm Settings, inside the Tokens tab. The lifespans used are Access Token Lifespan and Client login timeout (the SSO Session Max is also used, but this one is very long by default and therefore not problematic). Those two times are usually very short (one minute) because of OIDC, and they are too short for SAML. So if you really need to use assertion replay, those values need to be increased to cover your needs. The real problem is that SAML clients cannot override the realm settings (OIDC clients can define a specific access token lifespan).
The second issue is the audience. A SAML assertion can also define which endpoints are allowed to use it. This is done with the audience tag (a list of URIs that are allowed to consume the assertion). By default the keycloak server constructs the assertion with the audience limited to the client ID (only that client can use the assertion). This fact absolutely limits assertion replay. If you remember, in step 7 the client was created with a specific ID, which is exactly the URL of the echo endpoint. That was a very nasty trick: it makes both (the app and the CXF endpoint) use the same ID, so both pass the audience validation. But obviously if you wanted to send the assertion to a second endpoint it would fail; the implementation would check the audience constraint and complain that its own URL is not in the list. Maybe CXF/wss4j can be configured not to check the audience, but that is weird; the audience is there for a reason.
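The audience check itself is simple set membership: the consumer's own URI must appear in the assertion's list of allowed audiences. A tiny sketch of the trick described above (the helper name is mine, and the "empty list means unrestricted" convention is an assumption of this sketch):

```python
# Sketch of the SAML audience-restriction check: the consumer's URI must
# be in the assertion's audience list. audience_allows is a hypothetical
# helper for illustration.

def audience_allows(audiences: list, my_uri: str) -> bool:
    """Assume an empty restriction list means no audience constraint."""
    return not audiences or my_uri in audiences

client_id = "https://localhost:18443/keycloak-cxf-saml/echo-service/echo"

# The nasty trick from step 7: app and endpoint share the same ID...
print(audience_allows([client_id], client_id))  # True
# ...but a second endpoint with its own URL would be rejected.
print(audience_allows([client_id], "https://localhost:18443/other/endpoint"))  # False
```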
Therefore I filed two new feature requests for keycloak (JIRA 12000 and 12001) and I am working on them. There is room for other improvements here but, at least, with those two new settings the assertion replay can be used. You can download the full maven project for the PoC application from here.
Best regards!
Saturday, August 10. 2019
Configuring mod-auth-openidc with keycloak

The mod-auth-openidc module provides OIDC authentication for the apache web server. In this entry I will configure the apache module to work with a keycloak server. The idea is that two clients will be configured: the first one will be a normal (confidential) client that performs the usual code-to-token redirect flow; the second one will be a bearer-only endpoint client (the application just validates a token that should be sent using bearer authentication). The blog entry is just a summary of the steps I followed to configure it. I hope it helps someone else. A debian box (called debian.sample.com) was used to perform the tests and configuration.
First the JDK and keycloak are installed and started.
apt-get install openjdk-11-jdk
wget https://downloads.jboss.org/keycloak/6.0.1/keycloak-6.0.1.zip
unzip keycloak-6.0.1.zip
cd keycloak-6.0.1/bin/
./standalone.sh
The admin users are created, one for the underlying application server and one for keycloak itself.
./add-user.sh -u admin -p XXXXX
./add-user-keycloak.sh -u admin -p XXXXX
And finally the interfaces are changed to serve through the correct IPs (not only localhost).
./jboss-cli.sh --connect
/interface=public:write-attribute(name=inet-address, value="${jboss.bind.address:192.168.100.20}")
/interface=management:write-attribute(name=inet-address, value="${jboss.bind.address:0.0.0.0}")
Any OIDC installation uses certificates, so I created a CA and a certificate for the host (the same one is valid for both keycloak and the apache server because they are placed on the same host).
First the CA certificate is created:
cd /etc/ssl
mkdir -p demoCA/newcerts
touch /etc/ssl/demoCA/index.txt
echo 01 > /etc/ssl/demoCA/serial
echo 01 > /etc/ssl/demoCA/crlnumber
openssl req -subj "/C=ES/O=sample.com/CN=ca.sample.com" -new -newkey rsa:2048 -keyout private/cakey.pem -out careq.pem
openssl ca -out cacert.pem -days 10000 -keyfile private/cakey.pem -selfsign -extensions v3_ca -infiles careq.pem
openssl x509 -in cacert.pem -outform DER -out cacert.der
openssl pkcs12 -export -out cacert.p12 -in cacert.pem -inkey private/cakey.pem
And then the certificate for the server:
openssl genrsa -out private/debian.sample.com.key 2048
openssl req -subj "/C=ES/O=sample.com/CN=debian.sample.com" -key private/debian.sample.com.key -new -out debian.sample.com.csr
The alternate names for my host are added in the /etc/ssl/openssl.cnf to have a correct certificate:
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = debian.sample.com
IP.1 = 192.168.100.20
Then the CA is used to sign the certificate request and the server certificate is generated:
openssl ca -in debian.sample.com.csr -cert cacert.pem -keyfile private/cakey.pem -out debian.sample.com.crt -extensions v3_req
openssl pkcs12 -export -out debian.sample.com.p12 -in debian.sample.com.crt -inkey private/debian.sample.com.key
keytool -importkeystore -srckeystore debian.sample.com.p12 -srcstoretype pkcs12 -srcalias 1 -destkeystore debian.sample.com.jks -deststoretype jks -destalias debian.sample.com
cat debian.sample.com.crt cacert.pem > debian.sample.com.all.pem
keytool -import -alias debian.sample.com -file debian.sample.com.all.pem -keystore debian.sample.com.jks
Finally copy the CA certificate to the trusted ones and update the debian certificates.
cp cacert.pem /usr/share/ca-certificates/cacert.crt
echo "cacert.crt" >> /etc/ca-certificates.conf
update-ca-certificates
Now the certificate is added to the keycloak. It is copied to the configuration directory:
cp debian.sample.com.jks /home/java/keycloak-6.0.1/standalone/configuration/
And the elytron subsystem is configured to use it:
/subsystem=elytron/key-store=debian:add(type=jks, relative-to=jboss.server.config.dir, path=debian.sample.com.jks, credential-reference={clear-text=XXXXX})
/subsystem=elytron/key-manager=debian-manager:add(key-store=debian, credential-reference={clear-text=XXXXX})
/subsystem=elytron/server-ssl-context=debian-context:add(key-manager=debian-manager, protocols=["TLSv1.2"])
batch
/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)
/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=debian-context)
run-batch
A client named mod-auth-openidc is created with the confidential access type.
Remember to take note of the client's secret (in the Credentials tab) to later configure the module.
And now the bearer-only client mod-auth-oauth20 is created; the access type is changed to bearer-only:
Its credential will also be needed, because the token will be verified using the introspection endpoint.
Now the apache and all the needed modules are installed into the debian box:
apt-get install apache2 libapache2-mod-php libapache2-mod-auth-openidc
The certificates are configured, so first they are copied into the directories:
cp debian.sample.com.crt /etc/ssl/
cp debian.sample.com.key /etc/ssl/private/
Then the /etc/apache2/sites-available/default-ssl.conf file is modified to use those certificates:
<VirtualHost *:443>
    SSLCertificateFile /etc/ssl/debian.sample.com.crt
    SSLCertificateKeyFile /etc/ssl/private/debian.sample.com.key
    SSLCertificateChainFile /etc/ssl/certs/cacert.pem
    SSLCACertificatePath /etc/ssl/certs/
Enable all the needed modules and the ssl site, then restart the apache service:
a2enmod ssl
a2enmod php7.3
a2enmod auth_openidc
a2ensite default-ssl
systemctl restart apache2
Now the openidc module is configured inside one location of the previous ssl host configuration file:
OIDCProviderMetadataURL https://debian.sample.com:8443/auth/realms/master/.well-known/openid-configuration
OIDCRedirectURI https://debian.sample.com/mod-auth-openidc/oauth2callback
OIDCCryptoPassphrase 0123456789
OIDCClientID mod-auth-openidc
OIDCClientSecret 950225ad-3980-4a22-a14c-5ceebd366328
OIDCProviderTokenEndpointAuth client_secret_basic
OIDCSessionInactivityTimeout 1800
OIDCSessionMaxDuration 28800
#OIDCUserInfoRefreshInterval 60
OIDCRefreshAccessTokenBeforeExpiry 10
OIDCRemoteUserClaim preferred_username
OIDCScope openid
OIDCPassIDTokenAs claims payload
OIDCProviderCheckSessionIFrame "https://debian.sample.com:8443/auth/realms/master/protocol/openid-connect/login-status-iframe.html"
OIDCDefaultLoggedOutURL "https://debian.sample.com"

<Location /mod-auth-openidc>
    AuthType openid-connect
    Require valid-user
    LogLevel debug
</Location>
A little show.php page is prepared to show all the OIDC variables injected (it should be copied inside the location /var/www/html/mod-auth-openidc):
<html>
<body>
<h1>OIDC Variables</h1>
<ul>
<?php
foreach($_SERVER as $key => $value) {
    if (strlen($key) > 4 && substr($key, 0, 5) === "OIDC_") {
        echo "<li><strong>" . $key . "</strong>: " . $value . "</li>";
    }
}
?>
</ul>
<p><a href="oauth2callback?logout=https%3A%2F%2Fdebian.sample.com">logout</a>
<iframe title='empty' style='visibility: hidden;' width='0' height='0' tabindex='-1'
        id='openidc-op' src='oauth2callback?session=iframe_op'>
</iframe>
<iframe title='empty' style='visibility: hidden;' width='0' height='0' tabindex='-1'
        id='openidc-rp' src='oauth2callback?session=iframe_rp&poll=5000'>
</iframe></p>
</body>
</html>
The /mod-auth-openidc location will be protected using OIDC and the configuration is prepared to use the same settings as in keycloak (same timeouts). With this configuration the module redirects the user to the keycloak login page and, once the user is logged in, the code-to-token flow finishes the process. The apache module injects a lot of variables that are shown in the PHP page. Besides, session management is configured with the keycloak iframe. This way, if we log into another application (for example the keycloak account page) and perform a logout, the iframe automatically detects the change in the cookie and the user is logged out (the browser is redirected to the default apache index page, because it was configured as the OIDCDefaultLoggedOutURL). The following video shows the module working.
In my tests the only problem I have seen is that, although mod-auth-openidc is configured to refresh the token before expiration (OIDCRefreshAccessTokenBeforeExpiry is set to 10 seconds, so the access token is automatically refreshed when it is near expiration), the session is maintained even when the refresh fails. In keycloak the session can be deleted (for example removed by an admin, or just because it has reached its max lifetime) and the apache module would not detect it. In my tests only the inactivity timeout (OIDCSessionInactivityTimeout, set to 30 minutes) catches this: the local session in apache is removed because of inactivity, which triggers a new redirect/code-to-token flow that fails, and the user has to log into the system again.
The mod-auth-openidc can also be configured to use plain OAuth 2.0. This is the typical configuration for REST endpoints, which just consume a bearer token and do not need the full OIDC (code-to-token) flow. The module just checks the bearer token sent in the authentication header and returns an OK (200) or an unauthorized error (401). In my configuration I decided to use the keycloak introspection endpoint to validate the token (validation can also be done locally, but this way is simpler).
OIDCOAuthClientID mod-auth-oauth20
OIDCOAuthClientSecret a78de633-5eb2-4ba9-abc2-ec33b86afe83
OIDCOAuthIntrospectionEndpoint "https://debian.sample.com:8443/auth/realms/master/protocol/openid-connect/token/introspect"
OIDCOAuthRemoteUserClaim preferred_username

<Location /mod-auth-oauth20>
    AuthType oauth20
    Require valid-user
    LogLevel debug
</Location>
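The decision the module makes with the introspection result is straightforward: the RFC 7662 response carries an "active" boolean, and only an active token yields a 200. A minimal sketch of that decision (the helper is mine for illustration; a real deployment POSTs the token to the endpoint with the client credentials and parses the JSON response):

```python
# Sketch of the decision on an RFC 7662 token introspection response:
# only an "active" token is accepted (200), anything else is a 401.
# status_for_introspection is a hypothetical helper, not module code.

def status_for_introspection(response: dict) -> int:
    return 200 if response.get("active") else 401

print(status_for_introspection({"active": True, "username": "admin"}))  # 200
print(status_for_introspection({"active": False}))                      # 401
print(status_for_introspection({}))                                     # 401
```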
So another location, /mod-auth-oauth20, is used to set up an oauth20 application. Here a simple hello world file is added, hello-world.php, inside the directory /var/www/html/mod-auth-oauth20:
<?php
header("Content-Type: text/plain");
echo "Hello " . $_SERVER["REMOTE_USER"] . "!";
?>
Finally, the idea is that the first location (which uses OIDC and has access to an access token that is automatically refreshed by the module 10 seconds before its expiration) can call the bearer-only application. The call.php does exactly that and is located again in the first location directory /var/www/html/mod-auth-openidc:
<?php
// Get CURL resource
$curl = curl_init();
curl_setopt_array($curl, [
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_URL => 'https://debian.sample.com/mod-auth-oauth20/hello-world.php',
    CURLOPT_HTTPHEADER => array('Authorization: Bearer ' . $_SERVER['OIDC_access_token'])
]);
// Send the request & save response to $resp
$resp = curl_exec($curl);
// Close request to clear up some resources
curl_close($curl);
header("Content-Type: text/plain");
echo $resp;
?>
And this works: the following video shows that the endpoint returns a 401 error when called directly, but if we log into the OIDC location and access the call page, the hello world application is executed correctly. So everything works as expected.
In this case the only problem is that the token in keycloak is very big (around 1KB) and it is not saved in the module's default cache (there seems to be a key limit of 512B), so the introspection call is always performed. In a real production scenario I would try to do local validation or even use another cache. But it works OK for my testing setup.
And that is all. My idea was to show that mod-auth-openidc and keycloak can work together quite nicely. My only particular snag is that the token refresh does not log out on error: if the module is configured to automatically refresh the access token and the refresh fails, the session in apache is kept alive and used. That will bring issues for sure (for example, the call to the hello-world endpoint will fail because the token is expired) and I think it would be nice if the module could be configured to log out on a refresh error. I am trying to improve this, so I asked in the google groups list; let's see if this leads to something fruitful.
Best regards!