In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.
Keycloak's documentation seems vast, but isn't. There is also no way to search inside their documentation. It's a pity.
Better documentation is contained in the administration web UI itself. There are so many "hints" and tooltips for almost every option there is. It really helped me a lot.
Keycloak is good software. It never failed for me. Even upgrading from 7.x.x to 16.x.x somehow just worked.
Yes, their docker image is fat, but it's also very flexible. Now that they are basing Keycloak on Quarkus instead of WildFly, the docker image should shrink in size.
quay.io/keycloak/keycloak    18.0.0           a6bd0f949af0   15 hours ago   562MB
quay.io/keycloak/keycloak    18.0.0-legacy    421e95f49589   46 hours ago   753MB
OK, still big :). Beware: they aren't using Docker Hub anymore. Newer versions are on Quay only (https://quay.io/repository/keycloak/keycloak).
I'm happy with Keycloak. Also nice folks around Keycloak.
We're actually working on a new version of the Administration UI at the moment (I'm one of the devs) so this is useful feedback. We're looking for folks to try it out, so take a look at https://github.com/keycloak/keycloak-admin-ui/.
You can try it out on the latest Keycloak by passing the --features=admin2 flag on startup.
We have way too many issues with Keycloak. Sometimes I wonder why we integrated this. One of the main issues is that when you authorize via GitHub but cancel the authentication, it redirects to the Keycloak page rather than our login page. Couldn't find any solution yet.
> Keycloak's documentation seems vast, but isn't. There is also no way to search inside their documentation. It's a pity.
> Better documentation is contained in the administration web UI itself. There are so many "hints" and tooltips for almost every option there is. It really helped me a lot.
To echo everyone else: the Keycloak documentation does not do a good job of hand-holding you at all, and the number of possible ways you can configure and use the system and the amount of jargon and terminology used is massively overwhelming to someone trying to get started. It would be very helpful to have some "white paper"-esque summaries that walk you through some simple, typical use-cases.
I looked through the docs quickly before making this post and as an example here's a basic task for initial setup ("hook up an IDP", basically giving Keycloak its database of users; see https://www.keycloak.org/server/configuration-provider), and it's utterly incomprehensible to any human being who doesn't already know how to work the system, and really essentially worthless even then. It's just... reading me the command line options and a couple config files? What do any of those values even mean? This is core functionality for Keycloak, and the documentation consists of "yeah, here's a command line with placeholders and a text file syntax, good luck bitches!".
Honestly I feel like you could do better simply by jumping into the UI and playing with options, it's not entirely unintuitive what's going on in the UI, but the docs are basically incomprehensible.
I actually know of several projects that have pretty much bogged down because of Keycloak configuration or role/privilege misconfiguration issues and it's not hard to see why. It's the Turing tar-pit of IdPs: everything is possible and nothing is easy (or documented). Which is a shame because it seems like an awesome piece of software, just inscrutable to the uninitiated.
As others are noting, I'm sure some of this is due to OAuth2 being an inscrutable piece of shit in general, same thing, it tries to do everything and it's so un-opinionated that you end up with a bunch of basically incompatible implementations that are each effectively their own "standard" anyway.
(posted this on the wrong child, moving it to the parent)
> This is one area where incentives don't align correctly for open source projects that offer commercial support.
Disclaimer: Former Red Hatter but worked on OpenShift, not Keycloak
Working as a person providing commercial support for open source projects, I promise it doesn't actually work that way. Incentives are entirely for creating good documentation. Having crappy docs only hurts project adoption for paying and non-paying customers, increases the support burden, and wastes the time of your employees (who are the primary consumers of that documentation).
Usually documentation isn't great because writing (and maintaining!) good documentation is really hard. It's a continual effort and it takes engineer time away from bug fixes and feature dev, two things for which there is never-ending demand.
Edit: Pro-Tip: With Red Hat projects (like Keycloak, OKD, etc) it's always worth looking at the RH product docs as well as "open source" docs. For example if you use OKD, check OpenShift docs as well as OKD docs. You do (unfortunately and I wish they'd remove this) usually have to log in to a Red Hat account but you don't have to pay. You can create a free account and use that.
> This is one area where incentives don't align correctly for open source projects that offer commercial support.
This is true in some[0] cases. It's also true that documentation is a key source of customer acquisition and retention.
Projects get traction by making it useful out of the box[1] for some use-cases, making it appealing for hackers to configure and extend, and teasing features the former would pay for while the latter figure out buy vs. build.
Projects that do this well also learn shittons from real-world usage and feedback informing their roadmap and new opportunities to pursue.
[0] True when the perspective is "If we make the docs too good we're losing revenue" / "Everyone using Feature X gratis is a loss in MRR." It's an understandable view that's held widely. It's not often a significant revenue factor in my experience, and ~never when accounting for product and market insights gained by wider adoption.
[1] https://news.ycombinator.com/item?id=31259034
We're still on the older one and looking forward to the Quarkus improvement specifically for boot times. Even with an empty DB, the old one takes several minutes to load and come up. It's the long pole in our install.
Very happy with KC otherwise. We make heavy use of its nice API to create providers and clients at install time.
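For what it's worth, install-time automation against the Admin REST API can be fairly compact. A minimal sketch in TypeScript (Node 18+ for global fetch); the base URL, realm, credentials and client settings below are placeholders, and legacy WildFly-based distributions serve the same endpoints under an /auth prefix:

```typescript
// Minimal sketch of install-time automation against the Keycloak Admin REST API.
// KC_URL, realm, credentials and the client settings are placeholders.
const KC_URL = "https://keycloak.example.com";
const REALM = "myrealm";

async function adminToken(): Promise<string> {
  // Password grant against the built-in admin-cli client in the master realm.
  const res = await fetch(`${KC_URL}/realms/master/protocol/openid-connect/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "password",
      client_id: "admin-cli",
      username: "admin",
      password: process.env.KC_ADMIN_PASSWORD ?? "",
    }),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  return (await res.json()).access_token;
}

async function createClient(): Promise<void> {
  const token = await adminToken();
  // POST /admin/realms/{realm}/clients takes a ClientRepresentation.
  const res = await fetch(`${KC_URL}/admin/realms/${REALM}/clients`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      clientId: "my-app",
      protocol: "openid-connect",
      publicClient: false,
      redirectUris: ["https://my-app.example.com/callback"],
    }),
  });
  // 201 on success; 409 means it already exists, which an idempotent installer can ignore.
  if (!res.ok && res.status !== 409) throw new Error(`client create failed: ${res.status}`);
}

createClient().catch(console.error);
```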
I'm literally about to jump from 8 to 17 this week, so that's good to hear. It seemed seamless on my local setup and I was wondering if it was just too good to be true. It's a great piece of software.
You are correct about the documentation. I find the tragedy of open source documentation is that the people who need it most - the novices - are the ones who could write it best - if only they knew whether what they were saying was accurate. And then by the time you become an old-timer, and know the ways, you just want to wipe your hands and walk away, because you're tired... and still not sure if all your knowledge is accurate.
But anyway, once it's all figured out, it runs very reliably.
>> 562MB
Curious, why is the Quay image/container so large? Is there a way to list the contents without downloading it?
The base image (registry.access.redhat.com/ubi8-minimal) is about 100 MiB.
ID CREATED CREATED BY SIZE COMMENT
a6bd0f949af01b5680767225c3ac2b428d9b6921a6a9a420f6189f2523931c4c 18 hours ago ENTRYPOINT ["/opt/keycloak/bin/kc.sh"] 0 B buildkit.dockerfile.v0
<missing> 18 hours ago EXPOSE map[8443/tcp:{}] 0 B buildkit.dockerfile.v0
<missing> 18 hours ago EXPOSE map[8080/tcp:{}] 0 B buildkit.dockerfile.v0
<missing> 18 hours ago USER 1000 0 B buildkit.dockerfile.v0
<missing> 18 hours ago RUN /bin/sh -c microdnf update -y && microdnf install -y java-11-openjdk-headless && microdnf clean all && rm -rf /var/cache/yum/* && echo "keycloak:x:0:root" >> /etc/group && echo "keycloak:x:1000:0:keycloak user:/opt/keycloak:/sbin/nologin" >> /etc/passwd # buildkit 272 MB buildkit.dockerfile.v0
<missing> 18 hours ago COPY /opt/keycloak /opt/keycloak # buildkit 192 MB buildkit.dockerfile.v0
1ecf95eda522cf8db84ac321e43a353deea042480ed4e97e02c5290eb53390c3 5 days ago 20.5 kB
<missing> 5 days ago 107 MB Imported from -
For the most part I am also happy with Keycloak, but they could do a far better job documenting things, especially their language adapters. For example the "Readme" for the `keycloak-connect` Node.js package has a link to documentation, but that documentation fails to document anything around the package.
Likewise I had better luck once I understood OpenID and then treated Keycloak as an extension of that. I even ended up writing my own code to deal with the bearer token passed to our API, because I couldn't find anything. If anyone is interested I can share it, but it isn't anything amazing.
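For anyone curious, validating a Keycloak-issued bearer token yourself doesn't have to be much code. A hedged sketch using the jose library (by the same panva mentioned downthread); the realm URL is a placeholder, and legacy distributions put an /auth prefix in front of these paths:

```typescript
// Sketch: validating a Keycloak-issued bearer token in an API using the "jose" library.
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://keycloak.example.com/realms/myrealm";
// Keycloak publishes the realm's signing keys at the standard JWKS endpoint.
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/protocol/openid-connect/certs`));

export async function verifyBearer(authorizationHeader: string | undefined) {
  const token = authorizationHeader?.replace(/^Bearer /, "");
  if (!token) throw new Error("missing bearer token");
  // Checks signature, expiry and issuer; add an audience check if your tokens carry one.
  const { payload } = await jwtVerify(token, JWKS, { issuer: ISSUER });
  return payload; // sub, preferred_username, realm_access.roles, ...
}
```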
Most of my best help came from outside of the Keycloak support groups and instead reaching out to other people who use Keycloak.
> In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.
Hot take: OAuth2 is a really shitty protocol. It is one of those technologies that get a lot of good press because they enable you to do stuff you wouldn't be able to do in a standardized manner without resorting to abysmal alternatives (SAML in this case). And because of that it shines in comparison. But looking at it from a secure protocol design perspective it is riddled with accidental complexity producing unnecessary footguns.
The main culprit is the idea of transferring security-critical data over URLs. IIUC this was done to reduce state on the involved servers, but that advantage has completely vanished if you follow today's best practices of using the PKCE, state and nonce parameters (together with the authorization code flow). And more than half of the attacks you need to prevent or mitigate with the modern extensions to the original OAuth concepts are possible because grabbing data from URLs is so easy: An attacker can trick you into using a malicious redirect URL? Lock down the possible redirects with an explicitly managed URL allow-list. URLs can be cached and later accessed by malicious parties? Don't transmit the main secret (bearer token) via URL parameters, but instead transmit an authorization code which you can exchange (exactly) once for the real bearer token. A malicious app can register your URL scheme in your smartphone OS? Add PKCE via server-side state to prove that the second request is really from the same party as the first request...
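To make those mitigations concrete, here is roughly what the client-side half of the authorization code flow with PKCE, state and nonce looks like. Endpoints and client_id are placeholders; any OIDC provider, a Keycloak realm included, exposes equivalents:

```typescript
// Sketch of the client side of the mitigations described above: authorization code flow
// with PKCE plus state/nonce. URLs and client_id are placeholders.
import { createHash, randomBytes } from "node:crypto";

const b64url = (b: Buffer) => b.toString("base64url");

// 1. Per-login secrets: the verifier stays with the client, only its hash goes into the URL.
const codeVerifier = b64url(randomBytes(32));
const codeChallenge = b64url(createHash("sha256").update(codeVerifier).digest());
const state = b64url(randomBytes(16)); // ties the redirect back to this login attempt
const nonce = b64url(randomBytes(16)); // ends up inside the ID token, binding it to this attempt

// 2. Redirect the browser to the authorization endpoint.
const authorizeUrl = new URL("https://idp.example.com/realms/myrealm/protocol/openid-connect/auth");
authorizeUrl.search = new URLSearchParams({
  response_type: "code",
  client_id: "my-app",
  redirect_uri: "https://my-app.example.com/callback",
  scope: "openid profile",
  state,
  nonce,
  code_challenge: codeChallenge,
  code_challenge_method: "S256",
}).toString();

// 3. On the callback: check `state`, then exchange the one-time code over a back channel,
//    proving possession of the verifier so a stolen code alone is useless.
async function exchangeCode(code: string) {
  const res = await fetch("https://idp.example.com/realms/myrealm/protocol/openid-connect/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: "https://my-app.example.com/callback",
      client_id: "my-app",
      code_verifier: codeVerifier,
    }),
  });
  return res.json(); // { access_token, id_token, ... }; verify the nonce inside id_token
}
```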
It could have been so simple (see [1] for the OAuth2 roles): The client (third party application) opens a session at the authorization server, detailing the requested rights and scopes. The authorization server returns two random IDs – a public session identifier, and a secret session identifier for the client – and stores everything in the database. The client directs the user (resource owner) to the authorization server giving them the public session identifier (thus the user and possible attackers only ever have the possibility to see the public session identifier). The authorization server uses the public session identifier to look up all the details of the session (requested rights and scopes and who wants access) and presents that to the user (resource owner) for approval. When that is given, the user is directed back to the client carrying only the public session identifier (potentially not even that is necessary, if the user can be identified via cookies), and the client can fetch the bearer token from the authorization server using the secret session identifier. That would be so much easier...
Alas, we are stuck with OAuth2 for historic reasons.
[1] https://aaronparecki.com/oauth-2-simplified/#roles
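Spelled out in code, the two-identifier design described above might look roughly like this; purely a toy sketch of a hypothetical protocol, not anything that exists today:

```typescript
// Toy sketch of the two-identifier design: the authorization server keeps all state,
// and the URL only ever carries the public session identifier.
import { randomUUID } from "node:crypto";

type Session = { clientId: string; scopes: string[]; approved: boolean; token?: string };
const byPublicId = new Map<string, Session>();
const bySecretId = new Map<string, Session>();

// Client opens a session over a back channel and gets both identifiers.
function openSession(clientId: string, scopes: string[]) {
  const session: Session = { clientId, scopes, approved: false };
  const publicId = randomUUID();
  const secretId = randomUUID();
  byPublicId.set(publicId, session);
  bySecretId.set(secretId, session);
  return { publicId, secretId }; // secretId never leaves the client/server back channel
}

// User arrives with only the public id; the server looks everything up and asks for consent.
function approve(publicId: string) {
  const session = byPublicId.get(publicId);
  if (!session) throw new Error("unknown session");
  session.approved = true;
  session.token = randomUUID(); // issue the bearer token, but keep it server-side
}

// Client redeems the token with its secret id; nothing sensitive ever appeared in a URL.
function fetchToken(secretId: string) {
  const session = bySecretId.get(secretId);
  if (!session?.approved) throw new Error("not approved");
  return session.token;
}
```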
You're right about the complexity and the steep learning curve, but there's hope that OAuth 2.1 will simplify this mess by forcing almost everyone to use a simple setup: authorization code + PKCE + DPoP. No "implicit flow" madness.
Another big problem with OAuth is the lack of quality client/server libraries. For example, in JS/Node, there's just one lone hero (https://github.com/panva) doing great work against an army of rubbish JWT/OAuth libs.
For most Keycloak users, a very tiny subset of OIDC is being used too. Usually there is no three-way relationship between a third-party developer, an API provider and a user anymore. You could rip scopes out of Keycloak and few users would be left unable to cover their use cases. Rarely is there more than one set of scopes being used with the same client.
Keycloak also supports some very obscure specs, my favourite probably being "Client Initiated Backchannel Authentication", which can enable a push-notification-to-authenticator-app style of authentication flow using a lot of polling and/or webhooks.
Can you disclose the number of users & apps you have? Are you using Keycloak or do you pay for Red Hat Single Sign-On (for context, that's the name of the downstream product that Red Hat sell subscriptions for).
The downside to using Red Hat Single Sign-On is that it is vastly inferior to upstream Keycloak, as it is so many versions behind.
This means that bug fixes and features haven't trickled down yet. Although RH SSO 7.5 jumped from Keycloak version 9.0.17 (in RH SSO 7.4) to 15.0.2 so there's some improvement there... but Keycloak just released 18.0.0...
Thanks for mentioning this, I didn't realize Keycloak is a RedHat product. I'll plan to move to something else. Anything RedHat makes turns into a catastrophe.
My company used Keycloak for a long time (I'm not there any more) and I agree with everyone here, it works great, but it's hard to understand unless you already know oauth/oidc, and it is a huge binary.
While Keycloak is a great out-of-the-box solution, my #1 complaint at the time was how heavyweight it was, which was a burden for development, followed closely by its packaging as a J2EE app and bundling with Wildfly (at the time).
This meant we needed to know not only about Keycloak itself, but also about Wildfly's special quirks, the clustering system (Infinispan??), Java and Docker.
Now it's packaged with Quarkus, which is another dependency to learn about, and to be honest despite the quality of the finished product, all those dependencies have become pretty off-putting.
So while I can recommend Keycloak's functionality, if you're not already deploying Java apps as part of your job, I suspect it will present a pretty serious administrative burden to deploy into production.
For me this is all kind of opaque. Apart from a theme put into a folder and some environment variables set, I don't touch anything in Keycloak: first, I had no requirement to consider it, and second, I would very likely be doing something that is maybe not best practice, i.e. OIDC/OAuth based.
It's the only Java app we run in the stack, but that doesn't matter to me in Docker, and within Windows we run a portable Java from a subfolder of Keycloak, so it's not system-wide in PATH.
I was interested in Zitadel, but because it requires Kubernetes, it can't replace Keycloak in my docker-compose managed homelab setup. If you could just run it as a standalone container, I'd give it a shot.
What is your opinion on ORY, specifically ORY Kratos? We have been building on Kratos for some time now and find that it is not super well documented, but it is still a very pleasant experience and their ORY Cloud project is backed by support from their team.
How does Zitadel differ/compare? Do you have similar goals as an organization?
If you are intrigued by the differences, you can read some of them here [2]
Oh and judging from your username: it could be interesting to you... because we use event sourcing and CQRS ;-)
Disclaimer: I am one of the authors
1. https://github.com/zitadel/zitadel/
2. https://zitadel.ch/blog/zitadel-vs-keycloak
Was working at a Java shop once which used Keycloak as a central IAM solution. As an FE dev, I was tasked to customize/style the login page provided by Keycloak, and quickly faced what you described: pretty heavily Java-based; even to edit HTML templates I had to recompile using a full-blown Java/JVM stack.
As an FE dev without Java background, this became pretty difficult. But once we finished that with the help of some of the BE Java devs, it ran (and still runs) quite stable, and also the KeycloakJS adapter[1] I integrated was alright without many surprises.
[1] https://www.npmjs.com/package/keycloak-js
Next time check the docs and turn off the theme cache: https://www.keycloak.org/docs/latest/server_development/#cre...
> While creating a theme it’s a good idea to disable caching as this makes it possible to edit theme resources directly from the themes directory without restarting Keycloak.
I customized Keycloak 10 login page a while back, and it did not require anything but markup. My Keycloak 18 instance runs the same customization now, unchanged.
> my #1 complaint at the time was how heavyweight it was, which was a burden for development
What do you mean by "heavyweight"?
I ask because Java is used in other large open source projects (Elasticsearch, Cassandra, Android) and it really depends on how it's being used (by Keycloak as well).
I've also come across GraalVM which can significantly reduce Java memory consumption (if that is something you are referring to)
Well just bear in mind that this was about 4 years ago (we were fairly early adopters apparently), and I understand that my comments here are probably no longer fair, and probably don't apply to KC as it is today.
But since you asked, basically Keycloak was by far the single largest component of our stack, which didn't sit well with me because by any measure, our application was far more complex than Keycloak.
At the time we were transitioning away from JEE to JSE microservices, and Keycloak was a full-blown JEE application with all the heaviness that we were successfully leaving behind; it took over a minute to start, while our other Java services would launch in less than a second. Of course this didn't impact production at all - but most engineering time is spent in development and testing, where this was a real problem, especially if we were experimenting with and learning about new features.
All of this contributed to my perception of heaviness.
We hear comments like this a lot. Keycloak has a lot of functionality but also a lot of quirks. We have a product, FusionAuth, that folks often consider at the same time.
Similarities between our products:
* Overall base feature set (OAuth, OIDC, SAML, user management, authentication, RBAC) is similar.
* Both written in Java.
* Both use container technology to hide Java from you :)
* Both offer commercial support (Redhat SSO is the commercial offering for Keycloak, FusionAuth has paid editions with support). FusionAuth is much less expensive (compare https://marketplace.redhat.com/en-us/products/red-hat-single... with https://fusionauth.io/pricing .)
* Both offer the ability to self-host.
* Both develop in the open (we use GitHub issues, they use a mailing list).
Differences:
* They're OSS, we are free as in beer.
* I haven't found a compelling hosting solution for Keycloak, most folks self host. FusionAuth offers a hosted product if you'd like.
* Keycloak has more niche features (CAS SSO support) and a bigger community.
* FusionAuth has better, more straightforward docs.
* FusionAuth user UI customization is easier.
* FusionAuth supports a number of languages with client libraries for easier config management. I only saw a python client library for Keycloak.
* FusionAuth supports unlimited tenants, limited only by your server's resources. We have folks running thousands of tenants. Last time I looked Keycloak had issues around 400 realms (their term for tenants): https://keycloak.discourse.group/t/maximum-limit-of-realms/8...
Disclosure: I work for FusionAuth.
Don't worry, that was obvious.
You can run many tenants inside a single realm (we do). They can all log in to their account using Google, LI, etc., as well as email/password.
It's only if your tenant requires SSO, e.g. to their corporate idp, that their own realm is required.
Of course you may choose architecturally to put every tenant in their own realm but that is overkill IMO.
We tried using keycloak in a startup where I worked. It needed a loooooooot of memory and was very slow to start. It probably needed some JVM tuning, but we were just deploying as a stateful set (for the postgres). The docker images were also huge.
We had to use another FOSS project called Gatekeeper as authentication software along with Keycloak, which got obsoleted and replaced with a different project (Louketo Proxy).
The community support was also relatively less active than other FOSS projects that we used (for other areas of the stack). Overall, the experience was not so great, so we decided to ditch both Keycloak and Gatekeeper. This was about a couple of years back. Not sure what the current status is.
Some other alternatives that we wanted to try were from the Ory project (Kratos etc.), but we just went with some proprietary auth solution in our startup.
We have been using it for 6-7 years now. It has been very stable, and we've integrated a lot of external IdPs to offer proper SSO on multiple of our software stacks. Added a lot of other open source pieces of software we run for our backoffice needs (hashi vault, adminbro, etc.). So far very happy with it. Running in clustered mode without issues and as such little issue with the startup times. It probably helps that we have a solid background in Java-based development and deployment and are less worried about the amount of memory it uses for the full suite.
This fascinated me - where and how does this fit in with other identity providers (and thence into SSO)?
I kind of yearn for client certificates everywhere, simply because I can grok how that remains secure as we pass through layer after layer. The rest I just worry about.
With Quarkus, they went to the other extreme -- if you are using Docker, to get as fast a startup as possible, you have to build your image with your configuration and the modules you use baked in.
Overall, I'm pretty satisfied. There are some bumps, but they are not Keycloak's fault (having two different keytabs for two different host names for two different container images sharing the same IP, with the reverse name not matching, is kind of difficult).
In my experience, Keycloak is best treated as a "pet" on the "pets vs. cattle" spectrum. It takes a while to warm up, so you don't want to be constantly restarting it. I deployed it out of sync with the main application deployments.
As an open source option, it's quite powerful and full-featured. It's also quite configurable.
If I had one feature ask, it's that it doesn't play well with infrastructure-as-code ideas. While you can load a new realm from a JSON file, it's harder to keep changes synced after that.
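The export half at least is scriptable. A sketch that pulls the realm representation for diffing and version control; URL and realm are placeholders, the admin token can be obtained as in the earlier install-time sketch, and pushing drift back in remains the hard part:

```typescript
// Sketch: pulling a realm representation out of the Admin REST API so it can be diffed
// and version-controlled.
import { writeFile } from "node:fs/promises";

async function exportRealm(adminToken: string, kcUrl: string, realm: string): Promise<void> {
  const res = await fetch(`${kcUrl}/admin/realms/${realm}`, {
    headers: { Authorization: `Bearer ${adminToken}` },
  });
  if (!res.ok) throw new Error(`export failed: ${res.status}`);
  const representation = await res.json();
  // Realm-level settings only: clients, users and groups live under separate endpoints.
  await writeFile(`${realm}.realm.json`, JSON.stringify(representation, null, 2));
}
```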
Have used and brought Keycloak into many companies over the years as a solution. The learning curve is a little steep, but it essentially works as designed, either as the IdP (rare in my experience) or, more commonly, as an IAM broker.
Big companies need it because their hands are tied to old and inflexible vendors' APIs. However they can, with some effort, craft a branded and modern UI/UX. The backend works with just about anything old auth-related whilst supporting newer, modern auth schemes.
I am surprised IBM has not made RHEL ruin it yet.
To say IBM is a slightly better steward of their open source efforts than Oracle never leaves one with much comfort.
RedHat never needed anyone's help to ruin things. Their solutions are poorly designed bloated crap that "can get the job done" if you run them within a RedHat platform and don't mind banging your head against a brick wall. Just because they're open source darlings doesn't mean we can't call a spade a spade.
The secret to making keycloak UI/UX good is to disregard the account console and build your own with the new accounts API (which the accounts2 console also uses).
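A hedged sketch of what talking to that API can look like, using the logged-in user's own access token; the exact path has moved between Keycloak versions, so verify it against your server:

```typescript
// Sketch: reading account data from the Account REST API (what the newer account console uses).
// The URL below is an assumption to check against your Keycloak version.
async function fetchAccount(accessToken: string, kcUrl: string, realm: string) {
  const res = await fetch(`${kcUrl}/realms/${realm}/account/`, {
    headers: { Authorization: `Bearer ${accessToken}`, Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`account request failed: ${res.status}`);
  return res.json(); // username, email, firstName, lastName, attributes, ...
}
```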
Also, if you just use one broker you can skip the login experience entirely.
I was interested in Authentik, but I was perusing the docs and was extremely put off by the way in which Authentik manages itself as well as additional “outposts”.
> The docker integration will automatically deploy and manage outpost containers using the Docker HTTP API.
> This integration has the advantage over manual deployments of automatic updates (whenever authentik is updated, it updates the outposts)
NO. I do not want software I use to update itself automatically in prod. And I especially do not want to give a docker socket to it so that it can automatically add new components.
https://goauthentik.io/docs/outposts/integrations/docker
I’ve been a keycloak advocate since my jboss days (really the only good thing that came out of jboss). I have never heard of authentik and I’m so glad you mentioned it, it looks amazing! I had been contemplating doing a similar project but this would satisfy that need. Thank you.
This looks awesome! I dropped Keycloak because its 2 GiB of RAM was too much for me to commit to SSO on my tiny VPS, so I just switched to static htpasswd management. But this looks like it might be a great replacement.
It seems to support groups/selective access to services through group membership, which is great. Does it support username authentication or does it require that an LDAP server or other OIDP is used as a source of truth?
Using Keycloak for all authentication both in the cloud under Linux and within Windows Enterprise Environments. Use it for SSO for Node-RED, Grafana, Aspnet & VueJS apps. Was fairly easy to move from Auth0.
(Caddy maintainer here) I don't use that plugin myself but AFAICT most users ask questions on the GitHub repo so probably best to ask for help there if you need it.
As an aside, I've been working on making the Forward Auth usecase viable with Caddy, and we just got it working today https://github.com/caddyserver/caddy/pull/4739
Thank you, my question was more of a message in a bottle, to see if anybody was using it. I was able to configure the plugin nicely and get Gitea as a source of users and groups to control access to other applications.
And by the way, thank you, I am really impressed by the quality of Caddy.
I have a few services on my family server (say, Gitea, Grafana, a finance tracking app, etc.). I'd like to have SSO but also limit which users can use which services (e.g. my significant other can use Grafana but not Gitea).
Is integrating the above services with Keycloak enough? Or would I need other components? Or maybe I've got it wrong and should reconsider the architecture?
It will definitely work - Keycloak can provide its own user database, or it can use an external one, as well as do some crazier things that go outside of the scope you mentioned.
In the simplest setup (non-HA, local user database), you would create users inside Keycloak, assign them to different groups, then create clients (which hold the configuration for individual applications like Grafana and Gitea) and create rules that specify that only users who belong to a specific group can log in to a specific application.
You can also allow linking multiple external SSOs this way to a single Keycloak identity, and even include login through Kerberos 5 or client certificates.
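The group part of that setup can also be scripted through the Admin REST API if you prefer. A sketch, with base URL, realm, names and the admin token (obtained as in the earlier sketch) as placeholders; restricting a client to group members is then configured in the admin console as described above:

```typescript
// Sketch: scripting the group side of the setup above via the Keycloak Admin REST API.
async function ensureGroupMembership(adminToken: string, kcUrl: string, realm: string) {
  const headers = { Authorization: `Bearer ${adminToken}`, "Content-Type": "application/json" };

  // Create a group, e.g. "grafana-users".
  await fetch(`${kcUrl}/admin/realms/${realm}/groups`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name: "grafana-users" }),
  });

  // Look up the group and the user.
  const groups = await (await fetch(`${kcUrl}/admin/realms/${realm}/groups?search=grafana-users`, { headers })).json();
  const users = await (await fetch(`${kcUrl}/admin/realms/${realm}/users?username=alice`, { headers })).json();

  // PUT /users/{id}/groups/{groupId} adds the user to the group.
  await fetch(`${kcUrl}/admin/realms/${realm}/users/${users[0].id}/groups/${groups[0].id}`, {
    method: "PUT",
    headers,
  });
}
```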
Oh, right, Quaycloak.
Do you know why? Is it because of the docker hub pricing changes?
I found this discussion on the mailing list but didn't see a reason why: https://lists.jboss.org/pipermail/keycloak-user/2019-March/0...
Mostly I think the answer is just that it's a Red Hat project and Red Hat wants to use their ecosystem.
Having said that, I know there has been some falling out between RH and Docker some time ago, which was one of the reasons RH ended up creating Podman.
https://www.keycloak.org/server/containers
https://github.com/curveball/a12n-server
It's reliable, flexible and actively developed, so not a bad choice as a self-hosted IAM solution.
The biggest benefit is that Authentik supports Forward Auth out of the box. This means that you might not need oauth2-proxy.
I think Keycloak 18 should use less memory? Might be worth a try.
Edit: Ok now. Looks like they use Let's Encrypt which will default to a self signed cert if it can't get a LE one for whatever reason IIRC.
[0]: https://github.com/greenpau/caddy-security
The biggest hurdle I see is whether all of your apps support SAML or OAuth/OIDC for authentication/authorization. The SSO tax is a real thing.