[release-1.4] update RHBK config docs with security consideration (#869)
Co-authored-by: Fabrice Flore-Thébault <ffloreth@redhat.com>
JessicaJHee and themr0c authored Jan 27, 2025
1 parent 1a023cf commit 015709f
Showing 2 changed files with 35 additions and 6 deletions.
@@ -23,11 +23,6 @@ Save the value for the next step:
* **Client ID**
* **Client Secret**

.. Configure your {rhbk} realm for performance and security:
... Navigate to the **Configure** > **Realm Settings**.
... Set the **Access Token Lifespan** to a value greater than five minutes (preferably 10 or 15 minutes) to prevent performance issues from frequent refresh token requests for every API call.
... Enable the **Revoke Refresh Token** option to improve security by enabling the refresh token rotation strategy.

.. To prepare for the verification steps, in the same realm, get the credential information for an existing user or link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/getting_started_guide/index#getting-started-zip-create-a-user[create a user]. Save the user credential information for the verification steps.

. To add your {rhsso} credentials to your {product-short}, add the following key/value pairs to link:{plugins-configure-book-url}#provisioning-your-custom-configuration[your {product-short} secrets]:
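As an illustrative sketch only (the Secret name and key names below are assumptions, not authoritative values from the linked procedure, and the metadata URL follows the standard Keycloak well-known pattern), the Secret might look like:

[source,yaml]
----
# Illustrative sketch only: the Secret name and key names are assumptions,
# not authoritative values from the linked procedure.
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-secrets  # hypothetical name
stringData:
  AUTH_OIDC_CLIENT_ID: <client_id>          # from your {rhbk} client
  AUTH_OIDC_CLIENT_SECRET: <client_secret>  # from your {rhbk} client
  AUTH_OIDC_METADATA_URL: https://<rhbk_host>/realms/<realm>/.well-known/openid-configuration
----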
@@ -182,6 +177,13 @@ auth:

--

.Security consideration
If multiple valid refresh tokens are issued due to frequent refresh token requests, older tokens will remain valid until they expire. To enhance security and prevent potential misuse of older tokens, enable a refresh token rotation strategy in your {rhbk} realm.

. From the *Configure* section of the navigation menu, click *Realm Settings*.
. From the *Realm Settings* page, click the *Tokens* tab.
. From the *Refresh tokens* section of the *Tokens* tab, toggle *Revoke Refresh Token* to the *Enabled* position.
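If you prefer to script this change rather than use the Admin Console, the Keycloak Admin CLI (`kcadm.sh`) that ships with {rhbk} can set the same realm attribute. The server URL, credentials, and realm name below are placeholders:

[source,shell]
----
# Log in to the Admin CLI (server URL and credentials are placeholders)
./kcadm.sh config credentials --server https://<rhbk_host> --realm master --user admin

# Enable refresh token rotation (Revoke Refresh Token) on your realm
./kcadm.sh update realms/<your_realm> -s revokeRefreshToken=true
----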

.Verification
. Go to the {product-short} login page.
. Verify that the {product-short} sign-in page displays *Sign in using OIDC* and that the Guest user sign-in is disabled.
29 changes: 28 additions & 1 deletion modules/release-notes/ref-release-notes-known-issues.adoc
@@ -9,12 +9,39 @@ This section lists known issues in {product} {product-version}.

Currently, when deploying {product-short} using the Helm Chart, two replicas cannot run on different cluster nodes. This might also affect the upgrade from 1.3 to 1.4.0 if the new pod is scheduled on a different node.

A possible workaround for the upgrade is to manually scale down the number of replicas to 0 before upgrading your Helm release. Or manually remove the old {product-short} pod after upgrading the Helm release. However, this would imply some application downtime. You can also leverage a Pod Affinity rule to force the cluster scheduler to run your {product-short} pods on the same node.
Possible workarounds for the upgrade include the following actions:
* Manually scale down the number of replicas to 0 before upgrading your Helm release.
* Manually remove the old {product-short} pod after upgrading the Helm release. However, this would imply some application downtime.
* Leverage a Pod Affinity rule to force the cluster scheduler to run your {product-short} pods on the same node.
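As a sketch of the Pod Affinity workaround, a rule such as the following in your Helm values forces all {product-short} pods onto the same node. The `upstream.backstage.affinity` key path and the pod label are assumptions, so verify them against your chart version and pod labels:

[source,yaml]
----
# Hypothetical Helm values sketch: verify the key path and labels against your chart.
upstream:
  backstage:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: backstage  # assumed pod label
          topologyKey: kubernetes.io/hostname
----

Note that co-locating all replicas on one node trades node-failure resilience for a working multi-replica deployment.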


.Additional resources
* link:https://issues.redhat.com/browse/RHIDP-5344[RHIDP-5344]

[id="known-issue-rhidp-5342"]
== [Helm] Cannot run two RHDH replicas on different nodes due to Multi-Attach errors on the dynamic plugins root PVC

If you are deploying {product-short} using the Helm Chart, it is currently impossible to have 2 replicas running on different cluster nodes. This might also affect the upgrade from 1.3 to 1.4.0 if the new pod is scheduled on a different node.

Possible workarounds for the upgrade include the following actions:

* Manually scale down the number of replicas to 0 before upgrading your Helm release.
* Manually remove the old {product-short} pod after upgrading the Helm release. However, this would imply some application downtime.
* Leverage a Pod Affinity rule to force the cluster scheduler to run your {product-short} pods on the same node.



.Additional resources
* link:https://issues.redhat.com/browse/RHIDP-5342[RHIDP-5342]

[id="known-issue-rhidp-4695"]
== [Doc] OIDC refresh token behavior

When using {rhsso-brand-name} or {rhbk-brand-name} as an OIDC provider, the default access token lifespan is set to 5 minutes, which corresponds to the token refresh grace period set in {product-short}. This 5-minute grace period is the threshold used to trigger a new refresh token call. Because the token is always near expiration, a token refresh is triggered on nearly every API call, and the frequent refresh token requests cause performance issues.

This issue will be resolved in the 1.5 release. To prevent the performance issues, increase the lifespan in the {rhsso-brand-name} or {rhbk-brand-name} server by setting *Configure > Realm Settings > Access Token Lifespan* to a value greater than five minutes (preferably 10 or 15 minutes).
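The same setting can be scripted with the Keycloak Admin CLI (`kcadm.sh`), assuming you have already logged in with `kcadm.sh config credentials`. The realm name is a placeholder, and the lifespan value is expressed in seconds:

[source,shell]
----
# Set Access Token Lifespan to 15 minutes (900 seconds); realm name is a placeholder
./kcadm.sh update realms/<your_realm> -s accessTokenLifespan=900
----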


.Additional resources
* link:https://issues.redhat.com/browse/RHIDP-4695[RHIDP-4695]

[id="known-issue-rhidp-3396"]
== Topology plugin permission is not displayed in the RBAC front-end UI
