[ANSIENG-4742] | Fix c3 kafka listener #1985
base: 7.8.x
Conversation
… and also handling c3 to kafka communication for file based
Clubbed all three cases for C3 to Kafka into a single one. Instead of using auth_mode == mtls, used the listener name internal_token condition, as it is needed for upgrades.
No longer need to use special file-based users in C3.
- assert:
    that:
      - mds_ssl_client_authentication != 'none'
    fail_msg: "When auth mode is mtls, mds must have ssl client authentication is set to required or requested"
is it necessary for the customer to set this? why doesn't ansible set mds_ssl_client_authentication when auth_mode is set to mtls?
Two things:
- Let's say Ansible doesn't set it. Then should this validation be mds_ssl_client_authentication: required, or != none? If auth mode is mtls, then MDS can't have LDAP/OAuth, so it seems mds_ssl_client_authentication should be required in that case. As there is no fallback, requested and required behave the same way: both will fail if a cert is not given to MDS. Thus using required makes it clearer that talking to MDS without a cert will fail.
- Let's say we internally define something like mds_ssl_client_authentication: required if auth_mode == 'mtls' else none. That would mean one less config in the customer's inventory file, but making them define it explicitly would, in a way, make it clearer to them that auth mode mtls means MDS must have mTLS. So I think either option is fine; each has its own pros and cons.
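The validation being discussed can be modeled as a small predicate. This is a hypothetical Python sketch; the function name, arguments, and raise behavior are illustrative, not actual cp-ansible code:

```python
# Hypothetical model of the assert above; names are illustrative, not cp-ansible code.
def validate_mds_mtls(auth_mode: str, mds_ssl_client_authentication: str) -> None:
    """Fail fast when auth mode is mTLS but MDS will not ask clients for certs."""
    if auth_mode == "mtls" and mds_ssl_client_authentication == "none":
        raise ValueError(
            "When auth mode is mtls, mds_ssl_client_authentication must be "
            "set to 'required' or 'requested'"
        )

validate_mds_mtls("mtls", "required")   # ok: MDS demands a client cert
validate_mds_mtls("mtls", "requested")  # ok: behaves the same, since there is no fallback
```

Either way the config is resolved (customer-set or Ansible-derived), this is the invariant the assert enforces.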
roles/variables/vars/main.yml (Outdated)
@@ -1606,7 +1611,7 @@ kafka_rest_properties:
  'client.', kafka_rest_truststore_path, kafka_rest_truststore_storepass, public_certificates_enabled, kafka_rest_keystore_path, kafka_rest_keystore_storepass, kafka_rest_keystore_keypass,
  false, sasl_plain_users_final.kafka_rest.principal, sasl_plain_users_final.kafka_rest.password, sasl_scram_users_final.kafka_rest.principal, sasl_scram_users_final.kafka_rest.password, sasl_scram256_users_final.kafka_rest.principal, sasl_scram256_users_final.kafka_rest.password,
  kerberos_kafka_broker_primary, kafka_rest_keytab_path, kafka_rest_kerberos_principal|default('rp'),
- false, kafka_rest_ldap_user, kafka_rest_ldap_password, mds_bootstrap_server_urls, oauth_enabled, kafka_rest_oauth_user, kafka_rest_oauth_password, oauth_groups_scope, oauth_token_uri, idp_self_signed, false) }}"
+ kafka_rest_kafka_listener_name == 'internal_token', kafka_rest_ldap_user, kafka_rest_ldap_password, mds_bootstrap_server_urls, oauth_enabled, kafka_rest_oauth_user, kafka_rest_oauth_password, oauth_groups_scope, oauth_token_uri, idp_self_signed, false) }}"
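The one-argument change in this hunk can be illustrated with a heavily simplified stand-in for the `confluent.platform.client_properties` filter (illustrative only; the real filter takes many more parameters). The argument that used to be the literal `false` is now the expression `kafka_rest_kafka_listener_name == 'internal_token'`, gating the token-based configs:

```python
# Simplified, illustrative stand-in for the client_properties filter.
def client_properties(prefix, listener_name, mds_bootstrap_server_urls):
    # security.protocol is emitted regardless of the listener-name check.
    props = {prefix + "security.protocol": "SASL_SSL"}
    # Token configs are emitted only for the internal_token listener,
    # instead of the old auth_mode == 'mtls' style check.
    if listener_name == "internal_token":
        props[prefix + "sasl.mechanism"] = "OAUTHBEARER"
        props[prefix + "sasl.jaas.config"] = (
            "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule "
            f'required metadataServerUrls="{mds_bootstrap_server_urls}";'
        )
    return props
```

For any listener other than internal_token the token configs are simply omitted, which is what keeps upgrades working since the check no longer depends on auth_mode.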
if we are using this logic to omit oauth configs, how do we plan to add the handler and OAUTHBEARER config needed for SASL_SSL?
https://docs.confluent.io/platform/7.9/security/authorization/rbac/configure-mtls-rbac.html#configure-crest-request-forwarding-over-mtls-with-rbac
According to this, REST Proxy doesn't even need a login callback handler class. And when I ran tests like this, with no added handlers, it was still working.
Got it. And how are client.sasl.mechanism=OAUTHBEARER and client.security.protocol=SASL_SSL getting set here? Is it based on the sasl_protocol of internal_token?
security.protocol gets added before we have the section for omit_oauth_configs, so it is unaffected (it gets added as before).
sasl.mechanism is not currently getting added. Surprisingly, it still works. I'll need to understand better how it works without that property.
I confirmed with the Security team:
- We should be adding sasl.mechanism. (How it worked without it is still a mystery.) There is no harm in adding it, so I have added it.
- The handler is hard-coded in REST Proxy's code, so we don't need to supply a handler here.
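Put together, the resolved client section for the internal_token listener would look roughly like this. This is a sketch with placeholder values (the MDS URL is invented); the property names are standard Kafka client configs, and no login callback handler class appears because, per the discussion above, REST Proxy hard-codes it:

```python
# Illustrative resolved client configs for the internal_token listener
# (placeholder MDS URL; not generated by the actual filter).
resolved = {
    # Added before the omit_oauth_configs section, so unaffected by the change:
    "client.security.protocol": "SASL_SSL",
    # Now added explicitly after confirming with the Security team:
    "client.sasl.mechanism": "OAUTHBEARER",
    "client.sasl.jaas.config": (
        "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule "
        'required metadataServerUrls="https://mds:8090";'
    ),
}

# Intentionally absent: sasl.login.callback.handler.class (hard-coded in REST Proxy).
assert not any(k.endswith("callback.handler.class") for k in resolved)
```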
roles/variables/vars/main.yml (Outdated)
@@ -1643,7 +1648,7 @@ kafka_rest_properties:
  'client.confluent.monitoring.interceptor.', kafka_rest_truststore_path, kafka_rest_truststore_storepass, public_certificates_enabled, kafka_rest_keystore_path, kafka_rest_keystore_storepass, kafka_rest_keystore_keypass,
  false, sasl_plain_users_final.kafka_rest.principal, sasl_plain_users_final.kafka_rest.password, sasl_scram_users_final.kafka_rest.principal, sasl_scram_users_final.kafka_rest.password, sasl_scram256_users_final.kafka_rest.principal, sasl_scram256_users_final.kafka_rest.password,
  kerberos_kafka_broker_primary, kafka_rest_keytab_path, kafka_rest_kerberos_principal|default('rp'),
- false, kafka_rest_ldap_user, kafka_rest_ldap_password, mds_bootstrap_server_urls, oauth_enabled, kafka_rest_oauth_user, kafka_rest_oauth_password, oauth_groups_scope, oauth_token_uri, idp_self_signed, false) }}"
+ kafka_rest_kafka_listener_name == 'internal_token', kafka_rest_ldap_user, kafka_rest_ldap_password, mds_bootstrap_server_urls, oauth_enabled, kafka_rest_oauth_user, kafka_rest_oauth_password, oauth_groups_scope, oauth_token_uri, idp_self_signed, false) }}"
Do we need the SASL_SSL listener for the monitoring interceptor?
Yes, this can work on the SSL listener, but we don't support users configuring two different things talking to two different listeners. So we can either hard-code it to talk to the SSL listener or make it talk to the same listener as the client.
In case we make it talk to an mTLS-only listener, we would need to create an extra listener, because we can't rely on the internal listener: during migration it might be talking over SASL_SSL.
properties: "{{ kafka_broker_listeners[control_center_kafka_listener_name] | confluent.platform.client_properties(ssl_enabled, False, ssl_mutual_auth_enabled, sasl_protocol,
  'confluent.monitoring.interceptor.', control_center_truststore_path, control_center_truststore_storepass, public_certificates_enabled, control_center_keystore_path, control_center_keystore_storepass, control_center_keystore_keypass,
  false, sasl_plain_users_final.control_center.principal, sasl_plain_users_final.control_center.password, sasl_scram_users_final.control_center.principal, sasl_scram_users_final.control_center.password, sasl_scram256_users_final.control_center.principal, sasl_scram256_users_final.control_center.password,
  kerberos_kafka_broker_primary, control_center_keytab_path, control_center_kerberos_principal|default('c3'),
- false, control_center_ldap_user, control_center_ldap_password, mds_bootstrap_server_urls, oauth_enabled, control_center_oauth_user, control_center_oauth_password, oauth_groups_scope, oauth_token_uri, idp_self_signed, false) }}"
+ control_center_kafka_listener_name == 'internal_token', control_center_ldap_user, control_center_ldap_password, mds_bootstrap_server_urls, oauth_enabled, control_center_oauth_user, control_center_oauth_password, oauth_groups_scope, oauth_token_uri, idp_self_signed, false) }}"
Same as above: is SASL_SSL required for the monitoring interceptor? As per my understanding, the SSL listener is sufficient for the interceptor.
…tion for listener internal token but mds on ldap/oauth + mtls
…from filters instead of roles vars
plugins/filter/filters.py (Outdated)
    # This is because it is not getting added for ERP currently due to omit_oauth_configs.
    final_dict[config_prefix + 'sasl.mechanism'] = 'OAUTHBEARER'
    final_dict[config_prefix + 'sasl.jaas.config'] = 'org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required metadataServerUrls=\"' + mds_bootstrap_server_urls + '\";'
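Extracted on their own, the two assignments above behave like this (`config_prefix` and `mds_bootstrap_server_urls` are placeholder values here, not the ones the playbook would resolve):

```python
# Standalone rendering of the two filter assignments above; the prefix and
# MDS URLs are placeholder values for illustration.
config_prefix = "client."
mds_bootstrap_server_urls = "https://mds1:8090,https://mds2:8090"

final_dict = {}
final_dict[config_prefix + 'sasl.mechanism'] = 'OAUTHBEARER'
final_dict[config_prefix + 'sasl.jaas.config'] = (
    'org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule '
    'required metadataServerUrls="' + mds_bootstrap_server_urls + '";'
)
```

The JAAS line embeds the MDS bootstrap URLs directly; no login callback handler class is written, matching the earlier point that REST Proxy hard-codes it.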
Will this config also get added in REST Proxy?
yep
Description
Changing the C3 to Kafka listener from SSL to SASL_SSL in the case of SSO, using the newly introduced TokenCert Login Callback Handler.
Fixes # (issue)
Type of change
How Has This Been Tested?
This was tested by merging all PRs for RBAC mTLS Brownfield into a single branch and then running the tests on that branch.
- old: after using the new TokenCert Handler for SSO in C3
  - kraft
  - zookeeper
- new: after using the new TokenCert Handler for SSO and non-SSO in C3
  - kraft
  - zookeeper
- after updating the condition for REST Proxy's Kafka client creation
  - zookeeper
Checklist: