KAFKA-17821: the set of configs displayed by logAll could be invalid #17993

Open · wants to merge 4 commits into base: trunk
Conversation

@m1a2st (Collaborator) commented Nov 30, 2024

Jira: https://issues.apache.org/jira/browse/KAFKA-17821

Users can choose between group protocols in the consumer config. We currently print every config in the log, but some configs are unsupported by the selected protocol and only carry their default value, so printing them can mislead users.
We should improve this so that unsupported configs are not shown in the log.

Tested locally: with the CLASSIC protocol, group.remote.assignor is no longer shown.

[2024-11-30 19:25:10,460] INFO ConsumerConfig values: 
	metric.reporters = [org.apache.kafka.common.metrics.JmxReporter]
	sasl.oauthbearer.token.endpoint.url = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	retry.backoff.max.ms = 1000
	reconnect.backoff.max.ms = 1000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
	ssl.engine.factory.class = null
	sasl.oauthbearer.expected.audience = null
	ssl.keystore.type = JKS
	enable.auto.commit = false
	sasl.oauthbearer.header.urlencode = false
	interceptor.classes = []
	exclude.internal.topics = true
	ssl.truststore.password = null
	default.api.timeout.ms = 60000
	ssl.endpoint.identification.algorithm = https
	max.poll.records = 500
	check.crcs = true
	sasl.login.refresh.buffer.seconds = 300
	receive.buffer.bytes = 65536
	ssl.truststore.type = JKS
	sasl.oauthbearer.clock.skew.seconds = 30
	client.dns.lookup = use_all_dns_ips
	fetch.min.bytes = 1
	send.buffer.bytes = 131072
	sasl.oauthbearer.jwks.endpoint.url = null
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	enable.metrics.push = true
	sasl.login.retry.backoff.ms = 100
	metadata.recovery.rebootstrap.trigger.ms = 300000
	ssl.secure.random.implementation = null
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	ssl.trustmanager.algorithm = PKIX
	sasl.jaas.config = null
	sasl.kerberos.min.time.before.relogin = 60000
	connections.max.idle.ms = 540000
	session.timeout.ms = 45000
	internal.leave.group.on.close = true
	ssl.keystore.certificate.chain = null
	socket.connection.setup.timeout.ms = 10000
	ssl.provider = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.cipher.suites = null
	security.protocol = PLAINTEXT
	allow.auto.create.topics = true
	ssl.keymanager.algorithm = SunX509
	sasl.login.callback.handler.class = null
	auto.offset.reset = latest
	metadata.max.age.ms = 300000
	reconnect.backoff.ms = 50
	sasl.kerberos.ticket.renew.window.factor = 0.8
	max.partition.fetch.bytes = 1048576
	bootstrap.servers = []
	metrics.recording.level = INFO
	ssl.truststore.certificates = null
	security.providers = null
	sasl.mechanism = GSSAPI
	client.id = consumer-null-1
	request.timeout.ms = 30000
	sasl.login.retry.backoff.max.ms = 10000
	heartbeat.interval.ms = 3000
	auto.commit.interval.ms = 5000
	sasl.login.class = null
	ssl.truststore.location = null
	ssl.keystore.password = null
	fetch.max.bytes = 52428800
	max.poll.interval.ms = 300000
	group.protocol = classic
	sasl.login.connect.timeout.ms = null
	socket.connection.setup.timeout.max.ms = 30000
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.min.period.seconds = 60
	sasl.oauthbearer.scope.claim.name = scope
	group.id = null
	sasl.oauthbearer.expected.issuer = null
	sasl.login.read.timeout.ms = null
	retry.backoff.ms = 100
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	internal.throw.on.fetch.stable.offset.unsupported = false
	metadata.recovery.strategy = rebootstrap
	ssl.key.password = null
	fetch.max.wait.ms = 500
	ssl.keystore.key = null
	sasl.client.callback.handler.class = null
	metrics.num.samples = 2
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	ssl.protocol = TLSv1.3
	group.instance.id = null
	client.rack = 
	ssl.keystore.location = null
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	metrics.sample.window.ms = 30000
	isolation.level = read_uncommitted
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.login.refresh.window.jitter = 0.05
 (org.apache.kafka.common.config.AbstractConfig:380)
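The filtering described above can be sketched as a small standalone class. This is not the actual Kafka implementation; the class name, the filter body, and the set of protocol-specific keys are assumptions for illustration, with group.remote.assignor taken from the log output as the example:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class LoggingConfigFilter {

    // Hypothetical set: configs that only apply to the new CONSUMER protocol.
    // group.remote.assignor is the example shown in the log above.
    private static final Set<String> CONSUMER_PROTOCOL_ONLY =
            Set.of("group.remote.assignor");

    // Return a sorted copy of the config map with keys removed that the
    // selected group protocol does not support -- the idea behind the PR's
    // clearUnsupportedConfigsForLogging(); sorted so logAll stays ordered.
    static Map<String, Object> filterForLogging(Map<String, Object> values) {
        Map<String, Object> filtered = new TreeMap<>(values);
        if ("classic".equalsIgnoreCase(String.valueOf(values.get("group.protocol"))))
            filtered.keySet().removeAll(CONSUMER_PROTOCOL_ONLY);
        return filtered;
    }
}
```

Using a TreeMap for the copy keeps the alphabetical ordering of the logged entries, which is the point the reviewer raises below.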

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@kirktrue (Contributor) left a comment:

Thanks @m1a2st for the PR.

It appears that the overridden version of clearUnsupportedConfigsForLogging() doesn't return a TreeMap, so the entries wouldn't be sorted in the log output.

Also, are there any unit tests that could be written to verify this change?

Thanks!

Comment on lines 120 to 123

    -        if (doLog)
    -            logAll();
    +        if (doLog) {
    +            Map<String, Object> loggingConfig = clearUnsupportedConfigsForLogging(this.values);
    +            logAll(loggingConfig);
    +        }

Is it possible to invoke clearUnsupportedConfigsForLogging() inside logAll() and thus the method signature can remain as is?

            return new TreeMap<>(values);
        }

        private void logAll(Map<String, Object> values) {
            StringBuilder b = new StringBuilder();
            b.append(getClass().getSimpleName());
            b.append(" values: ");
            b.append(Utils.NL);


That way we can invoke this from inside logAll(), keeping the logging bits closer together.

Suggested change

    Map<String, Object> valuesToLog = clearUnsupportedConfigsForLogging(values);
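The suggested structure, with logAll() keeping its original no-argument signature and the filtering happening inside it, could look like the following sketch. The class name, the filter body, and the exact output format are assumptions for illustration; only the method names come from the review:

```java
import java.util.Map;
import java.util.TreeMap;

public class ConfigLogger {

    private final Map<String, Object> values;

    ConfigLogger(Map<String, Object> values) {
        this.values = values;
    }

    // Hypothetical filter standing in for the PR's method of the same name:
    // drop configs the selected protocol does not support, keep the rest sorted.
    private Map<String, Object> clearUnsupportedConfigsForLogging(Map<String, Object> values) {
        Map<String, Object> copy = new TreeMap<>(values);
        if ("classic".equals(values.get("group.protocol")))
            copy.remove("group.remote.assignor");
        return copy;
    }

    // Signature unchanged; callers still just call logAll() with no arguments.
    // Returns the message as a String here so the sketch is easy to test.
    String logAll() {
        Map<String, Object> valuesToLog = clearUnsupportedConfigsForLogging(values);
        StringBuilder b = new StringBuilder();
        b.append(getClass().getSimpleName());
        b.append(" values: ");
        b.append(System.lineSeparator());
        for (Map.Entry<String, Object> e : valuesToLog.entrySet()) {
            b.append('\t').append(e.getKey())
             .append(" = ").append(e.getValue())
             .append(System.lineSeparator());
        }
        return b.toString();
    }
}
```

This keeps the filtering and the formatting together in one place, which is the "keeping the logging bits closer together" point made below.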

@m1a2st (Collaborator, Author) commented Dec 14, 2024

Thanks for the review, @kirktrue. Addressed all comments.

github-actions bot commented:

This PR is being marked as stale since it has not had any activity in 90 days. If you would like to keep this PR alive, please leave a comment asking for a review. If the PR has merge conflicts, update it with the latest from the base branch.

If you are having difficulty finding a reviewer, please reach out on the [mailing list](https://kafka.apache.org/contact).

If this PR is no longer valid or desired, please feel free to close it. If no activity occurs in the next 30 days, it will be automatically closed.

@github-actions github-actions bot added the stale Stale PRs label Mar 15, 2025
@github-actions github-actions bot removed the stale Stale PRs label Mar 16, 2025