Description
I'm running Filebeat on a server to send logs to a Kafka output [kafka_ip:9092].
However, the Kafka cluster is in Confluent Cloud and my server can't reach the outside internet directly.
There is an HTTP proxy [HTTP_ip:80] which can be used to reach everything that is needed.
The HTTP proxy cannot be used to send the logs from Filebeat to Kafka directly, because the Kafka protocol isn't HTTP.
Therefore I was looking for another solution and found this Kafka Proxy.
The README file lists a use case with an HTTP proxy, but I'm not sure whether it can solve my problem.
I used these kafka-proxy settings:

```
kafka-proxy server \
  --bootstrap-server-mapping "[kafka_ip:9092],127.0.0.1:32501" \
  --forward-proxy http://[HTTP_ip] \
  --sasl-enable \
  --sasl-username "username" \
  --sasl-password "password" \
  --sasl-method PLAIN \
  --tls-enable \
  --proxy-listener-tls-enable \
  --proxy-listener-cert-file $path/cert.pem \
  --proxy-listener-key-file $path/cert-key.pem \
  --log-level debug
```
And this Filebeat config (part of it):

```yaml
output.kafka:
  enabled: true
  ssl.enabled: true
  # initial brokers for reading cluster metadata
  hosts: ["127.0.0.1:32501"]
  ...
```
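Since the proxy listener uses TLS with the self-signed cert.pem above, I assume the listener certificate also needs to be trusted on the Filebeat side. The snippet below is my assumption about what that would look like (`ssl.certificate_authorities` is not in my current config):

```yaml
output.kafka:
  enabled: true
  ssl.enabled: true
  hosts: ["127.0.0.1:32501"]
  # assumption: trust the kafka-proxy listener's self-signed certificate
  ssl.certificate_authorities: ["$path/cert.pem"]
```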
Kafka proxy seems to be running without any issues, and the bootstrap server can be reached via the HTTP proxy:
```
INFO[2025-03-24T12:57:46+01:00] Starting kafka-proxy version 0.4.1
INFO[2025-03-24T12:57:46+01:00] initial server certs loading
INFO[2025-03-24T12:57:46+01:00] stored x509 server certs for names [[something]]
INFO[2025-03-24T12:57:46+01:00] Bootstrap server [kafka_ip:9092] advertised as 127.0.0.1:32501
INFO[2025-03-24T12:57:46+01:00] Listening on 127.0.0.1:32501 (127.0.0.1:32501) for remote [kafka_ip:9092]
INFO[2025-03-24T12:57:46+01:00] initial client certs loading
INFO[2025-03-24T12:57:46+01:00] stored x509 client root certs, client cert []
INFO[2025-03-24T12:57:46+01:00] Kafka clients will connect through the HTTP proxy [HTTP_ip:80] using CONNECT
INFO[2025-03-24T12:57:46+01:00] Ready for new connections
INFO[2025-03-24T12:57:50+01:00] New connection for [kafka_ip:9092]
DEBU[2025-03-24T12:57:50+01:00] Sending SaslHandshakeRequest mechanism: PLAIN version: 0
DEBU[2025-03-24T12:57:50+01:00] Successful SASL handshake. Available mechanisms: [PLAIN OAUTHBEARER]
DEBU[2025-03-24T12:57:50+01:00] Sending authentication opaque packets, mechanism PLAIN
DEBU[2025-03-24T12:57:50+01:00] Kafka request key 17, version 0, length 22
DEBU[2025-03-24T12:57:50+01:00] Kafka response key 17, version 0, length 10
INFO[2025-03-24T12:57:50+01:00] Client closed local connection on 127.0.0.1:32501 from 127.0.0.1:58582 ([kafka_ip:9092])
INFO[2025-03-24T12:57:51+01:00] New connection for [kafka_ip:9092]
DEBU[2025-03-24T12:57:51+01:00] Sending SaslHandshakeRequest mechanism: PLAIN version: 0
DEBU[2025-03-24T12:57:51+01:00] Successful SASL handshake. Available mechanisms: [PLAIN OAUTHBEARER]
DEBU[2025-03-24T12:57:51+01:00] Sending authentication opaque packets, mechanism PLAIN
DEBU[2025-03-24T12:57:51+01:00] Kafka request key 17, version 0, length 22
DEBU[2025-03-24T12:57:51+01:00] Kafka response key 17, version 0, length 10
INFO[2025-03-24T12:57:51+01:00] Client closed local connection on 127.0.0.1:32501 from 127.0.0.1:58588 ([kafka_ip:9092])
```
And it continues like this on other localhost ports.
However, there are errors in the Filebeat logs and no acked libbeat output events, since the brokers cannot be reached:
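For context, the "connect through the HTTP proxy using CONNECT" log line above refers to a standard HTTP CONNECT tunnel. Conceptually the handshake amounts to the following minimal sketch (my own illustration, not kafka-proxy's actual code; host/port are placeholders):

```python
def build_connect_request(host: str, port: int) -> bytes:
    # HTTP CONNECT request asking the proxy to open a raw TCP tunnel
    # to host:port, after which Kafka protocol bytes flow through it.
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "\r\n"
    ).encode()

def tunnel_established(response: bytes) -> bool:
    # The proxy replies with a 2xx status line once the tunnel is up,
    # e.g. "HTTP/1.1 200 Connection established".
    status_line = response.split(b"\r\n", 1)[0]
    parts = status_line.split(b" ")
    return len(parts) >= 2 and parts[1].startswith(b"2")
```

So the HTTP proxy never sees Kafka traffic as HTTP; it only opens the tunnel, which is why this setup should work in principle.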
```json
{"log.level":"error","@timestamp":"2025-03-24T12:58:41.044+0100","log.logger":"kafka","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/kafka.(*client).errorWorker","file.name":"kafka/client.go","file.line":338},"message":"Kafka (topic=test_topic): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)","service.name":"filebeat","ecs.version":"1.6.0"}
```
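To pull these errors out of Filebeat's JSON logs while debugging, I use a small helper along these lines (a sketch; the field names match the entry above):

```python
import json

def kafka_errors(lines):
    # Collect messages from error-level entries emitted by the
    # "kafka" output logger in Filebeat's structured JSON log.
    out = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        if entry.get("log.level") == "error" and entry.get("log.logger") == "kafka":
            out.append(entry.get("message", ""))
    return out
```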
My question: is it even possible to build such a setup, using Filebeat, Kafka Proxy and an HTTP proxy, for sending logs to Kafka? If yes, what am I doing wrong?
Or do I need to use just the Kafka proxy with the HTTP proxy?
Thanks