
Releases: rabbitmq/rabbitmq-stream-go-client

v1.5.5

14 May 12:27 (commit aad983b)

Version 1.5.5

Please read before updating.

What's Changed

Bug Fixes

  • Fix producer reconnection deadlock by @yurahaid in #394
  • Do not panic during chunk dispatching if the consumer is suddenly closed by @rsperl in #393

Dependency Updates

New Contributors

Full Changelog: v1.5.4...v1.5.5

v1.5.4

07 May 08:27 (commit fa4a8a1)

Version 1.5.4

Please read before updating.

What's Changed

Bug Fixes

  • Fix handling and storing of offsets by timestamp by @yurahaid in #392 (see the consumer sketch below)
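
For reference, a consumer can attach to a stream at a point in time and combine that with server-side offset tracking. The sketch below follows the client's documented Environment/NewConsumer API; the stream and consumer names are illustrative, and the exact OffsetSpecification helpers should be verified against pkg/stream.

package main

import (
    "time"

    "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
    "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
    // Connect with the default options (localhost:5552, guest/guest).
    env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
    if err != nil {
        panic(err)
    }
    defer env.Close()

    // Start consuming from messages stored after a point in time.
    // Timestamp offsets are expressed in milliseconds since the Unix epoch.
    since := time.Now().Add(-1 * time.Hour).UnixMilli()
    consumer, err := env.NewConsumer("my-stream",
        func(ctx stream.ConsumerContext, msg *amqp.Message) {
            // Handle the message; ctx.Consumer.GetOffset() returns the current
            // offset, which can be stored for the named consumer.
        },
        stream.NewConsumerOptions().
            SetConsumerName("my-consumer"). // required for server-side offset tracking
            SetOffset(stream.OffsetSpecification{}.Timestamp(since)))
    if err != nil {
        panic(err)
    }
    defer consumer.Close()

    time.Sleep(10 * time.Second) // consume for a short while in this sketch
}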

Dependency Updates

New Contributors

Full Changelog: v1.5.3...v1.5.4

v1.5.3

15 Apr 14:02 (commit 75a6cd7)

Version 1.5.3

Please read before updating.

What's Changed

Enhancements

Full Changelog: v1.5.2...v1.5.3

v1.5.2

08 Apr 12:41 (commit 1bd0191)

Version 1.5.2

Please read before updating.

What's Changed

Bug Fixes

Full Changelog: v1.5.1...v1.5.2

v1.5.1

01 Apr 14:24 (commit e19fc7b)

Version 1.5.1

Please read before updating.

What's Changed

  • Expose the StoreOffset API on the Environment in #385 by @Gsantomaggio (see the sketch below)
  • Update the Go version to 1.23
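
A minimal sketch of what the exposed API enables: storing and querying an offset for a named consumer directly through the Environment, without holding an open Consumer. The signatures shown (a consumer name, a stream name, and an offset) are assumptions mirroring the existing QueryOffset convention and should be verified against pkg/stream; the names are illustrative.

package main

import (
    "fmt"

    "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
    env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
    if err != nil {
        panic(err)
    }
    defer env.Close()

    // Assumed signature: StoreOffset(consumerName, streamName string, offset int64) error.
    // Persist an offset for a named consumer without opening a Consumer.
    if err := env.StoreOffset("my-consumer", "my-stream", 42); err != nil {
        panic(err)
    }

    // Assumed signature: QueryOffset(consumerName, streamName string) (int64, error).
    // Read it back later, e.g. to resume consumption where it stopped.
    offset, err := env.QueryOffset("my-consumer", "my-stream")
    if err != nil {
        panic(err)
    }
    fmt.Printf("stored offset: %d\n", offset)
}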

Dependency Updates

Full Changelog: v1.5.0...v1.5.1

v1.5.0

05 Feb 13:29 (commit 4f1b90b)

Version 1.5

What's Changed

Please read before updating.

This version focuses on stability during reconnection and introduces dynamic send.
There are no breaking changes, but there are deprecations:

  • The BatchPublishingDelay (int) setting is no longer used.

Dynamic send

Dynamic send improves latency when traffic is low; for example, at 50 messages per second the latency is ~3 ms:

go run perftest.go --rate 50 --async-send
Published     42.9 msg/s | Confirmed     42.9 msg/s |  Consumed     42.9 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     43.8 msg/s | Confirmed     43.8 msg/s |  Consumed     43.8 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     44.4 msg/s | Confirmed     44.4 msg/s |  Consumed     44.4 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     45.0 msg/s | Confirmed     45.0 msg/s |  Consumed     45.0 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 2 ms

With 1.4.x it is ~90 ms (the aggregation timeout):

go run perftest.go --rate 50 --async-send
Published     44.4 msg/s | Confirmed     44.4 msg/s |  Consumed     44.4 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 88 ms
Published     45.0 msg/s | Confirmed     45.0 msg/s |  Consumed     45.0 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 89 ms
Published     45.5 msg/s | Confirmed     45.5 msg/s |  Consumed     45.5 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 90 ms
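
In practice, dynamic send means messages passed to the asynchronous Send are aggregated and flushed according to traffic instead of waiting for a fixed BatchPublishingDelay. A minimal producer sketch following the client's documented Environment/Producer API (the stream name is illustrative):

package main

import (
    "fmt"

    "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
    "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
    env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
    if err != nil {
        panic(err)
    }
    defer env.Close()

    // Create the stream if it does not exist yet.
    err = env.DeclareStream("my-stream",
        &stream.StreamOptions{MaxLengthBytes: stream.ByteCapacity{}.GB(2)})
    if err != nil {
        panic(err)
    }

    producer, err := env.NewProducer("my-stream", stream.NewProducerOptions())
    if err != nil {
        panic(err)
    }
    defer producer.Close()

    // Publish confirmations arrive asynchronously on a channel.
    confirmations := producer.NotifyPublishConfirmation()
    go func() {
        for batch := range confirmations {
            for _, status := range batch {
                if !status.IsConfirmed() {
                    fmt.Println("a message was not confirmed")
                }
            }
        }
    }()

    // Send aggregates messages dynamically: at low rates they are flushed almost
    // immediately instead of waiting for the old fixed BatchPublishingDelay.
    for i := 0; i < 100; i++ {
        if err := producer.Send(amqp.NewMessage([]byte(fmt.Sprintf("message %d", i)))); err != nil {
            panic(err)
        }
    }
}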

Enhancements

Bug Fixes

Thanks a lot to @hiimjako for helping with this version

Full Changelog: v1.5.0-rc.1...v1.5.0

v1.5.0-rc.1

27 Jan 10:15 (commit 83553a8)
Pre-release

Version 1.5

What's Changed

Please read before updating.

This version focuses on stability during reconnection and introduces dynamic send.
There are no breaking changes, but there are deprecations:

  • The BatchPublishingDelay (int) setting is no longer used.

Dynamic send

Dynamic send improves latency when traffic is low; for example, at 50 messages per second the latency is ~3 ms:

go run perftest.go --rate 50 --async-send
Published     42.9 msg/s | Confirmed     42.9 msg/s |  Consumed     42.9 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     43.8 msg/s | Confirmed     43.8 msg/s |  Consumed     43.8 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     44.4 msg/s | Confirmed     44.4 msg/s |  Consumed     44.4 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     45.0 msg/s | Confirmed     45.0 msg/s |  Consumed     45.0 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 2 ms

With 1.4.x it is ~90 ms (the aggregation timeout):

go run perftest.go --rate 50 --async-send
Published     44.4 msg/s | Confirmed     44.4 msg/s |  Consumed     44.4 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 88 ms
Published     45.0 msg/s | Confirmed     45.0 msg/s |  Consumed     45.0 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 89 ms
Published     45.5 msg/s | Confirmed     45.5 msg/s |  Consumed     45.5 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 90 ms

Enhancements

Bug Fixes

Thanks a lot to @hiimjako for helping with this version

Full Changelog: v1.5.0-beta.1...v1.5.0-rc.1

v1.5.0-beta.1

14 Jan 14:32 (commit d52e281)
Pre-release

Version 1.5

What's Changed

Please read before updating.

This version focuses on stability during reconnection and introduces dynamic send.
There are no breaking changes, but there are deprecations:

  • The BatchPublishingDelay (int) setting is no longer used.
  • The QueueSize setting is no longer used.

Dynamic send

Dynamic send improves latency when traffic is low; for example, at 50 messages per second the latency is ~3 ms:

go run perftest.go --rate 50 --async-send
Published     42.9 msg/s | Confirmed     42.9 msg/s |  Consumed     42.9 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     43.8 msg/s | Confirmed     43.8 msg/s |  Consumed     43.8 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     44.4 msg/s | Confirmed     44.4 msg/s |  Consumed     44.4 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 3 ms
Published     45.0 msg/s | Confirmed     45.0 msg/s |  Consumed     45.0 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 2 ms

With 1.4.x it is ~90 ms (the aggregation timeout):

go run perftest.go --rate 50 --async-send
Published     44.4 msg/s | Confirmed     44.4 msg/s |  Consumed     44.4 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 88 ms
Published     45.0 msg/s | Confirmed     45.0 msg/s |  Consumed     45.0 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 89 ms
Published     45.5 msg/s | Confirmed     45.5 msg/s |  Consumed     45.5 msg/s |  Rate Fx: 50 | Body sz: 8 | latency: 90 ms

Enhancements

Bug Fixes

Thanks a lot to @hiimjako for helping with this version

Full Changelog: v1.4.11...v1.5.0-beta.1

v1.4.11

02 Dec 08:49 (commit ab4d470)

What's Changed

Please read before updating.

This version focuses on performance for BatchSend and the consumer side.
We changed the default TCP parameters; see below. The old parameters can be restored with SetReadBuffer(65536) and SetNoDelay(false), as in the sketch after this paragraph.
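
A minimal sketch of restoring the previous behaviour. It assumes the SetReadBuffer and SetNoDelay setters named above are exposed on the environment options builder; where exactly they live should be verified against the current EnvironmentOptions API.

package main

import (
    "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
    // Assumption: the TCP setters hang off the environment options builder;
    // check pkg/stream for their exact location in this release.
    env, err := stream.NewEnvironment(
        stream.NewEnvironmentOptions().
            SetReadBuffer(65536). // pre-1.4.11 read buffer (new default: 8192)
            SetNoDelay(false))    // re-enable Nagle's algorithm (1.4.11 disables it)
    if err != nil {
        panic(err)
    }
    defer env.Close()
}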

Enhancements

  • Improve BatchSend performance #366
  • Add latency information to perftest #363
  • Restore the perfTest Docker build image: pivotalrabbitmq/go-stream-perf-test
  • Change the default TCP read and write buffer size to 8192 and disable Nagle's algorithm

Bug Fix

  • Adapt the heartbeat checker to the configuration #361

Full Changelog: v1.4.10...v1.4.11

v1.4.10

27 Sep 09:06 (commit 9318b94)

What's Changed

Enhancements

Full Changelog: v1.4.9...v1.4.10