
Commit d2114d0

elf4j version bump

1 parent fb92334 commit d2114d0

File tree

1 file changed: +62 -62 lines

README.md

Lines changed: 62 additions & 62 deletions
@@ -1,9 +1,9 @@
-[![](https://img.shields.io/static/v1?label=github&message=repo&color=blue)](https://github.com/q3769/chunk4j)
+# chunk4j

A Java API to chop up larger data blobs into smaller "chunks" of a pre-defined size, and stitch the chunks back together
to restore the original data when needed.

-# User story
+## User story

As a user of the chunk4j API, I want to chop a data blob (bytes) into smaller pieces of a pre-defined size and, when
needed, restore the original data by stitching the pieces back together.

@@ -21,112 +21,112 @@ Notes:
the data entries being transported at run-time will, by their intrinsic nature, never go beyond the default or
customized limit.

-# Prerequisite
+## Prerequisite

Java 8 or better

-# Get it...
+## Get it...

[![Maven Central](https://img.shields.io/maven-central/v/io.github.q3769/chunk4j.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22io.github.q3769%22%20AND%20a:%22chunk4j%22)

-# Use it...
+## Use it...

- The implementation of the chunk4j API is thread-safe.
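
For instance, because the implementations are thread-safe, a single `Chopper` (or `Stitcher`) instance can be shared
across threads. A minimal sketch, assuming the `ChunkChopper.ofByteSize` factory shown in the usage example below:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedChopperDemo {

    // One chopper instance, safely shared by all worker threads.
    private static final Chopper CHOPPER = ChunkChopper.ofByteSize(1024);

    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            byte[] blob = ("payload-" + i).getBytes();
            workers.submit(() -> {
                List<Chunk> group = CHOPPER.chop(blob);
                System.out.println(Thread.currentThread().getName() + " chopped " + group.size() + " chunk(s)");
            });
        }
        workers.shutdown();
    }
}
```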

-## The Chopper
+### The Chopper

-### API:
+#### API:

```java
public interface Chopper {

    /**
     * @param bytes the original data blob to be chopped into chunks
     * @return the group of chunks which the original data blob is chopped into. Each chunk carries a portion of the
     * original bytes; and the size of that portion has a pre-configured maximum (a.k.a. the {@code Chunk}'s
     * capacity). Thus, if the size of the original bytes is smaller than or equal to the chunk's capacity, then the
     * returned chunk group will have only one chunk element.
     */
    List<Chunk> chop(byte[] bytes);
}
```

A larger blob of data can be chopped up into smaller "chunks" to form a "group". When needed, often on a different
network node, the group of chunks can be collectively stitched back together to restore the original data.

-### Usage example:
+#### Usage example:

```java
public class MessageProducer {

-    private Chopper chopper = ChunkChopper.ofByteSize(1024); // each chopped off chunk holds up to 1024 bytes
+    private final Chopper chopper = ChunkChopper.ofByteSize(1024); // each chopped off chunk holds up to 1024 bytes

    @Autowired private MessagingTransport transport;

    /**
     * Sender method of business data
     */
    public void sendBusinessDomainData(String domainDataText) {
        chopper.chop(domainDataText.getBytes()).forEach((chunk) -> transport.send(toMessage(chunk)));
    }

    /**
     * pack/serialize/marshal the chunk POJO into a transport-specific message
     */
    private Message toMessage(Chunk chunk) {
        //...
    }
}
```

On the `Chopper` side, you only need to specify how large each chopped chunk may be. The chopper internally divides up
the original data bytes based on the chunk size you specified, and assigns the same unique group ID to all the chunks
that together represent the original data unit.
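
As an illustration of the group semantics described above, here is a minimal sketch; it assumes the
`ChunkChopper.ofByteSize` factory from the usage example and Lombok-generated getters for the `Chunk` fields shown in
the next section:

```java
import java.util.List;

public class ChopDemo {

    public static void main(String[] args) {
        Chopper chopper = ChunkChopper.ofByteSize(1024); // capacity: 1024 bytes per chunk
        byte[] blob = new byte[3000];                    // 3000 bytes -> expect 3 chunks

        List<Chunk> group = chopper.chop(blob);
        System.out.println("chunks in group: " + group.size()); // 3

        // All chunks of one blob share the same group ID and are indexed sequentially
        // (assumes Lombok-generated getters on the Chunk fields shown in the next section).
        group.forEach(chunk -> System.out.println(
                chunk.getGroupId() + " #" + chunk.getIndex() + " of " + chunk.getGroupSize()));
    }
}
```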

-## The Chunk
+### The Chunk

-### API:
+#### API:

```java
public class Chunk implements Serializable {

    private static final long serialVersionUID = 42L;

    /**
     * The group ID of the original data blob. All chunks in the same group share the same group ID.
     */
    @EqualsAndHashCode.Include UUID groupId;

    /**
     * Ordered index at which this current chunk is positioned inside the group. Chunks are chopped off from the
     * original data bytes in sequential order, indexed as such, and assigned the same group ID as all other chunks
     * in the group that represents the original data bytes.
     */
    @EqualsAndHashCode.Include int index;

    /**
     * Total number of chunks into which the original data blob is chopped to form the group.
     */
    int groupSize;

    /**
     * Data bytes chopped for this current chunk to hold.
     */
    byte[] bytes;
}
```

-### Usage example:
+#### Usage example:

Chunk4J aims to handle most details of the `Chunk` behind the scenes of the `Chopper` and `Stitcher` API. For the API
client, it suffices to know that `Chunk` is a simple, serializable POJO data holder; it carries the data bytes
traveling from the `Chopper` to the `Stitcher`. To transport Chunks over the network, the API client only needs to
pack the Chunk into a transport-specific message on the Chopper's end and unpack the message back into a Chunk on the
Stitcher's end, using whatever POJO marshal/unmarshal technique the transport supports.
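
Because `Chunk` implements `Serializable`, plain Java serialization is one workable marshal/unmarshal technique. The
sketch below is illustrative only; any codec your transport supports (JSON, Protobuf, Kryo, etc.) works just as well:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class ChunkCodec {

    /** Marshals a chunk into transport-ready bytes on the Chopper's end. */
    static byte[] toBytes(Chunk chunk) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(chunk);
        }
        return bos.toByteArray();
    }

    /** Unmarshals the received message bytes back into a Chunk on the Stitcher's end. */
    static Chunk fromBytes(byte[] message) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(message))) {
            return (Chunk) in.readObject();
        }
    }
}
```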

-## The Stitcher
+### The Stitcher

-### API:
+#### API:

```java
public interface Stitcher {
@@ -147,7 +147,7 @@ public interface Stitcher {

On the stitcher side, a group must gather all the previously chopped chunks before the original data blob represented by
this group can be stitched back together and restored.

-### Usage example:
+#### Usage example:

```java
public class MessageConsumer {
@@ -219,15 +219,15 @@ This stitcher is customized by a combination of both aspects:
new ChunkStitcher.Builder().maxStitchTime(Duration.ofSeconds(5)).maxStitchingGroups(100).build()
```
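
For the consuming side as a whole, the following sketch assumes, since the full interface is not shown in this hunk,
that `Stitcher` exposes an `Optional<byte[]> stitch(Chunk chunk)` method returning the restored blob once the chunk's
group is complete, and that `new ChunkStitcher.Builder().build()` yields a stitcher with default settings:

```java
import java.util.Optional;

public class StitchDemo {

    private final Stitcher stitcher = new ChunkStitcher.Builder().build(); // assumed default build

    /** Feeds each arriving chunk to the stitcher; acts on the original data only when its group completes. */
    public void onChunkReceived(Chunk chunk) {
        Optional<byte[]> restored = stitcher.stitch(chunk); // assumed signature
        restored.ifPresent(originalBytes ->
                System.out.println("restored original blob of " + originalBytes.length + " bytes"));
    }
}
```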

-## Hints on using chunk4j API in messaging
+### Hints on using chunk4j API in messaging

-### Chunk size/capacity
+#### Chunk size/capacity

Chunk4J works on the application layer of the network (Layer 7). Serializing an entire Chunk object adds a small,
fixed-size overhead on top of the chunk's payload bytes. Take all such overhead into account so that the **overall**
message size stays under the transport limit.
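
One way to gauge that overhead is to serialize a full chunk and compare sizes. The sketch below uses plain Java
serialization; the measured number will differ with the marshalling technique your transport actually uses:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.List;

public class ChunkOverheadProbe {

    public static void main(String[] args) throws IOException {
        Chopper chopper = ChunkChopper.ofByteSize(1024);
        List<Chunk> group = chopper.chop(new byte[1024]); // exactly one full 1024-byte chunk

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(group.get(0));
        }
        // Serialized size minus payload size approximates the per-chunk overhead for this codec.
        System.out.println("per-chunk overhead ~= " + (bos.size() - 1024) + " bytes");
    }
}
```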

-### Message acknowledgment/commit
+#### Message acknowledgment/commit

When working with a messaging provider, you want to acknowledge/commit all the messages of an entire group of chunks in
an all-or-nothing fashion, e.g. by using the individual and explicit commit mechanism. The all-or-nothing group commits
