Capture a bunch of changes I had sitting around for a while. Fixes a bug
in the mdbook, but otherwise no functional changes.
Signed-off-by: Moritz Hoffmann <[email protected]>
README.md (+6 -10)
@@ -10,16 +10,14 @@ Be sure to read the [documentation for timely dataflow](https://docs.rs/timely).
To use timely dataflow, add the following to the dependencies section of your project's `Cargo.toml` file:
-```
+```toml
[dependencies]
timely = "*"
```
This will bring in the [`timely` crate](https://crates.io/crates/timely) from [crates.io](http://crates.io), which should allow you to start writing timely dataflow programs like this one (also available in [timely/examples/simple.rs](https://github.com/timelydataflow/timely-dataflow/blob/master/timely/examples/simple.rs)):
```rust
-extern crate timely;
-
use timely::dataflow::operators::*;
fn main() {
@@ -32,7 +30,7 @@ fn main() {
You can run this example from the root directory of the `timely-dataflow` repository by typing
-```
+```text
% cargo run --example simple
Running `target/debug/examples/simple`
seen: 0
@@ -54,8 +52,6 @@ This is a very simple example (it's in the name), which only just suggests at ho
For a more involved example, consider the very similar (but more explicit) [examples/hello.rs](https://github.com/timelydataflow/timely-dataflow/blob/master/timely/examples/hello.rs), which creates and drives the dataflow separately:
@@ -96,7 +92,7 @@ We first build a dataflow graph creating an input stream (with `input_from`), wh
We then drive the computation by repeatedly introducing rounds of data, where the `round` itself is used as the data. In each round, each worker introduces the same data, and then repeatedly takes dataflow steps until the `probe` reveals that all workers have processed all work for that epoch, at which point the computation proceeds.
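As a rough sketch of that drive loop (modeled on the hello example this paragraph describes; the exact operator chain and types here are assumptions, not a verbatim copy of `examples/hello.rs`):

```rust
extern crate timely;

use timely::dataflow::{InputHandle, ProbeHandle};
use timely::dataflow::operators::{Input, Exchange, Inspect, Probe};

fn main() {
    timely::execute_from_args(std::env::args(), |worker| {
        let index = worker.index();
        let mut input = InputHandle::new();
        let mut probe = ProbeHandle::new();

        // Build the dataflow: read from the input, exchange records among
        // workers, print what each worker sees, and attach a probe.
        worker.dataflow(|scope| {
            scope.input_from(&mut input)
                 .exchange(|x| *x)
                 .inspect(move |x| println!("worker {}:\thello {}", index, x))
                 .probe_with(&mut probe);
        });

        // Drive the computation: in each round, every worker introduces `round`
        // as data, then steps the worker until the probe shows that the epoch
        // has been fully processed.
        for round in 0..10 {
            input.send(round);
            input.advance_to(round + 1);
            while probe.less_than(input.time()) {
                worker.step();
            }
        }
    }).unwrap();
}
```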
With two workers, the output looks like
-```
+```text
% cargo run --example hello -- -w2
Running `target/debug/examples/hello -w2`
worker 0: hello 0
@@ -120,7 +116,7 @@ The `hello.rs` program above will by default use a single worker thread. To use
To use multiple processes, you will need to use the `-h` or `--hostfile` option to specify a text file whose lines are `hostname:port` entries corresponding to the locations you plan on spawning the processes. You will need to use the `-n` or `--processes` argument to indicate how many processes you will spawn (a prefix of the host file), and each process must use the `-p` or `--process` argument to indicate their index out of this number.
Said differently, you want a hostfile that looks like so,
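Purely as an illustration of the `hostname:port` format described above (these hosts and ports are hypothetical, not taken from the repository):

```text
host0.example.com:2101
host1.example.com:2101
host2.example.com:2101
```

A process launched on the first host would then pass `-h` pointing at this file together with `-n 3 -p 0`, the process on the second host `-p 1`, and so on.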
@@ -187,7 +183,7 @@ The communication layer is based on a type `Content<T>` which can be backed by t
**NOTE**: Differential dataflow demonstrates how to do this at the user level in its `operators/arrange.rs`, if somewhat sketchily (with a wrapper that lies about the properties of the type it transports).
-This would allow us to safely pass Rc<T> types around, as long as we use the `Pipeline` parallelization contract.
+This would allow us to safely pass `Rc<T>` types around, as long as we use the `Pipeline` parallelization contract.
This program gives us a bit of a flavor for what a timely dataflow program might look like, including a bit of what Rust looks like, without getting too bogged down in weird stream processing details. Not to worry; we will do that in just a moment!
@@ -39,9 +37,7 @@ If we run the program up above, we see it print out the numbers zero through nin
This isn't very different from a Rust program that would do this much more simply, namely the program
```rust
-fn main() {
-    (0..10).for_each(|x| println!("seen: {:?}", x));
-}
+(0..10).for_each(|x| println!("seen: {:?}", x));
```
-Why would we want to make our life so complicated? The main reason is that we can make our program *reactive*, so that we can run it without knowing ahead of time the data we will use, and it will respond as we produce new data.
+Why would we want to make our life so complicated? The main reason is that we can make our program *reactive*, so that we can run it without knowing ahead of time the data we will use, and it will respond as we produce new data.
We can run this program in a variety of configurations: with just a single worker thread, with one process and multiple worker threads, and with multiple processes each with multiple worker threads.
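To make those configurations concrete, here is roughly what they look like on the command line, using the `hello` example as a stand-in and the `-w`, `-n`, `-p`, and `-h` flags described in the README (the hostfile name is hypothetical):

```text
# one process, a single worker thread (the default)
cargo run --example hello

# one process, two worker threads
cargo run --example hello -- -w2

# two processes with two worker threads each; run one command on each host
cargo run --example hello -- -w2 -n2 -p0 -h hosts.txt
cargo run --example hello -- -w2 -n2 -p1 -h hosts.txt
```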
mdbook/src/chapter_4/chapter_4_4.md (+4 -4)
@@ -114,7 +114,7 @@ We can check out the examples `examples/capture_send.rs` and `examples/capture_r
The `capture_send` example creates a new TCP connection for each worker, which it wraps and uses as an `EventPusher`. Timely dataflow takes care of all the serialization and stuff like that (warning: it uses abomonation, so this is not great for long-term storage).
-```rust,ignore
+```rust,no_run
extern crate timely;
use std::net::TcpStream;
@@ -138,7 +138,7 @@ fn main() {
The `capture_recv` example is more complicated, because we may have a different number of workers replaying the stream than initially captured it.
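The visible hunk ends here. For context, replaying a captured stream looks roughly like the following sketch, assuming the `EventReader`/`Replay` API from `timely::dataflow::operators::capture`; the port scheme and the `u64` timestamp and data types are assumptions, not the verbatim contents of `examples/capture_recv.rs`:

```rust
extern crate timely;

use std::net::TcpListener;
use timely::dataflow::operators::Inspect;
use timely::dataflow::operators::capture::{EventReader, Replay};

fn main() {
    timely::execute_from_args(std::env::args(), |worker| {
        // Accept one captured stream per replaying worker; the port choice is illustrative.
        let listener = TcpListener::bind(format!("127.0.0.1:{}", 8000 + worker.index())).unwrap();
        // The timestamp and data types must match what was captured; u64 is an assumption here.
        let reader = EventReader::<u64, u64, _>::new(listener.incoming().next().unwrap().unwrap());

        worker.dataflow::<u64, _, _>(|scope| {
            // Replay the captured events into a new dataflow and observe them.
            Some(reader)
                .replay_into(scope)
                .inspect(|x| println!("replayed: {:?}", x));
        });
    }).unwrap();
}
```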