dev-diary.txt
running recap
useful libraries:
* serde_scan
useful data structures i hadn't touched in a while:
* deque
TODOs:
* read through std lib docs
* watch all of jon gjengset's videos
* read rust by example
* read nomicon
* read source code for a rust web server
* read https://github.com/bheisler/Corrosion
* implement a doubly linked list that lets you insert in the middle and pop off the middle (with unsafe?)
* really experiment with debugging
ideas for a program to write after this:
* port quinto to rust and compare perf? (might require a react-like rust lib to replace reagent, does one exist?)
* look at the `battery` program and figure out how it generates its terminal graphs, come up with something to graph on the pi
* irc bot (good excuse to try out tokio)
* profiler (????)
* flocking simulator (?)
===
2/4/19
at long last, chapter 13 is behind me and i'm about ready to start
reading through https://doc.rust-lang.org/std/iter/trait.Iterator.html before i begin
the bit about using collect() to collect a list of Results into a Result of lists is interesting
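quick sketch of that collect()-into-Result trick so i remember it (toy data, not from the puzzles): collecting an iterator of Results into a Result<Vec<_>, _> short-circuits on the first Err.

```rust
fn main() {
    // every parse succeeds -> Ok(vec of all values)
    let ok: Result<Vec<i32>, String> = vec!["1", "2", "3"]
        .into_iter()
        .map(|s| s.parse::<i32>().map_err(|e| e.to_string()))
        .collect();
    assert_eq!(ok, Ok(vec![1, 2, 3]));

    // one parse fails -> the whole collection is that Err
    let err: Result<Vec<i32>, String> = vec!["1", "x", "3"]
        .into_iter()
        .map(|s| s.parse::<i32>().map_err(|e| e.to_string()))
        .collect();
    assert!(err.is_err());
}
```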
btw here are the vim keybinds i've set up that i need to remember
gd - go to definition
gs - go to definition in a split
,gd - look up documentation
C-x C-o - completion (could this work with C-p instead?)
useful-looking methods aside from the usual FP ones:
* partition
* inspect
* filter_map
* by_ref
* all, any
* find
* find_map
* position, rposition
* max_by_key, max_by, min_by_key, min_by
* rev
* cloned
things i know by different names:
* `flat_map` is clj `mapcat`
* `scan` is clj `reductions`
* `skip` is clj `drop`
* `fold` is clj `reduce`
* `partition` is yelp_lib.iteration `winnow`
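a couple of the renamed ones in action, just to cement the mapping (toy example i made up):

```rust
fn main() {
    // `scan` ~ clj `reductions`: emits the running state at each step
    let running_sums: Vec<i32> = [1, 2, 3, 4]
        .iter()
        .scan(0, |acc, &x| {
            *acc += x;
            Some(*acc)
        })
        .collect();
    assert_eq!(running_sums, vec![1, 3, 6, 10]);

    // `partition` ~ winnow: split into (matches, non-matches) by predicate
    let (evens, odds): (Vec<i32>, Vec<i32>) = (1..=6).partition(|n| n % 2 == 0);
    assert_eq!(evens, vec![2, 4, 6]);
    assert_eq!(odds, vec![1, 3, 5]);
}
```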
things to watch out for:
some of these methods (particularly map, filter, take_while, skip_while, etc)
have this in their documentation (copy-pasting follows):
Because the closure passed to filter() takes a reference, and many iterators iterate over
references, this leads to a possibly confusing situation, where the type of the closure is a double reference.
```
let a = [0, 1, 2];
let mut iter = a.into_iter().filter(|x| **x > 1); // need two *s!
assert_eq!(iter.next(), Some(&2));
assert_eq!(iter.next(), None);
```
so let's just keep an eye out for that.
TODO read through https://doc.rust-lang.org/std/index.html
ok here goes
1a: sum all the numbers
pretty straightforward
1b is straightforward too but i'm confused about this:
let set: HashSet<_> = [1, 2, 3].iter().cloned().collect();
assert_eq!(set.contains(&1), true);
assert_eq!(set.contains(&4), false);
why does set.contains(&1) have to have the &?
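(answering my own question: contains is declared as `fn contains<Q>(&self, value: &Q) -> bool where T: Borrow<Q>` - it only needs to borrow the value to look it up, not take ownership of it, hence the `&`.)

```rust
use std::collections::HashSet;

fn main() {
    let set: HashSet<i32> = [1, 2, 3].iter().cloned().collect();
    // contains() borrows its argument rather than consuming it,
    // so we pass &1, and 1 stays usable afterwards
    assert!(set.contains(&1));
    assert!(!set.contains(&4));
}
```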
https://www.forrestthewoods.com/blog/learning-rust-via-advent-of-code/
has a good list of iterator methods and useful crates
mentions https://crates.io/crates/nom and https://crates.io/crates/pest as being
possibly useful for day24
recommends https://github.com/Amanieu/hashbrown
let's try it out!
2x speedup!!
oh wow nice i hadn't tried building in release mode - 0.01 seconds for the whole thing atm
ok so i went through a few rounds of changes for 2b
originally i had this
fn two_b() -> String {
    let contents = fs::read_to_string("src/inputs/2.txt").unwrap();
    let lines: Vec<&str> = contents.lines().collect();
    for (i, line) in lines.iter().enumerate() {
        for other_line in lines.iter().skip(i) {
            let diff_positions = differing_character_positions(line, other_line);
            if diff_positions.iter().count() == 1 {
                let mut ret = String::new();
                for (i, character) in line.chars().enumerate() {
                    if i != diff_positions[0] {
                        ret.push(character);
                    }
                }
                return ret;
            }
        }
    }
    "unreachable".to_string()
}
then i went crazy with iterator methods for fun and got this
fn two_b() -> String {
    let contents = fs::read_to_string("src/inputs/2.txt").unwrap();
    let lines: Vec<&str> = contents.lines().collect();
    let (box_a, box_b) = lines
        .iter()
        .enumerate()
        .flat_map(|(i, line)| {
            lines
                .iter()
                .skip(i)
                .map(move |other_line| (line, other_line))
        })
        .find(|(line, other_line)| {
            differing_character_positions(line, other_line)
                .iter()
                .count()
                == 1
        })
        .unwrap();
    let differing_index = differing_character_positions(box_a, box_b)[0];
    let mut ret = String::new();
    for (i, character) in box_a.chars().enumerate() {
        if i != differing_index {
            ret.push(character);
        }
    }
    return ret;
}
but i was sure that there must be some way to improve that - in particular, the bit where
i was like trying to zip lines against itself (or get, like, a one-way cartesian product?)
felt like it must be implemented in itertools somewhere
and lo and behold, .combinations(2) does exactly what i wanted
except that it puts things in a Vec, so i have to put the two strings into a tuple
so i can destructure them later
and then we get this
fn two_b() -> String {
    let contents = fs::read_to_string("src/inputs/2.txt").unwrap();
    let lines: Vec<&str> = contents.lines().collect();
    let (box_a, box_b) = lines
        .iter()
        .combinations(2)
        .map(|pair| (pair[0], pair[1]))
        .find(|(box_a, box_b)| {
            differing_character_positions(box_a, box_b)
                .iter()
                .count()
                == 1
        })
        .unwrap();
    let differing_index = differing_character_positions(box_a, box_b)[0];
    let mut ret = String::new();
    for (i, character) in box_a.chars().enumerate() {
        if i != differing_index {
            ret.push(character);
        }
    }
    return ret;
}
i still feel like the bit at the end where we make a string that's got all the chars
except for one at a particular index must be improvable somehow, but it's not obvious to me how
eh, it turns out this does the trick:
box_a
    .chars()
    .enumerate()
    .filter(|(i, _)| *i != differing_index)
    .map(|(_, character)| character)
    .collect::<String>()
but the forloop approach is simpler so i'll stick with that.
======
2/5/19
3a+3b were easy
starting 4a - first challenge is to sort the log entries chronologically
https://crates.io/crates/chrono is apparently a good crate for time stuff
also, HN says:
`
For sorting sequential fields you can chain comparisons with .then():
struct Date {
    year: u32,
    month: u32,
    day: u32,
}

let mut vec: Vec<Date> = Vec::new();
vec.sort_by(|a, b| {
    a.year.cmp(&b.year)
        .then(a.month.cmp(&b.month))
        .then(a.day.cmp(&b.day))
});
Alternatively you can derive PartialEq, PartialOrd, Eq and Ord for your struct, which will
produce a lexicographic ordering based on the top-to-bottom declaration order of the struct's members:
#[derive(PartialEq, PartialOrd, Eq, Ord)]
struct Date {
    year: u32,
    month: u32,
    day: u32,
}
`
`
Or you can use sort_by_key and extract the relevant sorting key as a tuple (or any other Ord structure) e.g.
vec.sort_by_key(|d| (d.year, d.month, d.day))
sort_by is more flexible as it works fine with borrows, but when sorting on a series of integer values or references sort_by_key is great.
`
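the sort_by_key tuple trick from that last comment, as a runnable sketch (the sample dates are made up):

```rust
#[derive(Debug, PartialEq)]
struct Date {
    year: u32,
    month: u32,
    day: u32,
}

fn main() {
    let mut v = vec![
        Date { year: 2019, month: 2, day: 5 },
        Date { year: 2018, month: 12, day: 25 },
        Date { year: 2019, month: 1, day: 31 },
    ];
    // tuples compare lexicographically, so this sorts chronologically
    v.sort_by_key(|d| (d.year, d.month, d.day));
    assert_eq!(v[0], Date { year: 2018, month: 12, day: 25 });
    assert_eq!(v[2], Date { year: 2019, month: 2, day: 5 });
}
```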
starting off with chrono
====
2/6/19
trying out vs code
seems reasonable so far
had to turn off intellisense (autocomplete) in text files just now,
https://stackoverflow.com/questions/38832753/how-to-disable-intellisense-in-vs-code-for-markdown was really useful for that
USEFUL KEYBOARD SHORTCUTS I WILL NEED TO MEMORIZE
gd - go to definition
gh - "equivalent to hovering your mouse over wherever the cursor is. Handy for seeing types and error messages without reaching for the mouse!"
Cmd-shift-, - go one tab to the left
Cmd-shift-. - go one tab to the right
Cmd-t - open terminal pane
Cmd-shift-up - toggle maximized/regular pane size
less often:
Cmd-shift-X - show extensions
Cmd-shift-F - show search
Cmd-shift-e - show explorer
Cmd-shift-U - show output pane
Cmd-shift-m - show problems pane
Cmd-b - show/hide sidebar
full list at https://code.visualstudio.com/shortcuts/keyboard-shortcuts-macos.pdf
TODO consider learning about vim-easymotion, https://github.com/easymotion/vim-easymotion
vs code vim bindings plugin has support for it
DONE use dbg!() instead of println for inspecting data
DONE read the edition guide after the book: https://rust-lang-nursery.github.io/edition-guide/introduction.html
DONE read https://blog.burntsushi.net/rust-error-handling/
https://www.ragona.com/posts/learning_rust_2019 talks about Box, i'm sure i'll understand this more
once i've gotten to that part of the book
====
2/7/19
spent some time with flamer trying to improve the speed of 5, didn't make much progress, just gonna skip it for now
ok so i've been having a slight amount of trouble with chapter 15, let's write down my understanding of these things
* Box<T> is for storing data on the heap instead of the stack; one common reason you might want to do that
is when storing a piece of data whose size can't be known at compile time.
* Rc<T> is for storing data that can have multiple owners. it's used to make sure that the data is correctly
and automatically dropped once its owners have all gone away.
* RefCell<T> is for situations where you want to mutate data in ways/places that you're sure are safe,
but which the compiler can't tell are safe via static analysis. the compiler enforces ownership/borrowing rules
at compile time, which is good for a bunch of reasons that are obvious to me (catch errors earlier, less overhead, etc).
RefCell<T> enforces ownership/borrowing rules at runtime, which is bad because you catch errors later (or not at all,
a user might find them instead) and there's more overhead, but good because it lets you do things that
you wouldn't be able to do otherwise. the book talks a lot about the "interior mutability pattern" in relation to this.
you can combine Rc and RefCell to have multiple owners of mutable data. careful though because
this can lead to reference cycles.
to avoid this, you can use Rc::downgrade instead of Rc::clone
this gives you a Weak<T>
"the difference is the weak_count doesn't need to be 0 for the Rc<T> instance to be cleaned up."
so i think it's the case that if i see RefCell, i should think: "this field gets mutated
in ways that the compiler can't check the safety of statically."
the book puts it this way: "The RefCell<T> type with its interior mutability gives us a type
that we can use when we need an immutable type but need to change an inner value of that type;
it also enforces the borrowing rules at runtime instead of compile time."
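tiny self-contained sketch of the Rc + RefCell combo described above (toy data):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // two owners of one mutable value
    let shared = Rc::new(RefCell::new(vec![1, 2]));
    let other_owner = Rc::clone(&shared);

    // borrow_mut() is checked at runtime, not compile time
    other_owner.borrow_mut().push(3);

    assert_eq!(*shared.borrow(), vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&shared), 2);
}
```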
====
2/8/19
so i'm noticing something in implementing 6
i want to use the name "location" here, because it's short but clear
but i'm worried about polluting the global namespace with a vague name, and so
i'm calling it DangerLocation, and having functions with names like `initialize_danger_location_grid()`
instead of `initialize_grid()`, etc.
i think that this is a smell that means that i should be using modules!
DONE after 6b: consider splitting each solution out into its own module + file,
and having a "utils" module for now with shared functions like frequencies
yeah that's definitely the right thing to do
have a binary crate and a lib crate
DONE rename DangerLocation and LocationGrid and etc back to less-specific names once we have that six namespace
====
2/9/19
where's my snowpocalypse?
so the concurrency chapter is talking about channels and mutexes
it's not obvious to me which is the right approach for parallelizing 5b,
they seem pretty equivalent in this case, and channels seem to have a nicer api for what i'm imagining
the section on RefCell<T>/Rc<t> vs Mutex<T>/Arc<T> is interesting
"In the same way we used RefCell<T> to allow us to mutate contents inside an Rc<T>,
we use Mutex<T> to mutate contents inside an Arc<T>."
i think i'll start with channels for 5b, then google to see what rayon is, i've heard it's relevant
ok cool chapter 16 done, let's do this
ok initial naive channel-based implementation gets us from 7.6 seconds to 4.2!
still lots of room for improvement and i'm sure i did this in a clumsy way, but that's cool!
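the general shape of what i did (not the actual 5b code, just the fan-out/collect-over-a-channel pattern):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // fan the work out: one thread per chunk, each sends its result back
    for chunk in vec![vec![1, 2], vec![3, 4], vec![5, 6]] {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(chunk.iter().sum::<i32>()).unwrap();
        });
    }

    // drop the original sender so rx.iter() terminates once the
    // spawned threads are done
    drop(tx);

    let total: i32 = rx.iter().sum();
    assert_eq!(total, 21);
}
```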
next up i'm going to dig into rayon, starting by watching https://www.youtube.com/watch?v=gof_OEv71Aw
this rust belt youtube channel looks promising, i'm sure i'm going to spend all day watching rust videos now
around 6:20 - "and another problem is that some of the iterator combinators actually just don't make sense
in parallel at all, like fold as we'll see" - eep - "and so we have to do different combinators that work
a bit differently" - phew
more about fold around 14:00
with rayon, use .reduce() instead; it basically does several folds at once and then folds the results together
hm, this doesn't sound like a good fit for 5b
with rayon's .reduce(), the types of acc and x have to be the same, whereas with .fold() you can have acc be different
(i'd want acc to be of a different type, eg (string last_char) or something)
ok yeah just gonna watch vids all day and then come back to rayon once i've closed all these youtube tabs
NVM for 5b: maybe split the string up into two sets of pairs (so we don't miss should-react letters on the pair boundary), like
"aBbAcdeaFf" gets turned into
"aB", "bA", "cd", "ea", "Ff" and
"a", "Bb", "Ac", "de", "aF", f"
and also each pair is marked with its associated indices in the string
and then you check each pair to see if it should react, and record the indices if so
and then sort the list of marked indices
and if an index shows up twice, then you know that [index-1 index] is the right pair to react,
and [index index+1] should be ignored, and you can handle that somehow
https://www.youtube.com/watch?v=Fe6_LFGiqP0 - systems programming in rust
"systems programming is when you spend more time looking at man pages than at stack overflow" - nice
not much actual substance in this talk, this is all just stuff that's apparent if you read the book, oh well
https://www.youtube.com/watch?v=Wz2oFEDwiOk - documentation talk
this way of framing docs is useful:
* the meet cute (what is this, why should i care)
* the black triangle (hello world but with a focus on exercising the library's full pipeline)
* the walkthrough
* the reference
https://www.youtube.com/watch?v=C4jQbc1RJPY - nov 2018 core rust devs talk
interesting bit about "momentum" around the 16 minute mark
"most RFCs never survive the first comment", meaning that the first comment
sets the tone for the discussion that follows (particularly in open-source)
she suggests that a tactic you can use is to ping someone you know and ask them
to be the one to make the first comment, to ensure the discussion doesn't
immediately go off the rails
that all makes tons of sense!
ok taking a break from vids
so i snuck a peek at forrest's day5 solution (only that one tho!) and it looks like
his is really similar to mine but doesn't have any perf issues
i think that's because my `polymer_chars_react_one_way_check()` function happens
in a tight loop and calls .to_uppercase() a million times and that's probably slow
let's try having two arrays of letters a-z and A-Z
nvm, chars have methods like .to_ascii_uppercase() and they're way way faster, that
solved the problem! i tried an approach with explicit hand-coded math and it wasn't
faster than .to_ascii_uppercase() and friends
also rayon rules
====
2/10/19
https://www.youtube.com/watch?v=grU-4u0Okto - traits in rust
trait objects bit in the third segment seems to mainly be about Box?
i need to learn more about what trait objects are, this is a good intro tho
DONE: took another peek at forrest's day5 - instead of having his buffer be a string,
he does:
let mut new_polymer = Vec::<u8>::with_capacity(source_polymer.len());
i should try this!
also check out char.eq_ignore_ascii_case(other_char)
oh, i see - the thing forrest does makes sense.
instead of having a recursive series of react_one_step() calls,
he just realizes that the polymer being worked on is itself a stack
that's clever.
so the reason that that took so long was that i came up with the multi-pass solution
and immediately tunnel-visioned on it and didn't allow myself to think of other solutions.
instead of trying over and over to optimize the multi-pass solution, i should have
taken the opportunity to think: ok, putting that aside for a moment,
what are some other ways that i might solve this problem from scratch?
i'd like to think that i'd have come up with the single-pass solution if i'd done that,
but even if i didn't come up with it, looking for fresh solutions still would have been the correct thing to do.
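for posterity, my sketch of the single-pass stack idea (my own rewrite, not forrest's code - i'm using chars here where he used a Vec<u8>): push each unit onto the output; if it reacts with the top of the output, pop instead.

```rust
fn collapse(polymer: &str) -> String {
    let mut stack: Vec<char> = Vec::with_capacity(polymer.len());
    for c in polymer.chars() {
        match stack.last() {
            // same letter, opposite case -> they react, pop the top
            Some(&top) if top != c && top.eq_ignore_ascii_case(&c) => {
                stack.pop();
            }
            _ => stack.push(c),
        }
    }
    stack.into_iter().collect()
}

fn main() {
    // the day 5 example from the puzzle text
    assert_eq!(collapse("dabAcCaCBAcCcaDA"), "dabCBAcaDA");
}
```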
https://www.youtube.com/watch?v=Dbytx0ivH7Q - concurrency in rust
i'm familiar with everything in the first 30 mins, but then he talks about crossbeam and futures
tokio lets you do async i/o with futures
gives an implementation of an event loop
around 44:00 he talks about how instead of doing `struct Future`, `trait Future` makes a lot more sense
allows for different implementations of the same concept
there's a bit around 48:00 i didn't quite follow about Box and virtual dispatch and type erasure (?)
doodled around to see how to have a list of functions, got this:
let x: Vec<Box<Fn() -> ()>> = vec![
    Box::new(|| println!("1a: {}", one::one_a())),
    Box::new(|| println!("1b: {}", one::one_b())),
];
x.iter().for_each(|y| y());
might end up using that with rayon to execute all of our solutions at once once all 25 are done
ok i'm done with youtube videos!
http://smallcultfollowing.com/babysteps/blog/2015/12/18/rayon-data-parallelism-in-rust/ - old rayon article
"one of the main points of this post is to advocate for _potential parallelism_ as the basis for Rust
data parallelism libraries, in contrast to the _guaranteed concurrency_ that we have seen thus far."
post is pretty similar to the brief rayon talk the creator gave in 2017 i think
interesting note about how he has a sequential fallback for his quicksort algo
"I think the reason that the code does as well as it does is because it gets the “big things” right
– that is, Rayon avoids memory allocation and virtual dispatch"
"Previously I thought that “dangling pointers” in sequential programs and “data races” were
sort of distinct bugs: but now I see them as two heads of the same Hydra. Basically both are caused
by having rampant aliasing and mutation, and both can be solved by the ownership and borrowing."
https://blog.faraday.io/saved-by-the-compiler-parallelizing-a-loop-with-rust-and-rayon/
https://github.com/rayon-rs/rayon/blob/master/FAQ.md
has a good bit on usage guidelines for refcells
"using very small borrow sections like this is an anti-pattern: you ought to be enclosing the entire transaction together"
re: usage of .borrow()
also an interesting example about how when doing a multi-thread search, you want to keep a `lowest_cost` variable
around (or similar) and let all threads have access to it so that they can bail early if they're evaluating
a result that costs more than the cheapest one already found, etc.
https://docs.rs/rayon/*/rayon/iter/index.html
`Strings (&str) offer methods like par_split and par_lines.`
`Slices (&[T], &mut [T]) offer methods like par_split and par_windows, as well as various parallel sorting operations.`
`Various collections offer par_extend, which grows a collection given a parallel iterator.
(If you don't have a collection to extend, you can use collect() to create a new one from scratch.)`
par_chunks for slices looks useful
for the full list of parallel iterator methods, see
https://docs.rs/rayon/*/rayon/iter/trait.ParallelIterator.html
https://docs.rs/rayon/*/rayon/iter/trait.IndexedParallelIterator.html
====
2/11/19
neat, rust game engine https://github.com/amethyst/amethyst
parallel and uses entity component system
having a lottttt of trouble with graphs in 7a.
my Nodes look like this
#[derive(Clone, Debug)]
struct Node {
    step: char,
    children: RefCell<Vec<Rc<Node>>>,
}
and i'm trying to write a function that returns a Node if it's in the graph
or None otherwise
and this is what i have so far:
fn find_step_in_graph(root_node: &Rc<Node>, step: char) -> Option<Rc<Node>> {
    if root_node.step == step {
        return Some(Rc::clone(root_node));
    } else {
        for node in *root_node.children.borrow() {
            if let Some(ret) = find_step_in_graph(&node, step) {
                return Some(Rc::clone(&ret));
            }
        }
    }
    None
}
and i'm getting errors about how i can't move out of borrowed content, ie root_node.children.borrow()
which makes sense i guess but i don't really fully understand the ramifications and what i should do
just need to give it a little bit of time to let it simmer in my brain
also i found https://www.reddit.com/r/rust/comments/5ux4uq/rusts_poor_support_for_pointerbased_graphs/
and several other links to this "petgraph" thing - TODO look that up after i get a graph working on my own
things i'm confused about include: passing an Rc vs passing an &Rc
https://stackoverflow.com/questions/29401626/how-do-i-return-a-reference-to-something-inside-a-refcell-without-breaking-encap ?
ok i made a lot of progress thanks to smt - he says you never want to do an &Rc, instead you do a Rc::clone(thing)
that fixed everything
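the working version, roughly (my reconstruction): iterate over the RefCell's borrow with .iter() instead of trying to move the Vec out of it.

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[derive(Debug)]
struct Node {
    step: char,
    children: RefCell<Vec<Rc<Node>>>,
}

fn find_step_in_graph(root_node: &Rc<Node>, step: char) -> Option<Rc<Node>> {
    if root_node.step == step {
        return Some(Rc::clone(root_node));
    }
    // iterate over the borrow instead of moving out of it
    for node in root_node.children.borrow().iter() {
        if let Some(ret) = find_step_in_graph(node, step) {
            return Some(ret);
        }
    }
    None
}

fn main() {
    let leaf = Rc::new(Node { step: 'B', children: RefCell::new(vec![]) });
    let root = Rc::new(Node { step: 'A', children: RefCell::new(vec![Rc::clone(&leaf)]) });
    assert_eq!(find_step_in_graph(&root, 'B').map(|n| n.step), Some('B'));
    assert!(find_step_in_graph(&root, 'Z').is_none());
}
```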
except so it turns out that in the actual puzzle input, there isn't a single root node, there are instead four
and that breaks an assumption i made when writing this solution
so i need to return to the drawing board and figure out how to adapt this solution to handle four top-level nodes
(i did)
====
2/12/19
https://tratt.net/laurie/blog/entries/a_quick_look_at_trait_objects_in_rust.html
"static dispatch leads to faster performance" ->
"Static dispatch makes inlining trivial. In my opinion, inlining is the mother of all optimisations:
the more of it you can do, the faster your program will run. Note that inlining doesn’t have to be done statically
(though it’s easier that way), but doing it dynamically tends to cause noticeable warmup."
"the type parameter means that monomorphisation kicks in: a specialised version of f is generated for every
distinct type we pass to X, allowing Rust to make everything statically dispatched."
apparently this:
fn f(x: &T) {
    println!("{}", x.m())
}
gives you dynamic dispatch - i thought Box<T> was required, but &T apparently works too
"That such an implicit coercion is possible is, in my experience, very surprising to those new to Rust.
If it’s any consolation, even experienced Rust programmers can fail to spot these coercions:
nothing in f's signature tells you that such a coercion will happen, unless you happen
to know that T is a trait, and not a struct. To that end, recent versions of Rust let you
add a syntactic signpost to make it clear:"
__interesting!!!__
fn f(x: &dyn T) {
    println!("{}", x.m())
}
"The extra dyn keyword has no semantic effect, but you might feel it makes it a little more obvious
that a coercion to a trait object is about to happen. Unfortunately, because the use of that keyword
is currently a bit haphazard, one can never take its absence as a guarantee that dynamic dispatch won’t occur."
"while it's true that all references to an object of our trait type T are of the same size,
it's not necessarily true that references to objects of different types are the same size."
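putting the two dispatch styles side by side so it sticks (toy trait and struct, my names):

```rust
trait T {
    fn m(&self) -> u64;
}

struct S(u64);

impl T for S {
    fn m(&self) -> u64 {
        self.0
    }
}

// monomorphised: a specialised copy is generated per concrete type,
// so the call is statically dispatched (and trivially inlinable)
fn f_static<X: T>(x: &X) -> u64 {
    x.m()
}

// trait object: one copy of the function, call goes through the vtable
fn f_dyn(x: &dyn T) -> u64 {
    x.m()
}

fn main() {
    let s = S(7);
    assert_eq!(f_static(&s), 7);
    assert_eq!(f_dyn(&s), 7);
}
```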
====
2/13/19
(cont'd)
the article goes on to explain fat pointers and vtables
"that object’s vtable (which, itself, is a list of pointers to a struct’s dynamically dispatched functions)."
in the comment thread on reddit, steveklabnik says that there's an idiom lint you can turn on
to enforce adding &dyn notation for trait objects - another user says that it's #![warn(bare_trait_objects)]
so: i finished the book!
time to read a bunch of other stuff!
for starters, steve says that the macros appendix has been moved to be part of a chapter, so let's read that:
https://doc.rust-lang.org/book/ch19-06-macros.html
it refers to "the little book of macros",
https://danielkeep.github.io/tlborm/book/index.html
which i will not read for now
"There are some strange corners with macro_rules!. In the future, there will be a second kind of
declarative macro with the macro keyword that will work in a similar fashion but fix some of these edge cases.
After that is done, macro_rules! will be effectively deprecated. With this in mind, as well as the fact that
most Rust programmers will use macros more than write macros, we won’t discuss macro_rules! any further."
interesting!
procedural macros "must reside in their own crate with a special crate type.
This is for complex technical reasons that we hope to eliminate in the future."
main differences in proc macros since the book:
derive macros have a lot less boilerplate
and there are two other kinds of proc macros:
attribute-like macros, which let you decorate functions as well as structs/enums
the example they give is:
```
#[route(GET, "/")]
fn index() {
```
and function-like macros, which look a lot like an alternative version of declarative macros
followup piece: https://blog.rust-lang.org/2018/12/21/Procedural-Macros-in-Rust-2018.html
"macros are now integrated with the module system in Rust. This mainly means that
you no longer need the clunky #[macro_use] attribute when importing macros!"
hm, is that true?
it is!!!!
"The syn crate not only comes with the ability to parse built-in syntax but you can also
easily write a recursive descent parser for your own syntax.
The syn::parse module has more information about this capability."
interesting bit about spans + error messages
the article also mentions that the serde crate has a crazy amount of configuration;
someday i should spend more time reading through that crate's docs and trying to figure out applications for it.
apparently it's really useful!
https://rocket.rs/ web framework to look into later
ok steve also said that https://doc.rust-lang.org/book/ch07-00-packages-crates-and-modules.html was reworked
"A package has a Cargo.toml that describes how to build one or more crates.
A package can contain zero or one library crates and as many binary crates as you’d like.
There must be at least one crate (either a library or a binary) in a package."
"If a package contains both src/main.rs and src/lib.rs, then it has two crates:
a library and a binary, both with the same name. If we only had one of the two, the package
would have either a single library or binary crate. A package can have multiple binary crates by
placing files in the src/bin directory: each file will be a separate binary crate."
looks like you can specify an absolute path by using the `crate` keyword
at the start of your path, like `crate::foo::bar()` - i think that's new
"If you want to bring an item into scope with use and a relative path, there’s a small difference
from directly calling the item using a relative path: instead of starting from a name in the current scope,
you must start the path given to use with self."
"Starting relative paths with self when specified after use might not be neccesary in the future;
it’s an inconsistency in the language that people are working on eliminating."
"your authors tend to specify absolute paths starting with crate"
"For functions, it’s considered idiomatic to specify the function’s parent module with use, and then
specify the parent module when calling the function. Doing so rather than specifying the path to the
function with use, as Listing 7-16 does, makes it clear that the function isn’t locally defined,
while still minimizing repetition of the full path."
you can also rename your imports with the `as` keyword
nested path imports is neat, you can do
use std::{cmp::Ordering, io};
instead of
use std::cmp::Ordering;
use std::io;
also you can do
use std::io::{self, Write};
instead of
use std::io;
use std::io::Write;
hm, this new version of the chapter doesn't mention mod.rs!
https://github.com/rust-lang-nursery/edition-guide/issues/104
ok now let's read the edition guide
TODO read https://doc.rust-lang.org/std/
"In Rust 2018, it's considered idiomatic to use the dyn keyword for trait objects."
man there's a lotta stuff in the edition guide
`loop` can now break with a value! that's interesting
associated constants are interesting, i guess that's how std::i32::MAX works
you can match on slices, neat
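quick sketch of those two (break-with-value and slice patterns), toy numbers mine:

```rust
fn main() {
    // `loop` can break with a value
    let mut n = 0;
    let first_square_over_50 = loop {
        n += 1;
        if n * n > 50 {
            break n * n;
        }
    };
    assert_eq!(first_square_over_50, 64);

    // matching on slices/arrays, with `..` skipping the middle
    let v = [1, 2, 3];
    match v {
        [first, .., last] => assert_eq!((first, last), (1, 3)),
    }
}
```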
https://rust-lang-nursery.github.io/edition-guide/rust-2018/rustup-for-managing-rust-versions.html
talks about how you can specify version toolchain per-command or per-directory, neat!
====
2/14/19
https://hacks.mozilla.org/2018/03/making-webassembly-better-for-rust-for-all-languages/
seems like the main push (at least at the time?) is to have the ability to drop into rust
for the most perf-sensitive parts of your js app
https://rustwasm.github.io/2018/06/25/vision-for-rust-and-wasm.html
"Surgically inserting Rust compiled to WebAssembly should be the best choice for speeding
up the most performance-sensitive JavaScript code paths. Do not throw away your existing code base,
because Rust plays well with others. Regardless of whether you are a Rust or Web developer,
your natural workflow shouldn’t change because Rust compiled to wasm integrates seamlessly into your preferred tools."
"We can publish .wasm packages to npm, and you can depend on them in package.json
just like you normally would any other JavaScript library."
"If you are a Rust hacker and want to compile your crate to .wasm and share it on npm,
you shouldn’t have to change your workflow either. In fact, you shouldn’t even need to install npm,
Node.js, and a whole JavaScript development environment. wasm-pack will compile, optimize,
and generate JavaScript bindings for your crate. And then it will publish it to npm for you too!"
cool stuff!
looks like in the interim, several crates have been published that give you all the necessary
extern bindings so you can access eg window.requestAnimationFrame etc without having to define their
signature in rust, which is super rad (js-sys and web-sys)
https://rustwasm.github.io/2018/12/06/reflecting-on-rust-and-wasm-in-2018.html
note the bit where they have these four different application templates, looks useful
```
wasm-pack-template for creating NPM libraries with Rust and Wasm.
create-wasm-app for creating Web applications built on top of Rust-generated wasm NPM libraries.
rust-webpack-template for creating whole Web applications with Rust, WebAssembly, and the Webpack bundler.
rust-parcel-template for creating whole Web applications with Rust, WebAssembly, and the Parcel bundler.
```
https://doc.rust-lang.org/std/convert/trait.From.html
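`From` in a nutshell — implementing it gives you `Into` for free via a blanket impl, and it's also the machinery `?` uses to auto-convert error types (toy newtype example, not from the docs):

```rust
#[derive(Debug, PartialEq)]
struct Millimeters(u32);

struct Meters(u32);

// one impl of From<Meters> also gets us Meters::into::<Millimeters>()
impl From<Meters> for Millimeters {
    fn from(m: Meters) -> Millimeters {
        Millimeters(m.0 * 1000)
    }
}
```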
https://blog.burntsushi.net/rust-error-handling/
"the key to ergonomic error handling is reducing the amount of explicit
case analysis the programmer has to do while keeping code composable."
this is way too long.
https://speice.io/2019/02/understanding-allocations-in-rust.html
unrelated DONE: been thinking about what program i want to write post-aoc.
so far main contenders are an irc bot and a profiling library.
irc bot would be easy, profiling library would be hard.
evan's written some stuff about sampling profilers, not sure if it applies to rust,
i have the vague idea that since rust doesn't have reflection it's a lot harder to profile,
could be super wrong!
ok let's figure out how to use vs code debug mode on a rust program
it works ok but not great, you can't really see values in the sidebar
details: https://github.com/vadimcn/vscode-lldb/blob/master/MANUAL.md#rust-language-support
https://code.visualstudio.com/docs/editor/debugging
ok let's try day 8
ok so 8 is tricky
at first it seems like it'd be perfect for a recursive solution,
but you can't know the length of the vec of numbers to pass to the recursive call
without having processed the whole input first
so anyway this has to be done in one pass
right now i'm mulling over some sort of, like, stack of stacks
where each sub-stack represents the children of the rightmost entry in the parent stack
so like
2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2
becomes
[
Node {num_c: 2 num_m: 3}
[
Node {num_c 0 num_m 3}
// at this point we see we're not processing any children
// so we push the next 3 entries into the most recent node's metadata vec
// at this point we see that the current substack has 1 item in it,
// and the parent node has num_c 2, so we still have more children to process
Node {num_c: 1 num_m: 1}
// so basically if we instantiate a new node and its num_c is > 0,
// then it becomes the new parent node and we start a new substack
[
Node {num_c: 0 num_m: 1}
// we're not processing any children
// so we push the next 1 entry as metadata
// at this point we see that the current substack has 1 item in it,
// and the parent node has num_c 1,
// so we set the parent's children to the current substack
]
// at this point we see that the current node has 1 metadata to process, so we add that
// at this point we see that the current substack has 2 items in it,
// and the parent node (DONE: how do we get back to the parent node? separate stack of parent nodes?)
// has num_c 2, so we set the parent's children to the current substack
]
// and now we process the next 3 entries as metadata
]
so it's looking to me like actually we have two stacks:
one stack of parent nodes,
one stack of child vectors
that doesn't make sense when written down but let's look at the example done that way
input:
2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2
original state:
parent_stack []
children_vec_stack []
read 2 entries to make node:
A {num_c: 2, num_m 3}
set it as parent
num_c is > 0, so start a new children_vec
parent_stack []
children_vec_stack []
parent A {num_c: 2, num_m 3}
children_vec []
read 2 entries to make node:
B {num_c: 0, num_m: 3}
num_c is 0, so:
read 3 entries, put them in B's metadata vec
add B to children_vec
parent_stack []
children_vec_stack []
parent A {num_c: 2, num_m 3}
children_vec [B {num_c: 0, num_m: 3, metadata: [10, 11, 12]}]
read 2 entries to make node:
C {num_c: 1, num_m: 1}
num_c is > 0, so
push parent on parent_stack
set parent to C
push children_vec on children_vec stack
start a new children_vec
parent_stack [A {num_c: 2, num_m 3}]
children_vec_stack [[B {num_c: 0, num_m: 3, metadata: [10, 11, 12]}]]
parent C {num_c: 1, num_m: 1}
children_vec []
read 2 entries to make node:
D {num_c: 0, num_m: 1}
num_c is 0, so:
read 1 entry, put it in D's metadata vec
add D to children_vec
parent_stack [A {num_c: 2, num_m 3}]
children_vec_stack [[B {num_c: 0, num_m: 3, metadata: [10, 11, 12]}]]
parent C {num_c: 1, num_m: 1}
children_vec [D {num_c: 0, num_m: 1, metadata: [99]}]
children_vec has len 1 and parent has num_c 1, so we're done with parent's children
read parent.num_m entries and put them in parent.metadata
set parent.children to children_vec
set children_vec to children_vec_stack.pop()
add parent to children_vec
set parent to parent_stack.pop()
parent_stack []
children_vec_stack []
parent A {num_c: 2, num_m 3}
children_vec [
B {num_c: 0, num_m: 3, metadata: [10, 11, 12]}
C {num_c: 1, num_m: 1, children: [D {num_c: 0, num_m: 1, metadata: [99]}], metadata: [2]}
]
children_vec has len 2 and parent has num_c 2, so we're done with parent's children
read parent.num_m entries and put them in parent.metadata
set parent.children to children_vec
children_vec_stack is empty, so we're done! return parent
so i think that algorithm makes sense, and the two-stack approach makes sense!
DONE: how do we actually program that?
ok nvm i think we can use recursion after all
just with a mutable buffer input
ok yeah it was recursion after all lol
i'm just so used to having recursive functions be pure
the idea of having a recursive function that operates on a mutable buffer input
and only consumes some small number of items off of its front just never occurred to me!
totally worked first try tho once i banged the code out!
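a minimal sketch of the shape that worked (hypothetical names, not my actual solution file — each call consumes exactly its own header, children, and metadata off the front of the shared iterator, so no subtree lengths are needed up front):

```rust
#[derive(Debug)]
struct Node {
    children: Vec<Node>,
    metadata: Vec<u32>,
}

fn parse(input: &mut std::slice::Iter<'_, u32>) -> Node {
    // header: number of children, number of metadata entries
    let num_children = *input.next().unwrap();
    let num_metadata = *input.next().unwrap() as usize;
    // recurse: each child eats its own slice of the input
    let children = (0..num_children).map(|_| parse(input)).collect();
    // then our own metadata is whatever comes next
    let metadata = input.by_ref().take(num_metadata).copied().collect();
    Node { children, metadata }
}

// part a: sum every metadata entry in the tree
fn metadata_sum(node: &Node) -> u32 {
    node.metadata.iter().sum::<u32>()
        + node.children.iter().map(metadata_sum).sum::<u32>()
}
```

on the worked example above (`2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2`) the metadata sum comes out to 138.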
DONE profiling links at https://www.reddit.com/r/rust/comments/aqqr0z/what_are_best_resources_on_profiling_and/
===
2/15/19
starting 9
seems pretty straightforward?
ok so the thing about 9b is that my naive 9a implementation is about
a bajillion times too slow for 9b.
so there must be something smarter we can do here!
my initial reaction is that .insert() and .remove() are probably really slow
when the vector is like 8mb large!
what can we do instead?
have a bitmask of valid vector indices?
alternatively, keeping with the theme of solutions so far - do we have a buffer of
[0..max_marble] integers,
and we somehow consume it in one pass?
let's try thinking about a version of this game except where scoring happens every 5 marbles
and you move 2 to the left instead of 7 to the left.
plan so far: (this is several plans at once mixed together, DONE distill)
{
allocate a bitfield of max_marble bits, all set to 0, representing whether or not an index into the list is valid
this bitfield only gains valid indexes from left to right -