Description
Hey guys! I recently found this package and have been using its compression functions to compress and decompress raw vectors. I had been doing most of my work on a local machine, but when I moved my code to a Docker container running Linux I noticed a large slowdown in compression speed. That seemed surprising, since I moved from a 4-core Windows machine to a container with full access to 32 cores and faster, much more plentiful RAM. Since the issue involves different machines, I took screenshots of the two R environments running the same code.
I also included the lz4 package (https://github.com/bwlewis/lz4), which supposedly implements the same LZ4 compression algorithm, as an additional comparison; it does not slow down anywhere near as drastically.
Not sure if this is a bug or expected behavior, but I thought I would let you all know!
This is the code I ran on both machines:
library(fst)
library(lz4)            # https://github.com/bwlewis/lz4
library(microbenchmark)

set.seed(9917)

# 10,000 uniform values in a 10-column data frame, serialized to a raw vector
sampleObject <- data.frame(matrix(runif(10000), ncol = 10))
serializedObject <- serialize(sampleObject, NULL)

# Compare fst, lz4, and base R bzip2 compression on the same raw vector
microbenchmark(
  fstCompression  <- compress_fst(serializedObject),
  lz4Compression  <- lzCompress(serializedObject),
  baseCompression <- memCompress(serializedObject, "bzip2")
)

sessionInfo()
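In case it is relevant given the 4-core vs 32-core difference: a quick sketch of how the fst thread settings could be checked and pinned in both environments before re-running the benchmark (this assumes fst's `threads_fst()` function, which I believe both reports and sets the number of threads fst uses):

```r
library(fst)

# Report how many threads fst is currently using; a large difference
# between the two environments could affect the timings.
threads_fst()

# Pin fst to a single thread for an apples-to-apples comparison
# (assumption: passing nr_of_threads sets the thread count).
threads_fst(nr_of_threads = 1)
```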