Commit 88aa095

Add back default support for parquet fix #315 (#444)
* Add back default support for parquet ref #315
* Bump ver.
1 parent 894bd2e commit 88aa095

12 files changed: +18 −14 lines

DESCRIPTION (+3 −2)

@@ -1,7 +1,7 @@
 Package: rio
 Type: Package
 Title: A Swiss-Army Knife for Data I/O
-Version: 1.1.1
+Version: 1.2.0
 Authors@R: c(person("Jason", "Becker", role = "aut", email = "[email protected]"),
     person("Chung-hong", "Chan", role = c("aut", "cre"), email = "[email protected]",
     comment = c(ORCID = "0000-0002-6232-7530")),
@@ -53,7 +53,8 @@ Imports:
     writexl,
     lifecycle,
     R.utils,
-    readr
+    readr,
+    nanoparquet
 Suggests:
     datasets,
     bit64,

NEWS.md (+3 −1)

@@ -1,6 +1,8 @@
-# rio 1.1.1.999 (development)
+# rio 1.2.0

 * Fix lintr issues #434 (h/t @bisaloo Hugo Gruson)
+* Drop support for R < 4.0.0, see #436
+* Add support for parquet in the import tier using `nanoparquet`, see rio 1.0.1 below.

 Bug fixes
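For context, a minimal usage sketch (not part of the commit): with nanoparquet promoted to Imports, a parquet round-trip through rio's documented import()/export() interface should work without installing arrow.

    library(rio)

    # Write and re-read a parquet file; extension-based dispatch picks the
    # .export.rio_parquet()/.import.rio_parquet() methods changed in this commit.
    export(mtcars, "mtcars.parquet")
    dat <- import("mtcars.parquet")
    str(dat)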

R/export.R (+1 −1)

@@ -32,7 +32,7 @@
 #' \item Weka Attribute-Relation File Format (.arff), using [foreign::write.arff()]
 #' \item Fixed-width format data (.fwf), using [utils::write.table()] with `row.names = FALSE`, `quote = FALSE`, and `col.names = FALSE`
 #' \item [CSVY](https://github.com/csvy) (CSV with a YAML metadata header) using [data.table::fwrite()].
-#' \item Apache Arrow Parquet (.parquet), using [arrow::write_parquet()]
+#' \item Apache Arrow Parquet (.parquet), using [nanoparquet::write_parquet()]
 #' \item Feather R/Python interchange format (.feather), using [arrow::write_feather()]
 #' \item Fast storage (.fst), using [fst::write.fst()]
 #' \item JSON (.json), using [jsonlite::toJSON()]. In this case, `x` can be a variety of R objects, based on class mapping conventions in this paper: [https://arxiv.org/abs/1403.2805](https://arxiv.org/abs/1403.2805).

R/export_methods.R (+1 −1)

@@ -282,7 +282,7 @@ export_delim <- function(file, x, fwrite = lifecycle::deprecated(), sep = "\t",

 #' @export
 .export.rio_parquet <- function(file, x, ...) {
-    .docall(arrow::write_parquet, ..., args = list(x = x, sink = file))
+    .docall(nanoparquet::write_parquet, ..., args = list(x = x, file = file))
 }

 #' @export
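.docall() is rio's internal helper that merges the fixed args list with whatever the caller passes through `...`; assuming nothing extra is passed, the new export method reduces to roughly the direct call below. Note that nanoparquet takes the output path as `file`, whereas arrow::write_parquet used `sink` (sketch only).

    # Roughly equivalent direct call (illustrative only):
    nanoparquet::write_parquet(x = mtcars, file = "mtcars.parquet")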

R/import.R (+1 −1)

@@ -42,7 +42,7 @@
 #' \item Fortran data (no recognized extension), using [utils::read.fortran()]
 #' \item Fixed-width format data (.fwf), using a faster version of [utils::read.fwf()] that requires a `widths` argument and by default in rio has `stringsAsFactors = FALSE`
 #' \item [CSVY](https://github.com/csvy) (CSV with a YAML metadata header) using [data.table::fread()].
-#' \item Apache Arrow Parquet (.parquet), using [arrow::read_parquet()]
+#' \item Apache Arrow Parquet (.parquet), using [nanoparquet::read_parquet()]
 #' \item Feather R/Python interchange format (.feather), using [arrow::read_feather()]
 #' \item Fast storage (.fst), using [fst::read.fst()]
 #' \item JSON (.json), using [jsonlite::fromJSON()]

R/import_methods.R (+2 −2)

@@ -413,8 +413,8 @@ extract_html_row <- function(x, empty_value) {

 #' @export
 .import.rio_parquet <- function(file, which = 1, ...) {
-    .check_pkg_availability("arrow")
-    .docall(arrow::read_parquet, ..., args = list(file = file, as_data_frame = TRUE))
+    #.check_pkg_availability("arrow")
+    .docall(nanoparquet::read_parquet, ..., args = list(file = file, options = nanoparquet::parquet_options(class = "data.frame")))
 }

 #' @export
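Likewise, with an empty `...` the new import method boils down to roughly the call below; passing parquet_options(class = "data.frame") asks nanoparquet to return a plain data.frame rather than its default "tbl"-classed data frame, matching rio's previous arrow-based behaviour (sketch only).

    # Roughly equivalent direct call (illustrative only):
    dat <- nanoparquet::read_parquet(
      file = "mtcars.parquet",
      options = nanoparquet::parquet_options(class = "data.frame")
    )
    class(dat)  # expected: "data.frame"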

R/sysdata.rda (7 Bytes)

Binary file not shown.

README.md (+1 −1)

@@ -133,6 +133,7 @@ The full list of supported formats is below:
 | Gzip | gz / gzip | base | base | Default | |
 | Zip files | zip | utils | utils | Default | |
 | Ambiguous file format | dat | data.table | | Default | Attempt as delimited text data |
+| Apache Arrow (Parquet) | parquet | nanoparquet | nanoparquet | Default | |
 | CSVY (CSV + YAML metadata header) | csvy | data.table | data.table | Default | |
 | Comma-separated data | csv | data.table | data.table | Default | |
 | Comma-separated data (European) | csv2 | data.table | data.table | Default | |
@@ -159,7 +160,6 @@ The full list of supported formats is below:
 | Text Representations of R Objects | dump | base | base | Default | |
 | Weka Attribute-Relation File Format | arff / weka | foreign | foreign | Default | |
 | XBASE database files | dbf | foreign | foreign | Default | |
-| Apache Arrow (Parquet) | parquet | arrow | arrow | Suggest | |
 | Clipboard | clipboard | clipr | clipr | Suggest | default is tsv |
 | EViews | eviews / wf1 | hexView | | Suggest | |
 | Fast Storage | fst | fst | fst | Suggest | |

data-raw/single.json (+3 −3)

@@ -2,10 +2,10 @@
 {
     "input": "parquet",
     "format": "parquet",
-    "type": "suggest",
+    "type": "import",
     "format_name": "Apache Arrow (Parquet)",
-    "import_function": "arrow::read_parquet",
-    "export_function": "arrow::write_parquet",
+    "import_function": "nanoparquet::read_parquet",
+    "export_function": "nanoparquet::write_parquet",
     "note": ""
 },
 {

man/export.Rd (+1 −1)

Generated file; diff not rendered.

man/import.Rd (+1 −1)

Generated file; diff not rendered.

man/rio.Rd (+1)

Generated file; diff not rendered.
