docs/cloud/aws-lambda/index.md (+2 −2)
@@ -335,7 +335,7 @@ From the response payload we see the function was successful and calculated the
 ## Stream data from Amazon S3
 
-To demonstrate a q/kdb+ Lambda function processing multiple events, we detail how to stream data from AWS Simple Storage Service (S3). Using FIFO named pipes and [`.Q.fps`](../../ref/dotq.md#fps-streaming-algorithm) within q, data can be streamed in for processing. To illustrate this example, we create 100 files each containing 1 million Black-Scholes input parameters. The files are placed in a S3 bucket. This S3 bucket is the trigger for the Lambda function.
+To demonstrate a q/kdb+ Lambda function processing multiple events, we detail how to stream data from AWS Simple Storage Service (S3). Using FIFO named pipes and [`.Q.fps`](../../ref/dotq.md#fps-pipe-streaming) (pipe streaming) within q, data can be streamed in for processing. To illustrate this example, we create 100 files, each containing 1 million Black-Scholes input parameters. The files are placed in an S3 bucket, which is the trigger for the Lambda function.
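The FIFO plumbing this change refers to can be sketched from the shell. This is a sketch under assumptions: file and pipe names are illustrative, and a local file plus `wc -l` stand in for the S3 download (e.g. `aws s3 cp "s3://$BUCKET/$KEY" -`) and for the q process that would run `.Q.fps` on the pipe, neither of which is reproduced here.

```shell
#!/bin/sh
# Illustrative sketch: stream records through a named pipe so the
# consumer processes them without a second on-disk copy.
printf '10,0.2,0.05,100,105\n9,0.3,0.05,100,110\n' > params.csv  # sample input rows
mkfifo s3pipe                 # the named pipe the Lambda handler would create
cat params.csv > s3pipe &     # stand-in producer; really the S3 download to stdout
wc -l < s3pipe                # stand-in consumer; really q reading `:fifo://s3pipe
wait
rm params.csv s3pipe
```

The producer blocks until a reader opens the pipe, so the consumer must be started (or backgrounded) before the stream completes.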
docs/interfaces/q-server-for-odbc3.md (+2 −0)
@@ -70,6 +70,8 @@ Ensure you have `ps.k` loaded into the kdb+ process specified in your DSN:
 q)\l ps.k
 ```
 
+
+The kdb+ process should also be [listening on the port](../basics/listening-port.md) chosen and defined in the ODBC configuration.
docs/kb/loading-from-large-files.md (+6 −6)
@@ -13,7 +13,7 @@ The [Load CSV](../ref/file-text.md#load-csv) form of the File Text operator load
 If the data in the CSV file is too large to fit into memory, we need to break the large CSV file into manageable chunks and process them in sequence.
 
-Function [`.Q.fs`](../ref/dotq.md#fs-streaming-algorithm) and its variants help automate this process. `.Q.fs` loops over a file in conveniently-sized chunks of complete records, and applies a function to each chunk. This lets you implement a _streaming algorithm_ to convert a large CSV file into an on-disk database without holding all the data in memory at once.
+Function [`.Q.fs`](../ref/dotq.md#fs-file-streaming) (file streaming) and its variants help automate this process. `.Q.fs` loops over a file in conveniently-sized chunks of complete records, and applies a function to each chunk. This lets you implement a _streaming algorithm_ to convert a large CSV file into an on-disk database without holding all the data in memory at once.
 
 ## Using `.Q.fs`
@@ -96,11 +96,11 @@ date open high low close volume sym
 Variants of `.Q.fs` extend it to [named pipes](named-pipes.md) and control chunk size.
 
 :fontawesome-solid-book:
-[`.Q.fsn`](../ref/dotq.md#fsn-streaming-algorithm) for chunk size
+[`.Q.fsn`](../ref/dotq.md#fsn-file-streaming) for chunk size
docs/kb/named-pipes.md

 Since V3.4 it has been possible to read FIFOs/named pipes on Unix.

@@ -19,24 +18,69 @@ q)/ At most, n bytes will be read, perhaps fewer
 q)hclose h / Close the file to clean up
 ```
 
-[`.Q.fps`](../ref/dotq.md#fps-streaming-algorithm "streaming algorithm") is [`.Q.fs`](../ref/dotq.md#fs-streaming-algorithm "streaming algorithm") for pipes.
-(`.Q.fpn` corresponds to [`.Q.fsn`](../ref/dotq.md#fsn-streaming-algorithm "streaming algorithm").)
+A `` `:fifo://`` handle is also useful for reading certain non-seekable or zero-length (therefore, unsuitable for the regular `read1`) system files or devices, e.g.
+
+```q
+q)a:hopen`:fifo:///dev/urandom
+q)read1 (a;8)
+0x8f172b7ea00b85e6
+q)hclose a
+```
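The constraint the added q example works around is visible from the shell as well (a sketch, nothing kdb+-specific): `/dev/urandom` reports no size and cannot be read "whole", so a consumer must request an explicit byte count, just as `read1 (a;8)` does on the `` `:fifo://`` handle.

```shell
#!/bin/sh
# Read a fixed byte count from a zero-length, non-seekable device
# and show it as hex, analogous to q's read1 (a;8) above.
head -c 8 /dev/urandom | od -An -tx1
```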
+
+## Streaming
+
+[`.Q.fps`](../ref/dotq.md#fps-pipe-streaming) and [`.Q.fpn`](../ref/dotq.md#fpn-pipe-streaming) provide the ability to stream data from a FIFO/named pipe.
 
-The following example loads a CSV via FIFO, avoiding decompressing to disk:
+This can be useful for various applications, such as streaming data in from a compressed file without having to decompress the contents to disk.
+
+For example, using a CSV file (`t.csv`) with the contents
+
+```csv
+MSFT,12:01:10.000,A,O,300,55.60
+APPL,12:01:20.000,B,O,500,67.70
+IBM,12:01:20.100,A,O,100,61.11
+MSFT,12:01:10.100,A,O,300,55.60
+APPL,12:01:20.100,B,O,500,67.70
+IBM,12:01:20.200,A,O,100,61.11
+MSFT,12:01:10.200,A,O,300,55.60
+APPL,12:01:20.200,B,O,500,67.70
+IBM,12:01:20.200,A,O,100,61.11
+MSFT,12:01:10.300,A,O,300,55.60
+APPL,12:01:20.400,B,O,500,67.70
+IBM,12:01:20.500,A,O,100,61.11
+MSFT,12:01:10.500,A,O,300,55.60
+APPL,12:01:20.600,B,O,500,67.70
+IBM,12:01:20.600,A,O,100,61.11
+MSFT,12:01:10.700,A,O,300,55.60
+APPL,12:01:20.700,B,O,500,67.70
+IBM,12:01:20.800,A,O,100,61.11
+MSFT,12:01:10.900,A,O,300,55.60
+APPL,12:01:20.900,B,O,500,67.70
+IBM,12:01:20.990,A,O,100,61.11
+```
+
+If the file is compressed into a ZIP archive (`t.zip`), the system command `unzip` has the option to uncompress to stdout, which can be combined with a FIFO.
+The following loads the CSV file through a FIFO without the intermediary step of creating the unzipped file:
+
+Alternatively, if the file was compressed using gzip (`t.gz`), the system command `gunzip` can be used:
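The command sequences themselves are not preserved in this diff view, but the decompress-to-FIFO step they describe would look roughly like the following sketch. Assumptions: gzip is used (being more widely installed than Info-ZIP), pipe and file names are illustrative, and `wc -l` stands in for the q process that would run `.Q.fps` over `` `:fifo://tpipe``.

```shell
#!/bin/sh
# Sketch: stream a compressed CSV through a FIFO with no intermediate
# uncompressed file written to disk.
printf 'MSFT,12:01:10.000,A,O,300,55.60\nAPPL,12:01:20.000,B,O,500,67.70\n' > t.csv
gzip -c t.csv > t.gz        # compressed copy, as in the text
mkfifo tpipe                # the named pipe
gunzip -c t.gz > tpipe &    # decompress to stdout, straight into the pipe
wc -l < tpipe               # stand-in consumer; really q reading the pipe
wait
rm t.csv t.gz tpipe
# For a ZIP archive the producer line would be: unzip -p t.zip > tpipe &
```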