new Option {Key = nameof(context.CsvComment), Setter = value => context.CsvComment = value, Getter = () => context.CsvComment, Description = "CSV comment lines begin with this prefix"},
new Option {Key = nameof(context.CsvSkipRows), Setter = value => context.CsvSkipRows = int.Parse(value), Getter = () => context.CsvSkipRows.ToString(), Description = "Number of CSV rows to skip before parsing"},
new Option {Key = nameof(context.CsvHasHeaderRow), Setter = value => context.CsvHasHeaderRow = bool.Parse(value), Getter = () => string.Empty, Description = "Does the CSV have a header row naming the columns. [default: true if any columns are referenced by name]"},
new Option {Key = nameof(context.CsvHeaderStartsWith), Setter = value => context.CsvHeaderStartsWith = value, Getter = () => context.CsvHeaderStartsWith, Description = "A comma-separated list of the first expected header column names"},
new Option {Key = nameof(context.CsvIgnoreInvalidRows), Setter = value => context.CsvIgnoreInvalidRows = bool.Parse(value), Getter = () => context.CsvIgnoreInvalidRows.ToString(), Description = "Ignore CSV rows that can't be parsed"},
new Option {Key = nameof(context.CsvRealign), Setter = value => context.CsvRealign = bool.Parse(value), Getter = () => context.CsvRealign.ToString(), Description = $"Realign imported CSV points to the /{nameof(context.StartTime)} value"},
new Option {Key = nameof(context.CsvRemoveDuplicatePoints), Setter = value => context.CsvRemoveDuplicatePoints = bool.Parse(value), Getter = () => context.CsvRemoveDuplicatePoints.ToString(), Description = "Remove duplicate points in the CSV before appending."},
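As a side note, the `Key`/`Setter`/`Getter` pattern above hints at how these options get applied at runtime. The sketch below is a hypothetical illustration only (the `Option` class here is a stand-in mirroring the initializers above, and `Apply` is not PointZilla's actual argument parser): it matches a `-option=value` or `/option=value` argument to an option by case-insensitive key and invokes its setter.

```csharp
// Minimal sketch (not PointZilla's actual parser): apply "-Key=value" or "/Key=value"
// arguments to a list of options by case-insensitive key lookup.
using System;
using System.Collections.Generic;
using System.Linq;

public class Option
{
    public string Key { get; set; }
    public Action<string> Setter { get; set; }
    public Func<string> Getter { get; set; }
    public string Description { get; set; }
}

public static class OptionParserSketch
{
    public static void Apply(IEnumerable<Option> options, IEnumerable<string> args)
    {
        foreach (var arg in args)
        {
            // Accept either "-Key=value" or "/Key=value"
            if (!(arg.StartsWith("-") || arg.StartsWith("/"))) continue;

            var parts = arg.Substring(1).Split(new[] {'='}, 2);
            if (parts.Length != 2) continue;

            var option = options.FirstOrDefault(o =>
                o.Key.Equals(parts[0], StringComparison.InvariantCultureIgnoreCase));

            if (option != null)
                option.Setter(parts[1]); // e.g. "-CsvSkipRows=5" calls the CsvSkipRows setter
        }
    }
}
```

This is also why the help output below lists every setting in both its `-option=value` and `/option=value` forms.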
TimeSeries/PublicApis/SdkExamples/PointZilla/Readme.md (+38 -1)
Points can be specified from:
- Signal generators: linear, saw-tooth, square-wave, or sine-wave signals. Useful for just getting *something* into a time-series
- CSV files (including CSV exports from AQTS Springboard)
- Points retrieved live from other AQTS systems, including from legacy 3.X systems.
- The results of a database query (via direct support for SqlServer, Postgres, and MySql. ODBC connections are supported too, but require configuration)
- `CMD.EXE`, `PowerShell` or `bash`: `PointZilla` works well from within any shell.
Basic time-series will append time/value pairs. Reflected time-series also support setting grade codes and/or qualifiers to each point.
### Use column names or 1-based column indexes to reference a column from your CSV
You can reference a column either by a header name (eg. `-CsvDateTimeField="ISO 8601 UTC"`) or by a 1-based column index (eg. `-CsvDateTimeField=1`). When at least one field has a column name, the `-CsvHasHeaderRow=true` option is assumed.
Referencing columns by name has some nice benefits:
- Columns can appear in any order in the header line.
- Column name matching is case-insensitive.
Referencing columns by name is usually more robust, but you may not have control over the format of the CSV file being consumed.
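As a rough sketch of the name-or-index rule described above (a hypothetical helper, not PointZilla's internals), a field reference such as `"ISO 8601 UTC"` or `"1"` could be resolved against a header row like this:

```csharp
// Hypothetical sketch: resolve a -Csv*Field value to a 0-based column index.
// A value that parses as an integer is treated as a 1-based index;
// anything else is matched case-insensitively against the header names.
using System;

public static class ColumnResolverSketch
{
    public static int Resolve(string fieldReference, string[] headerColumns)
    {
        if (int.TryParse(fieldReference, out var oneBasedIndex))
            return oneBasedIndex - 1;

        for (var i = 0; i < headerColumns.Length; ++i)
        {
            if (headerColumns[i].Trim().Equals(fieldReference.Trim(),
                StringComparison.InvariantCultureIgnoreCase))
                return i;
        }

        throw new ArgumentException($"'{fieldReference}' is not a known column.");
    }
}
```

With a header of `Date, Time, Value`, `Resolve("value", header)` and `Resolve("3", header)` would both return the same column, matching the case-insensitive, order-independent behaviour described above.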
### When your data isn't at the start of your CSV
Some data files have extra rows at the start. PointZilla has a few options to help locate the start of the data to extract:
The `/CsvComment={prefix}` option tells the CSV parser to skip over any lines that begin with the given prefix.
The `/CsvSkipRows={integer}` option tells the CSV parser to skip over the specified number of lines before parsing the data. Lines matching the `/CsvComment=` test are not counted as skipped rows.
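Taken together, the two options could be modelled by a sketch like the one below (illustrative only, with assumed names; not PointZilla's actual CSV reader): comment-prefixed lines are dropped first and never counted, then the requested number of remaining rows is skipped.

```csharp
// Illustrative sketch: drop comment-prefixed lines first, then skip a fixed
// number of the remaining rows, before handing the rest to the CSV parser.
using System.Collections.Generic;
using System.Linq;

public static class RowSkipperSketch
{
    public static IEnumerable<string> SkipToData(IEnumerable<string> lines, string csvComment, int csvSkipRows)
    {
        var nonCommentLines = string.IsNullOrEmpty(csvComment)
            ? lines
            : lines.Where(line => !line.StartsWith(csvComment)); // /CsvComment= lines don't count as skipped rows

        return nonCommentLines.Skip(csvSkipRows); // then /CsvSkipRows= rows are dropped
    }
}
```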
The `/CsvHeaderStartsWith={hint1, hint2, ..., hintN}` option provides the CSV parser with a header-row detection hint, a comma-separated list of expected column names:
- Each hint is trimmed of leading/trailing whitespace.
- Column name matching is case-insensitive.
- If none of the expected column hints are empty, then the match is performed against non-empty fields from the header row. This is usually what you want.
- If any of the expected column hints are empty, then the match is performed column-by-column and blank hints must match blank columns in the header row.
So `/CsvHeaderStartsWith="Date, Time, Value, Grade"` and `/CsvHeaderStartsWith=Date,Time,Value,Grade` will both match:
```csv
Date, Time, Value, Grade, Status, Note
2021-Oct-12, 12:56, 4.5, Good, Normal, Things are fine
```
And will also match this CSV with 3 blank columns between the `Date` and `Time` columns:
```csv
Date,,,,Time,Value,Grade,Status,Note
2021-Oct-12,,,,12:56,4.5,Good,Normal,Things are fine
```
Adding empty hint columns like `/CsvHeaderStartsWith="Date, , , , Time, Value, Grade"` or `/CsvHeaderStartsWith=Date,,,,Time,Value,Grade` will only match the second CSV.
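Those matching rules fit in a short sketch (hypothetical, not the shipped implementation): trim each hint, compare case-insensitively, and only discard blank header columns when every hint is non-blank.

```csharp
// Hypothetical sketch of the /CsvHeaderStartsWith= matching rules described above.
using System;
using System.Linq;

public static class HeaderMatcherSketch
{
    public static bool HeaderStartsWith(string[] headerRow, string[] hints)
    {
        var trimmedHints = hints.Select(h => h.Trim()).ToArray();
        var anyBlankHints = trimmedHints.Any(string.IsNullOrEmpty);

        // When no hint is blank, blank header columns are ignored before matching.
        var candidateColumns = anyBlankHints
            ? headerRow.Select(c => c.Trim()).ToArray()
            : headerRow.Select(c => c.Trim()).Where(c => c.Length > 0).ToArray();

        if (candidateColumns.Length < trimmedHints.Length)
            return false;

        return trimmedHints
            .Zip(candidateColumns, (hint, column) =>
                hint.Equals(column, StringComparison.InvariantCultureIgnoreCase))
            .All(matches => matches);
    }
}
```

Running it against the two sample header rows above reproduces the behaviour described: hints with no blanks match both CSVs, while hints containing blanks only match the second.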
### Reading timestamps from CSV files
Timestamps can be extracted in a few ways:
Supported -option=value settings (/option=value works too):

    -CsvComment                CSV comment lines begin with this prefix
    -CsvSkipRows               Number of CSV rows to skip before parsing [default: 0]
    -CsvHasHeaderRow           Does the CSV have a header row naming the columns. [default: true if any columns are referenced by name]
    -CsvHeaderStartsWith       A comma-separated list of the first expected header column names
    -CsvIgnoreInvalidRows      Ignore CSV rows that can't be parsed [default: False]
    -CsvRealign                Realign imported CSV points to the /StartTime value [default: False]
    -CsvRemoveDuplicatePoints  Remove duplicate points in the CSV before appending. [default: True]