For the options below, only one response setting may be configured.

* "broker.keylocation": File Path to key file to be used to authenticate to Kafka Topic (Optional)
* "broker.calocation": File Path to CA Certificate bundle to be used to authenticate to Kafka Topic (Optional)
* "broker.topic": Full topic name of the Kafka Topic to connect to (Optional)
* "s3.accesskey": Access Key of the bucket to send redundancy files to (Optional)
* "s3.secretkey": Secret Key of the bucket that you want send redundancy files to (Optional)
* "s3.bucketName": Name of the bucket to send redundancy files to (Optional)
* "s3.region": Region that the bucket to send redundancy files resides in (Optional)
* "s3.endpoint": Endpoint of the bucket to send redundancy files to (Optional)
#### manager
* "coordinator.addr": network address of the coordinator (defaults to strelka_coordinator_1:6379)

The Kafka Producer that is created with the above command line options is fully configurable, and placeholder fields have already been added to the frontend.yaml configuration file. This file will need to be updated to point to an existing Kafka Topic. In cases where some fields are not used (e.g. when security has not been enabled on the desired Kafka Topic), the unused fields in the broker configuration section of the frontend.yaml file may simply be replaced with an empty string.
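
As a sketch of the empty-string convention, using only the broker fields documented above (a real frontend.yaml may carry additional broker fields, and the flat key layout here is an assumption):

```yaml
# Unused security fields set to empty strings, per the note above.
broker.keylocation: ""
broker.calocation: ""
broker.topic: "strelka-file-events"   # illustrative topic name
```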
#### Optional: S3 Redundancy
S3 redundancy depends on a Kafka producer being created and on a boolean in the Kafka config being set to true; it can be toggled on to account for any issues with the Kafka connection. S3, in this case, refers to either an AWS S3 bucket or a Ceph open-source object storage bucket.
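
The documentation does not name the boolean, so the key in this sketch is purely hypothetical: a placeholder showing where such a toggle might sit in frontend.yaml:

```yaml
# Hypothetical key name; the docs only say "a boolean in the Kafka config"
# must be set to true. Check the shipped frontend.yaml for the real field.
kafka.s3redundancy: true   # defaults to false per the docs
```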
Currently, if the option for S3 redundancy is toggled on and the Kafka connection described in the Kafka logging section of this document is interrupted, then, after the local log file is updated, the contents of that log file are uploaded to the configurable S3 location. By default, logs are kept for three hours after the start of the interruption of the Kafka connection, and logs in S3 are rotated on the hour to maintain relevancy in the remote bucket location.

Once the connection to the original Kafka broker is re-established, the stored logs are sent to the Kafka broker in parallel with new logs. If a restart of the Frontend is required to reset the connection, the stored logs will be sent to the Kafka broker (if they are not stale) at the next startup.

This option is set to false by default.
## Scanners
Each scanner parses files of a specific flavor and performs data collection and/or file extraction on them. Scanners are typically named after the type of file they are intended to scan (e.g. "ScanHtml", "ScanPe", "ScanRar") but may also be named after the type of function or tool they use to perform their tasks (e.g. "ScanExiftool", "ScanHeader", "ScanOcr").