DLP Inspect initial commit #48

Open
wants to merge 9 commits into base: file_tracker
Changes from 2 commits
@@ -47,17 +47,22 @@ public static PipelineResult run(S3ReaderOptions options) {
Pipeline p = Pipeline.create(options);

PCollection<KV<String, String>> nonInspectedContents =
Collaborator:

Can we please return a tuple? Anywhere you have a try/catch, you should use multi-output with an error tag. I think there are quite a few places where you will need this change. Without multi-output, the pipeline will fail to recover.

Author:

Okay, sure. I will change this code block to return a tuple and will take care of such try/catch blocks.
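The multi-output the reviewer is asking for is Beam's TupleTag pattern. A minimal sketch, with illustrative tag names and a placeholder DoFn body (assumptions for illustration, not code from this PR):

```java
// Route successes and caught exceptions to separate output tags so a
// single bad element cannot crash the pipeline.
final TupleTag<KV<String, String>> successTag = new TupleTag<KV<String, String>>() {};
final TupleTag<String> errorTag = new TupleTag<String>() {};

PCollectionTuple results =
    input.apply(
        "ReadWithErrorTag",
        ParDo.of(
                new DoFn<KV<String, ReadableFile>, KV<String, String>>() {
                  @ProcessElement
                  public void processElement(ProcessContext c) {
                    try {
                      // ... read the file and emit its contents ...
                      c.output(KV.of(c.element().getKey(), "file contents"));
                    } catch (Exception e) {
                      // Emit the failure instead of throwing.
                      c.output(errorTag, e.toString());
                    }
                  }
                })
            .withOutputTags(successTag, TupleTagList.of(errorTag)));

PCollection<KV<String, String>> ok = results.get(successTag);
PCollection<String> failed = results.get(errorTag);
```

Downstream transforms would then consume `results.get(successTag)`, in the same way the new diff below calls `nonInspectedContents.get(Util.readRowSuccess)`.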

p.apply(
"File Read Transforrm",
FileReaderTransform.newBuilder().setSubscriber(options.getSubscriber()).build());
p.apply(
"File Read Transform",
FileReaderTransform.newBuilder()
.setSubscriber(options.getSubscriber())
.setBatchSize(options.getBatchSize())
.build());

PCollectionTuple inspectedData =
nonInspectedContents.apply(
"DLPScanner",
DLPTransform.newBuilder()
.setInspectTemplateName(options.getInspectTemplateName())
.setProjectId(options.getProject())
.build());
nonInspectedContents
.get(Util.readRowSuccess)
.apply(
"DLPScanner",
DLPTransform.newBuilder()
.setInspectTemplateName(options.getInspectTemplateName())
.setProjectId(options.getProject())
.build());

PCollection<Row> inspectedContents =
inspectedData.get(Util.inspectData).setRowSchema(Util.bqDataSchema);
@@ -59,7 +59,7 @@ public Row apply(Row input) {
input.getRow("value").getInt64("total_bytes_inspected").longValue(),
Util.INSPECTED)
.build();
LOG.info("Audit Row {}", aggrRow.toString());
LOG.info("FileTrackerTransform:MergePartialStatsRow: Audit Row {}", aggrRow.toString());
Collaborator:

I think this is AuditInspectDataTransform.

Author:

Yes, the class name is AuditInspectDataTransform. But for logging I have kept the transform names that are displayed on the UI DAG. So here the name of this transform is FileTrackerTransform and the sub-step is MergePartialStatsRow. I can take out MergePartialStatsRow if it doesn't seem right in a log message.

Collaborator:

OK, makes sense.

return aggrRow;
}
}
@@ -99,7 +99,7 @@ public void processElement(ProcessContext c) throws IOException {
if (this.requestBuilder.build().getSerializedSize() > DLP_PAYLOAD_LIMIT) {
String errorMessage =
String.format(
"Payload Size %s Exceeded Batch Size %s",
"DLPTransform:DLPInspect: Payload Size %s Exceeded Batch Size %s",
Collaborator:

Optional: change the LOG.error to LOG.warn.

Author:

Yes, got it. I will run this by the team and decide accordingly.

this.requestBuilder.build().getSerializedSize(), DLP_PAYLOAD_LIMIT);
LOG.error(errorMessage);
} else {
@@ -109,7 +109,7 @@ public void processElement(ProcessContext c) throws IOException {
long bytesInspected = contentItem.getSerializedSize();
int totalFinding =
Long.valueOf(response.getResult().getFindingsList().stream().count()).intValue();
LOG.debug("bytes inspected {}", bytesInspected);
LOG.debug("DLPTransform:DLPInspect: Bytes inspected {}", bytesInspected);
boolean hasErrors = response.findInitializationErrors().stream().count() > 0;
if (response.hasResult() && !hasErrors) {
response
@@ -128,7 +128,7 @@ public void processElement(ProcessContext c) throws IOException {
finding.getLocation().getCodepointRange().getStart(),
finding.getLocation().getCodepointRange().getEnd())
.build();
LOG.debug("Row {}", row);
LOG.debug("DLPTransform:DLPInspect: Row {}", row);

c.output(Util.inspectData, row);
});
@@ -148,7 +148,9 @@ public void processElement(ProcessContext c) throws IOException {
Row.withSchema(Util.errorSchema)
.addValues(fileName, timeStamp, error.toString())
.build());
LOG.info("DLPTransform:DLPInspect: Initialization error in DLP response - {}",error);
});
//Need to change 0 to 0L
c.output(
Util.auditData,
Row.withSchema(Util.bqAuditSchema)
@@ -157,6 +159,14 @@
}
}
}
else{
LOG.info("DLPTransform:DLPInspect: "+fileName+" is an empty file | Size of the file in bytes - "+c.element().getValue().length());
c.output(
Util.auditData,
Row.withSchema(Util.bqAuditSchema)
.addValues(fileName, Util.getTimeStamp(),0L, "EMPTY")
.build());
}
Collaborator:

For processElement, I think the error is thrown and caught in the class after DLP is called. Can we have a tuple output like this? This should make sure an internal error does not crash the pipeline.

c.output(apiResponseFailedElements, e.toString());

Author:

Yes, I got it. But just for clarification: the try block is already handling the exception, right? We are just adding the catch block to be sure.

Also, this class is already returning a tuple. I will add an additional TupleTag for the errors. Is there a need to log these separately in our main DLPS3ScannerPipeline.java file, or can we leave it?

Collaborator:

You can leave it. Ideally you will flatten all the errors and write them back somewhere. processElement throws the error, and the catch block is there to output it without crashing the pipeline.

}
}
}
@@ -16,9 +16,11 @@
package com.google.swarm.tokenization.common;

import com.google.protobuf.ByteString;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.util.Arrays;
import org.apache.beam.sdk.io.FileIO.ReadableFile;
import org.apache.beam.sdk.io.range.OffsetRange;
import org.apache.beam.sdk.transforms.DoFn;
@@ -31,7 +33,11 @@
public class FileReaderSplitDoFn extends DoFn<KV<String, ReadableFile>, KV<String, String>> {
public static final Logger LOG = LoggerFactory.getLogger(FileReaderSplitDoFn.class);
public static Integer SPLIT_SIZE = 900000;
private static Integer BATCH_SIZE = 520000;
public Integer BATCH_SIZE;

public FileReaderSplitDoFn(Integer batchSize) {
this.BATCH_SIZE = batchSize;
}

@ProcessElement
public void processElement(ProcessContext c, RestrictionTracker<OffsetRange, Long> tracker)
@@ -50,19 +56,19 @@ public void processElement(ProcessContext c, RestrictionTracker<OffsetRange, Lon
buffer = ByteString.copyFrom(readBuffer);
readBuffer.clear();
LOG.debug(
"Current Restriction {}, Content Size{}", tracker.currentRestriction(), buffer.size());
"File Read Transform:ReadFile: Current Restriction {}, Content Size{}", tracker.currentRestriction(), buffer.size());
c.output(KV.of(fileName, buffer.toStringUtf8().trim()));
}
} catch (Exception e) {

LOG.error(e.getMessage());
LOG.error("File Read Transform:ReadFile: Error processing the file "+ fileName +" - " + Arrays.toString(e.getStackTrace()));
Collaborator:

This is where the multi-output should be added; see my first comment. So this class should output a tuple.

Author (@YadnikiPawar, May 22, 2020):

So should we keep the LOG as a warning here, or take out the log and keep only the output statement? Or log these separately in our main DLPS3ScannerPipeline.java file?

Collaborator:

I think let's log them all in the main class after the flatten.
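Logging all errors in the main class after a flatten, as suggested, could look like this sketch (the variable names `readErrors` and `dlpErrors` are illustrative, standing in for the error-tag outputs of the two transforms):

```java
// Merge the error outputs from each transform into one PCollection and
// handle them in a single place (log them, or write them to a sink).
PCollection<String> allErrors =
    PCollectionList.of(readErrors)
        .and(dlpErrors)
        .apply("FlattenErrors", Flatten.pCollections());

allErrors.apply(
    "LogErrors",
    ParDo.of(
        new DoFn<String, Void>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            LOG.error("Pipeline element failed: {}", c.element());
          }
        }));
```

Writing `allErrors` to a dead-letter table instead of (or in addition to) logging is the usual next step once the flatten is in place.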

}
}

@GetInitialRestriction
public OffsetRange getInitialRestriction(@Element KV<String, ReadableFile> file)
throws IOException {
long totalBytes = file.getValue().getMetadata().sizeBytes();

long totalBytes = file.getValue().getMetadata().sizeBytes();
long totalSplit = 0;
if (totalBytes < BATCH_SIZE) {
totalSplit = 2;
@@ -75,10 +81,12 @@ public OffsetRange getInitialRestriction(@Element KV<String, ReadableFile> file)
}

LOG.info(
"Total Bytes {} for File {} -Initial Restriction range from 1 to: {}",
"File Read Transform:ReadFile: Total Bytes {} for File {} -Initial Restriction range from 1 to: {}. {} chunk/(s) created of size {} bytes. ",
totalBytes,
file.getKey(),
totalSplit);
totalSplit,
totalSplit-1,
BATCH_SIZE);
return new OffsetRange(1, totalSplit);
}

@@ -97,14 +105,10 @@ public OffsetRangeTracker newTracker(@Restriction OffsetRange range) {
return new OffsetRangeTracker(new OffsetRange(range.getFrom(), range.getTo()));
}

private static SeekableByteChannel getReader(ReadableFile eventFile) {
private static SeekableByteChannel getReader(ReadableFile eventFile) throws IOException, FileNotFoundException {
SeekableByteChannel channel = null;
try {
channel = eventFile.openSeekable();
} catch (IOException e) {
LOG.error("Failed to Open File {}", e.getMessage());
throw new RuntimeException(e);
}
LOG.info("File Read Transform:ReadFile: event File Channel {}",eventFile.getMetadata().resourceId().getFilename());
channel = eventFile.openSeekable();
return channel;
}
}
@@ -19,31 +19,50 @@
import com.google.swarm.tokenization.common.CSVFileReaderTransform.Builder;
import org.apache.beam.sdk.extensions.gcp.util.gcsfs.GcsPath;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.fs.EmptyMatchTreatment;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PBegin;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Instant;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.MatchResult;
import org.apache.beam.sdk.io.fs.MoveOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import com.google.common.collect.ImmutableList;

@AutoValue
public abstract class FileReaderTransform
extends PTransform<PBegin, PCollection<KV<String, String>>> {
public abstract class FileReaderTransform extends PTransform<PBegin, PCollection<KV<String, String>>> {

public static final Logger LOG = LoggerFactory.getLogger(FileReaderTransform.class);

public abstract String subscriber();

@AutoValue.Builder
public abstract static class Builder {
public abstract Builder setSubscriber(String subscriber);
public abstract Integer batchSize();

public abstract FileReaderTransform build();
}
@AutoValue.Builder
public abstract static class Builder {
public abstract Builder setSubscriber(String subscriber);

public abstract Builder setBatchSize(Integer batchSize);

public abstract FileReaderTransform build();
}

public static Builder newBuilder() {
return new AutoValue_FileReaderTransform.Builder();
@@ -57,27 +76,79 @@ public PCollection<KV<String, String>> expand(PBegin input) {
"ReadFileMetadata",
PubsubIO.readMessagesWithAttributes().fromSubscription(subscriber()))
.apply("ConvertToGCSUri", ParDo.of(new MapPubSubMessage()))
.apply("FindFile", FileIO.matchAll())
.apply("FindFile", FileIO.matchAll().withEmptyMatchTreatment(EmptyMatchTreatment.ALLOW))
.apply(FileIO.readMatches())
.apply("AddFileNameAsKey", ParDo.of(new FileSourceDoFn()))
.apply("ReadFile", ParDo.of(new FileReaderSplitDoFn()));
.apply("ReadFile", ParDo.of(new FileReaderSplitDoFn(batchSize())));
}

public class MapPubSubMessage extends DoFn<PubsubMessage, String> {
private static final String VALID_FILE_PATTERNS = "^(.(?!.*\\.ctl$|.*\\.dlp$|.*\\.xml$|.*\\.json$|.*parallel_composite_uploads.*$|.*\\.schema$|.*\\.temp$))*$";

private final Counter numberOfFilesReceived =
Metrics.counter(FileReaderTransform.MapPubSubMessage.class, "NumberOfFilesReceived");

@ProcessElement
public void processElement(ProcessContext c) {

LOG.info("File Read Transform:ConvertToGCSUri: Located File's Metadata : "+c.element().getAttributeMap());
numberOfFilesReceived.inc(1L);

String bucket = c.element().getAttribute("bucketId");
String object = c.element().getAttribute("objectId");
String eventType = c.element().getAttribute("eventType");
String file_ts_string = c.element().getAttribute("eventTime");
GcsPath uri = GcsPath.fromComponents(bucket, object);

if (eventType.equalsIgnoreCase(Util.ALLOWED_NOTIFICATION_EVENT_TYPE)) {
LOG.info("File Name {}", uri.toString());
c.output(uri.toString());
} else {
LOG.info("Event Type Not Supported {}", eventType);
}
String file_name = uri.toString();
String prefix;

//Match filenames having extensions
Matcher m1 = Pattern.compile("^gs://([^/]+)/(.*)\\.(.*)$").matcher(file_name);

if (m1.find()) {
prefix = m1.group(2);
} else {//No extension
prefix = object;
}

ImmutableList.Builder<ResourceId> sourceFiles = ImmutableList.builder();
AtomicBoolean should_scan = new AtomicBoolean(true);

if (!file_name.matches(VALID_FILE_PATTERNS)) {
LOG.warn("File Read Transform:ConvertToGCSUri: Unsupported File Format. Skipping: {}", file_name);
should_scan.set(false);
} else if (!eventType.equalsIgnoreCase(Util.ALLOWED_NOTIFICATION_EVENT_TYPE)) {
LOG.warn("File Read Transform:ConvertToGCSUri: Event Type Not Supported: {}. Skipping: {}", eventType,file_name);
should_scan.set(false);
} else {
try {
MatchResult listResult = FileSystems.match("gs://" + bucket + "/" + prefix + ".*.dlp", EmptyMatchTreatment.ALLOW);
listResult.metadata().forEach(metadata -> {
ResourceId resourceId = metadata.resourceId();
Instant file_ts = Instant.parse(file_ts_string);
Instant tf_ts = new Instant(metadata.lastModifiedMillis());
LOG.warn(file_ts.toString());
LOG.warn(tf_ts.toString());
Collaborator (@santhh, May 22, 2020):

This should be one statement with a meaningful message; as is, it is not helpful. I feel you have too many logs here. Let's try to summarize and output only what will be useful.

Author:

Yes, sure. Some parts of the code were added by Robert L. I will look into those parts and take out the unnecessary logs.

if (resourceId.toString().equals("gs://" + bucket + "/" + prefix + ".rdct.dlp") && file_ts.isBefore(tf_ts)) {
LOG.warn("File Read Transform:ConvertToGCSUri: File has already been redacted. Skipping: {}", file_name);
should_scan.set(false);
} else {
LOG.warn("File Read Transform:ConvertToGCSUri: Deleting old touchfile: {}", resourceId.toString());
sourceFiles.add(resourceId);
}
});
FileSystems.delete(sourceFiles.build(), MoveOptions.StandardMoveOptions.IGNORE_MISSING_FILES);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}

if (should_scan.get()) {
LOG.info("File Read Transform:ConvertToGCSUri: Valid File Located: {}", file_name);
c.output(file_name);
}
}
Collaborator (@santhh, May 22, 2020):

I feel like this logic can be simplified. (Optional)

}
}
}
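The prefix extraction in `MapPubSubMessage` above hinges on greedy regex groups. It can be sanity-checked in isolation; `prefixOf` below is a hypothetical helper that mirrors the PR's pattern, not part of the change:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PrefixCheck {
  // Mirrors the PR's pattern: group(2) is the object path minus its
  // extension; names without a dot fall back to the raw object name.
  static String prefixOf(String fileName, String fallback) {
    Matcher m = Pattern.compile("^gs://([^/]+)/(.*)\\.(.*)$").matcher(fileName);
    return m.find() ? m.group(2) : fallback;
  }

  public static void main(String[] args) {
    // Greedy first group keeps everything up to the last dot.
    System.out.println(prefixOf("gs://bucket/dir/data.csv", "dir/data.csv"));
    System.out.println(prefixOf("gs://bucket/noext", "noext"));
  }
}
```

Because both `(.*)` groups are greedy, a multi-dot name like `a.b.c` yields prefix `a.b`, which is what the `.rdct.dlp` touchfile match above depends on.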
@@ -16,6 +16,8 @@
package com.google.swarm.tokenization.common;

import org.apache.beam.sdk.io.FileIO.ReadableFile;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Instant;
@@ -24,18 +26,18 @@

public class FileSourceDoFn extends DoFn<ReadableFile, KV<String, ReadableFile>> {
public static final Logger LOG = LoggerFactory.getLogger(FileSourceDoFn.class);
private static final String FILE_PATTERN = "([^\\s]+(\\.(?i)(dat))$)";

private final Counter numberOfFilesPassedValidation =
Metrics.counter(FileSourceDoFn.class, "NumberOfFilesPassedValidation");

@ProcessElement
public void processElement(ProcessContext c) {

ReadableFile file = c.element();
String fileName = file.getMetadata().resourceId().toString();
if (fileName.matches(FILE_PATTERN)) {
String key = String.format("%s_%s", fileName, Instant.now().getMillis());
c.output(KV.of(key, file));
} else {
LOG.error("Extension Not Supported");
}
String key = String.format("%s|%s", fileName, Instant.now().getMillis());
LOG.info("File Read Transform:AddFileNameAsKey: {} is as added as a key for the file {}. ",key,fileName);
numberOfFilesPassedValidation.inc(1L);
c.output(KV.of(key, file));
}
}
@@ -39,6 +39,7 @@
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.schemas.Schema.FieldType;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.Row;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.commons.csv.CSVFormat;