
Commit d2ecffc

Minor corrections in vignettes
1 parent 6b07c47 commit d2ecffc

1 file changed: 5 additions, 5 deletions


vignettes/Streaming_pipelines_for_working_Apache_Spark_Structured_Streaming.Rmd

Lines changed: 5 additions & 5 deletions
@@ -51,9 +51,9 @@ knitr::opts_chunk$set(
 library(analysisPipelines)
 library(SparkR)
 
-## Define these variables as per the configuration of your machine. This is just an example.
+## Define these variables as per the configuration of your machine. The below example is just illustrative.
 
-sparkHome <- "/Users/naren/softwares/spark-2.3.1-bin-hadoop2.7/"
+sparkHome <- "/path/to/spark/directory/"
 sparkMaster <- "local[1]"
 sparkPackages <- c("org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1")
 # Set spark home variable if not present
@@ -81,10 +81,10 @@ This example illustrates usage of pipelines for a streaming application. In this
 Read streaming data from Kafka.
 
 ```{r}
-## Define these variables as per the configuration of your machine. This is just an example.
+## Define these variables as per the configuration of your machine. The below example is just illustrative.
 
-kafkaBootstrapServers <- "172.25.0.144:9092,172.25.0.98:9092,172.25.0.137:9092"
-consumerTopic <- "netlogo"
+kafkaBootstrapServers <- "192.168.0.256:9092,192.168.0.257:9092,192.168.0.258:9092"
+consumerTopic <- "topic1"
 streamObj <- read.stream(source = "kafka", kafka.bootstrap.servers = kafkaBootstrapServers, subscribe = consumerTopic, startingOffsets="earliest")
 printSchema(streamObj)
 ```
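The variables touched by this commit configure a local SparkR session before the vignette reads the Kafka stream. A minimal sketch of how they fit together, assuming a local Spark 2.3.1 installation; the Spark home path, master setting, broker address, and topic name are illustrative placeholders (mirroring the commit), not values the vignette mandates:

```r
library(SparkR)

## Illustrative configuration -- adjust to your machine
sparkHome     <- "/path/to/spark/directory/"
sparkMaster   <- "local[1]"
sparkPackages <- c("org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1")

## Set SPARK_HOME only if it is not already present in the environment
if (Sys.getenv("SPARK_HOME") == "") {
  Sys.setenv(SPARK_HOME = sparkHome)
}

## Start the SparkR session with the Kafka SQL connector on the classpath
sparkR.session(master = sparkMaster,
               sparkHome = sparkHome,
               sparkPackages = sparkPackages)

## Read the Kafka topic as a streaming SparkDataFrame and inspect its schema
kafkaBootstrapServers <- "192.168.0.256:9092"  # placeholder broker address
consumerTopic         <- "topic1"              # placeholder topic name
streamObj <- read.stream(source = "kafka",
                         kafka.bootstrap.servers = kafkaBootstrapServers,
                         subscribe = consumerTopic,
                         startingOffsets = "earliest")
printSchema(streamObj)
```

This is environment setup rather than a standalone program: it only runs against an actual Spark installation and a reachable Kafka broker.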
