2 changes: 1 addition & 1 deletion .circleci/config.yml
Original file line number Diff line number Diff line change
@@ -11,7 +11,7 @@ jobs:
- run:
name: Run build
command: |
mvn clean install -DskipTests
mvn clean install -DskipTests -DCLOUD_STORE_GROUP_ID=$CLOUD_STORE_GROUP_ID -DCLOUD_STORE_ARTIFACT_ID=$CLOUD_STORE_ARTIFACT_ID -DCLOUD_STORE_VERSION=$CLOUD_STORE_VERSION
- save_cache:
paths:
- ~/.m2
2 changes: 1 addition & 1 deletion .github/pull_request_template.md
@@ -17,7 +17,7 @@ Please describe the tests that you ran to verify your changes in the below check
- [ ] Ran Test B

**Test Configuration**:
* Software versions: Java 11, scala-2.11, play-2.7.2
* Software versions: Java 11, scala-2.12, play-2.7.2
* Hardware versions: 2 CPU/ 4GB RAM

### Checklist:
2 changes: 1 addition & 1 deletion .github/pull_request_template.md.yaml
@@ -17,7 +17,7 @@ Please describe the tests that you ran to verify your changes in the below check
- [ ] Ran Test B

**Test Configuration**:
* Software versions: Java 11, scala-2.11, play-2.7.2
* Software versions: Java 11, scala-2.12, play-2.7.2
* Hardware versions:

### Checklist:
99 changes: 99 additions & 0 deletions .github/workflows/content-pr-check.yml
@@ -0,0 +1,99 @@
name: Pull Request Checks

on:
pull_request:
branches:
- '*'

jobs:
test-and-quality:
runs-on: ubuntu-latest
env:
CLOUD_STORE_GROUP_ID: ${{ vars.CLOUD_STORE_GROUP_ID }}
CLOUD_STORE_ARTIFACT_ID: ${{ vars.CLOUD_STORE_ARTIFACT_ID }}
CLOUD_STORE_VERSION: ${{ vars.CLOUD_STORE_VERSION }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0 # Important for SonarQube to get full history

- name: Set up JDK 11
uses: actions/setup-java@v3
with:
java-version: '11'
distribution: 'temurin'
cache: 'maven'

- name: Cache Maven packages
uses: actions/cache@v3
with:
path: ~/.m2/repository
key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ runner.os }}-maven-

- name: Build and Run Tests
run: |
mvn clean install -DskipTests \
-DCLOUD_STORE_GROUP_ID=${CLOUD_STORE_GROUP_ID} \
-DCLOUD_STORE_ARTIFACT_ID=${CLOUD_STORE_ARTIFACT_ID} \
-DCLOUD_STORE_VERSION=${CLOUD_STORE_VERSION}
cd content-api/content-service/
mvn clean test org.jacoco:jacoco-maven-plugin:0.8.8:prepare-agent test org.jacoco:jacoco-maven-plugin:0.8.8:report \
-DCLOUD_STORE_GROUP_ID=${CLOUD_STORE_GROUP_ID} \
-DCLOUD_STORE_ARTIFACT_ID=${CLOUD_STORE_ARTIFACT_ID} \
-DCLOUD_STORE_VERSION=${CLOUD_STORE_VERSION}

- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v4
with:
name: test-results
path: 'content-api/content-service/target/surefire-reports/*.xml'

- name: Publish Test Results
if: always()
uses: dorny/test-reporter@v1
with:
name: Test Results
path: content-api/content-service/target/surefire-reports/*.xml
reporter: java-junit
fail-on-error: true

- name: Set up JDK 17
uses: actions/setup-java@v2
with:
java-version: '17'
distribution: 'temurin'

- name: SonarCloud Analysis
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
working-directory: content-api/content-service
run: |
mvn sonar:sonar \
-DCLOUD_STORE_GROUP_ID=${CLOUD_STORE_GROUP_ID} \
-DCLOUD_STORE_ARTIFACT_ID=${CLOUD_STORE_ARTIFACT_ID} \
-DCLOUD_STORE_VERSION=${CLOUD_STORE_VERSION} \
-Dsonar.projectKey=vinodbhorge \
-Dsonar.organization=vinodbhorge \
-Dsonar.host.url=https://sonarcloud.io \
-Dsonar.coverage.jacoco.xmlReportPaths=content-api/content-service/target/site/jacoco/jacoco.xml \
-Dsonar.token=${SONAR_TOKEN}


- name: Comment PR with SonarQube Results
uses: actions/github-script@v6
if: github.event_name == 'pull_request' && always()
with:
script: |
const sonarUrl = `https://sonarcloud.io/dashboard?id=${process.env.GITHUB_REPOSITORY.replace('/', '_')}`;
const message = `### Quality Gate Results
Check the detailed SonarQube analysis at: ${sonarUrl}`;
github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: message
});
152 changes: 152 additions & 0 deletions KNOWLG-SETUP.md
@@ -0,0 +1,152 @@

Below are the steps to set up the Sunbird Knowlg microservices, DBs with seed data, and Jobs. It uses a local Kubernetes cluster to deploy the required services.

### Prerequisites:
* Java 11
* Maven
* Docker
* Minikube - implements a local Kubernetes cluster on macOS, Linux, and Windows.
* kubectl - the Kubernetes command-line tool

### Prepare folders for database data and logs

```shell
mkdir -p ~/sunbird-dbs/neo4j ~/sunbird-dbs/cassandra ~/sunbird-dbs/redis ~/sunbird-dbs/es ~/sunbird-dbs/kafka
export sunbird_dbs_path=~/sunbird-dbs
```
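As an optional sanity check, the folder layout can be verified with a small helper (a sketch; `check_db_dirs` is a hypothetical name, and it assumes `sunbird_dbs_path` is exported as above):

```shell
# Report which of the expected DB folders exist under $sunbird_dbs_path
check_db_dirs() {
  for d in neo4j cassandra redis es kafka; do
    if [ -d "$sunbird_dbs_path/$d" ]; then
      echo "ok: $d"
    else
      echo "missing: $d"
    fi
  done
}
# Usage: check_db_dirs
```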



### Docker Images of Knowlg MicroServices
Start Docker on your machine and build the Docker images of the below microservices using the shell script.
1. taxonomy-service
2. content-service
3. search-service

```shell
sh ./knowlg-docker-image.sh <TAG> # provide the TAG for the docker image.
```
**Note:** Please specify the TAG for the Docker images and update the image tag in the helm chart of the respective deployment.

Check the Docker Images
```shell
docker image ls -a
```
**Output:**
```shell
❯❯ docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
assessment-service R5.0.0 72a9cc1b2cc4 14 seconds ago 479MB
search-service R5.0.0 24b7d8947a4f 23 seconds ago 465MB
content-service R5.0.0 afcbc9c10fa3 33 seconds ago 556MB
taxonomy-service R5.0.0 a8a24a6241f2 47 seconds ago 480MB
```
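To script this check, you can assert that every expected image:tag pair exists locally (a sketch; `check_images` is a hypothetical helper, and the image names follow the build output above):

```shell
# Exit non-zero if any expected Knowlg image is missing from the local Docker daemon
check_images() {
  TAG="$1"
  missing=0
  for img in taxonomy-service content-service search-service; do
    if docker image inspect "$img:$TAG" >/dev/null 2>&1; then
      echo "found: $img:$TAG"
    else
      echo "missing: $img:$TAG"
      missing=1
    fi
  done
  return $missing
}
# Usage: check_images R5.0.0
```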

### Kubernetes Cluster Setup
Use minikube to quickly set up a Kubernetes cluster on your local machine.

```shell
minikube start
```

### Load Docker Images to Minikube Cluster
```shell
minikube image load neo4j:3.3.0
minikube image load taxonomy-service:R5.0.0
minikube image load content-service:R5.0.0
minikube image load search-service:R5.0.0
```

### Create Namespace
Create the namespaces to deploy the API microservices, DBs and Jobs.
1. knowlg-api
2. knowlg-db
3. knowlg-job

```shell
kubectl create namespace knowlg-api
kubectl create namespace knowlg-db
kubectl create namespace knowlg-job
```
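`kubectl create namespace` fails if the namespace already exists, so a re-runnable variant can guard each create (a sketch; `ensure_namespaces` is a hypothetical helper):

```shell
# Create each namespace only if it does not already exist
ensure_namespaces() {
  for ns in "$@"; do
    kubectl get namespace "$ns" >/dev/null 2>&1 \
      || kubectl create namespace "$ns"
  done
}
# Usage: ensure_namespaces knowlg-api knowlg-db knowlg-job
```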

### Setup Databases
Run the below `helm` commands to set up the required databases within the Kubernetes cluster.
Knowlg requires the below DBs:
1. Neo4J
2. Cassandra
3. Elasticsearch
4. Kafka
5. Redis

```shell
cd kubernetes
helm install redis sunbird-dbs/redis -n knowlg-db

minikube mount <LOCAL_SOURCE_DIR>:/var/lib/neo4j/data # LOCAL_SOURCE_DIR is where the neo4j dump is extracted, e.g. /Users/abc/sunbird-dbs/neo4j/data
helm install neo4j sunbird-dbs/neo4j -n knowlg-db

minikube mount <LOCAL_SOURCE_DIR>:/mnt/backups # LOCAL_SOURCE_DIR is where the cassandra backup is extracted, e.g. /Users/abc/sunbird-dbs/cassandra/backups
helm install cassandra sunbird-dbs/cassandra -n knowlg-db

# exec into the cassandra pod and load the seed schema
kubectl exec -it <cassandra-pod-name> -n knowlg-db -- cqlsh
cqlsh> source '/mnt/backups/cassandra_backup/db_schema.cql';
```

**Note:**
- The `helm` charts for Kafka and Elasticsearch will be added soon.

### Define ConfigMap
We use the configmap to load the configuration for the microservices.

#### ConfigMap for Taxonomy-Service
Use the below commands to load the configmap of taxonomy-service.
1. `taxonomy-config` - holds the application configuration. Update the variables as per your context before loading.
2. `taxonomy-xml-config` - holds the logback configuration that handles the logs.

Update the configuration in `taxonomy/templates/taxonomy-service_application.conf` as per your context.

```shell
cd kubernetes
kubectl create configmap taxonomy-xml-config --from-file=taxonomy/taxonomy-service_logback.xml -n knowlg-api -o=yaml
kubectl create configmap taxonomy-config --from-file=taxonomy/taxonomy-service_application.conf -n knowlg-api -o=yaml
```
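`kubectl create configmap` fails when the configmap already exists, so reloading an edited config is easier with a client-side dry run piped to `kubectl apply` (a sketch; `reload_configmap` is a hypothetical helper):

```shell
# Render the configmap with a client-side dry run, then apply it idempotently
reload_configmap() {
  name="$1"; file="$2"; ns="$3"
  kubectl create configmap "$name" --from-file="$file" -n "$ns" \
    --dry-run=client -o yaml | kubectl apply -f -
}
# Usage: reload_configmap taxonomy-config taxonomy/taxonomy-service_application.conf knowlg-api
```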

### Run Taxonomy-Service
Use the `taxonomy` helm chart to run the taxonomy-service in local kubernetes cluster.

```shell
cd kubernetes
helm install taxonomy taxonomy -n knowlg-api
```
Use port forwarding to access the application running in the cluster from your local machine.

```shell
kubectl port-forward <pod-name> 9000:9000 -n knowlg-api
curl 'localhost:9000/health'
```
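Looking up the pod name by hand gets tedious; the lookup and forward can be combined (a sketch; `forward_service` is a hypothetical helper, and the `app=<name>` label selector is an assumption that may differ in your chart):

```shell
# Port-forward 9000 to the first pod matching a label selector in knowlg-api
forward_service() {
  selector="$1"
  pod=$(kubectl get pods -n knowlg-api -l "$selector" \
    -o jsonpath='{.items[0].metadata.name}')
  [ -n "$pod" ] || { echo "no pod found for $selector" >&2; return 1; }
  kubectl port-forward "$pod" 9000:9000 -n knowlg-api
}
# Usage: forward_service app=taxonomy
```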

### Define ConfigMap for Content-Service
Use the below commands to load the configmap of content-service.
1. `content-config` - holds the application configuration. Update the variables as per your context before loading.
2. `content-xml-config` - holds the logback configuration that handles the logs.

Update the configuration in `content/templates/content-service_application.conf` as per your context.

```shell
cd kubernetes
kubectl create configmap content-xml-config --from-file=content/content-service_logback.xml -n knowlg-api -o=yaml
kubectl create configmap content-config --from-file=content/content-service_application.conf -n knowlg-api -o=yaml
```

### Run Content-Service
Use the `content` helm chart to run the content-service in the local kubernetes cluster.

```shell
cd kubernetes
helm install content content -n knowlg-api
```
Use port forwarding to access the application running in the cluster from your local machine.

```shell
kubectl port-forward <pod-name> 9000:9000 -n knowlg-api
curl 'localhost:9000/health'
```
27 changes: 23 additions & 4 deletions README.md
@@ -2,13 +2,32 @@

Repository for Knowledge Platform - 2.0

## Knowledge-platform local setup
## Knowledge-platform local setup
This readme contains the instructions to set up and run the content-service on a local machine.

### Prerequisites:
* Java 11
* Docker, Docker Compose


## One step installation

1. Go to the root folder (knowledge-platform)
2. Run the "local-setup.sh" file
```shell
sh ./local-setup.sh
```

This will set up all the required Docker images and the local folders for DB mounting.
3. Follow the manual steps to run the content service. Refer: [Running Content Service](#running-content-service)



## Manual steps to install all the dependencies
Please follow the below manual steps if the [One step installation](#one-step-installation) failed.

### Prepare folders for database data and logs

@@ -114,7 +133,7 @@ services:
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:2181

kafka:
image: 'wurstmeister/kafka:2.11-1.0.1'
image: 'wurstmeister/kafka:2.12-1.0.1'
container_name: kafka
ports:
- "9092:9092"
@@ -147,7 +166,7 @@ kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --par

1. Go to the path: /knowledge-platform and run the below maven command to build the application.
```shell
mvn clean install -DskipTests
mvn clean install -DskipTests -DCLOUD_STORE_GROUP_ID=org.sunbird -DCLOUD_STORE_ARTIFACT_ID=cloud-store-sdk_2.12 -DCLOUD_STORE_VERSION=1.4.6
```
2. Go to the path: /knowledge-platform/content-api/content-service and run the below maven command to run the netty server.
```shell
@@ -184,4 +203,4 @@ mvn play2:run
3. Using the below command we can verify whether the databases(neo4j,redis & cassandra) connection is established or not. If all connections are good, health is shown as 'true' otherwise it will be 'false'.
```shell
curl http://localhost:9000/health
```
```
2 changes: 1 addition & 1 deletion assessment-api/assessment-actors/pom.xml
@@ -28,7 +28,7 @@
</dependency>
<dependency>
<groupId>org.sunbird</groupId>
<artifactId>graph-engine_2.11</artifactId>
<artifactId>graph-engine_2.12</artifactId>
<version>1.0-SNAPSHOT</version>
<type>jar</type>
</dependency>
@@ -13,7 +13,7 @@ import org.sunbird.graph.nodes.DataNode
import org.sunbird.graph.utils.NodeUtil
import org.sunbird.parseq.Task

import scala.collection.JavaConversions._
import scala.collection.convert.ImplicitConversions._
import scala.collection.JavaConverters.seqAsJavaListConverter
import scala.concurrent.{ExecutionContext, Future}

@@ -15,7 +15,7 @@ import org.sunbird.telemetry.util.LogTelemetryEventUtil
import org.sunbird.utils.RequestUtil

import scala.concurrent.{ExecutionContext, Future}
import scala.collection.JavaConversions._
import scala.collection.convert.ImplicitConversions._
import scala.collection.JavaConverters
import scala.collection.JavaConverters._

@@ -7,7 +7,7 @@ import org.sunbird.graph.OntologyEngineContext
import org.sunbird.graph.schema.DefinitionNode

import scala.concurrent.ExecutionContext
import scala.collection.JavaConversions._
import scala.collection.convert.ImplicitConversions._

object RequestUtil {

@@ -11,7 +11,7 @@ import org.sunbird.graph.utils.ScalaJsonUtils
import org.sunbird.graph.{GraphService, OntologyEngineContext}
import org.sunbird.kafka.client.KafkaClient

import scala.collection.JavaConversions._
import scala.collection.convert.ImplicitConversions._
import scala.collection.JavaConverters._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
@@ -15,7 +15,7 @@ import org.sunbird.managers.CopyManager
import org.sunbird.utils.{AssessmentConstants, BranchingUtil, JavaJsonUtils}

import java.util
import scala.collection.JavaConversions._
import scala.collection.convert.ImplicitConversions._
import scala.collection.JavaConverters._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future