Repository for Knowledge Search - 1.0
This readme file contains the instructions to set up and run the knowledge-search jobs on a local machine.
- Java 11
mkdir -p ~/sunbird-dbs/es ~/sunbird-dbs/kafka
export sunbird_dbs_path=~/sunbird-dbs
docker run --name sunbird_es -d -p 9200:9200 -p 9300:9300 \
-v $sunbird_dbs_path/es/data:/usr/share/elasticsearch/data \
-v $sunbird_dbs_path/es/logs:/usr/share/elasticsearch/logs \
-v $sunbird_dbs_path/es/backups:/opt/elasticsearch/backup \
-e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.8.22
--name - Name your container (avoids generic id)
-p - Specify container ports to expose
Using the -p option with ports 9200 and 9300 exposes both the Elasticsearch HTTP and transport ports. The HTTP port (9200) is what clients and curl use to talk to Elasticsearch, and the transport port (9300) is used for node-to-node communication within the cluster.
-d - This detaches the container to run in the background, meaning we can access the container separately and see into all of its processes.
-v - The next several lines start with the -v option. These lines define volumes we want to bind in our local directory structure so we can access certain files locally.
-e - Set config as environment variables for the Elasticsearch container (here, single-node discovery).
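Once the container is up, a quick way to confirm Elasticsearch is reachable is to hit its HTTP port; this should return the cluster name and version info (adjust the host/port if you changed the mapping).
curl http://localhost:9200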
- Kafka stores information about the cluster and consumers in ZooKeeper, and ZooKeeper acts as a coordinator between them. We need to run two services (ZooKeeper and Kafka). Prepare your docker-compose.yml file using the following reference.
version: '3'
services:
  zookeeper:
    image: 'wurstmeister/zookeeper:latest'
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:2181
  kafka:
    image: 'wurstmeister/kafka:2.11-1.0.1'
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
- Go to the path where docker-compose.yml is placed and run the below command to create and run the containers (ZooKeeper and Kafka).
docker-compose -f docker-compose.yml up -d
- To open a shell in the Kafka docker container, run the below command.
docker exec -it kafka sh
Go to the path /opt/kafka/bin, which contains the executable files for performing operations (creating topics, running producers and consumers, etc.). Example:
kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test_topic
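To confirm the topic exists and that messages flow end to end, you can list the topics and run a quick produce/consume round trip from the same container shell (a minimal check; test_topic here is just the sample topic created above).
kafka-topics.sh --list --zookeeper zookeeper:2181
kafka-console-producer.sh --broker-list kafka:9092 --topic test_topic
kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic test_topic --from-beginning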
- Navigate to the downloaded repository folder (knowlg-search/search-job/) and run the below command.
mvn clean install -DskipTests
- Open the project in IntelliJ.
- Navigate to the target job folder (../knowlg-search/search-job/search-indexer) and edit the 'pom.xml' to add the below dependency.
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_${scala.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
- Comment "provided" scope from flink-streaming-scala_${scala.version} artifact dependency in the job's 'pom.xml'.
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_${scala.version}</artifactId>
    <version>${flink.version}</version>
    <!-- <scope>provided</scope> -->
</dependency>
- Comment out the default Flink StreamExecutionEnvironment in the job's StreamTask file (SearchIndexerStreamTask.scala) and add code to create a local StreamExecutionEnvironment.
// implicit val env: StreamExecutionEnvironment = FlinkUtil.getExecutionContext(config)
implicit val env: StreamExecutionEnvironment = StreamExecutionEnvironment.createLocalEnvironment()
- Save the cloud storage related environment variables (see the cloud storage variables listed below) in the StreamTask run configuration's environment variables.
- Start all databases, ZooKeeper and Kafka containers in Docker.
- Run the StreamTask (Normal or Debug)
- Open a terminal, connect to the Kafka docker container and produce events to the target job topic.
docker exec -it kafka_container_id sh
kafka-console-producer.sh --broker-list kafka:9092 --topic sunbirddev.learning.graph.events
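To verify that the events you type into the producer are landing on the topic, you can attach a console consumer from another terminal connected to the same container (a quick sanity check using the same topic name).
kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic sunbirddev.learning.graph.events --from-beginning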
- Download Apache Flink.
wget https://dlcdn.apache.org/flink/flink-1.12.7/flink-1.12.7-bin-scala_2.12.tgz
- Extract the downloaded archive.
tar xzf flink-1.12.7-bin-scala_2.12.tgz
- Change to the extracted directory and start the Flink cluster.
cd flink-1.12.7
./bin/start-cluster.sh
- Open the web UI to check the JobManager and TaskManager.
localhost:8081
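The same status information is exposed by Flink's REST API, so the cluster can also be checked from the command line (the /overview endpoint is part of the standard Flink REST API).
curl http://localhost:8081/overview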
Set up the cloud storage specific variables as environment variables.
export cloud_storage_type= #values can be 'aws' or 'azure'
For AWS Cloud Storage connectivity:
export aws_storage_key=
export aws_storage_secret=
export aws_storage_container=
For Azure Cloud Storage connectivity:
export azure_storage_key=
export azure_storage_secret=
export azure_storage_container=
export content_youtube_apikey= #key to fetch metadata of youtube videos
- Navigate to the required job folder (Example: ../knowlg-search/search-job/search-indexer) and run the below maven command to build the application.
mvn clean install -DskipTests
- Start all databases, ZooKeeper and Kafka containers in Docker.
- Start Flink (if not already started) and submit the job to Flink. Example:
cd flink-1.12.7
./bin/start-cluster.sh
./bin/flink run -m localhost:8081 /user/test/workspace/knowlg-search/search-job/search-indexer/target/search-indexer-1.0.0.jar
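After submitting, you can confirm the job is running either from the web UI or with the Flink CLI (the job id and name will depend on your build).
./bin/flink list -m localhost:8081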
- Open a terminal, connect to the Kafka docker container and produce events to the target job topic.
docker exec -it kafka_container_id sh
kafka-console-producer.sh --broker-list kafka:9092 --topic sunbirddev.learning.graph.events
This readme file contains the instructions to set up and run the search-service on a local machine.
- Java 11
- Docker, Docker Compose
- Go to the root folder (knowledge-search/search-api)
- Run the "local-setup.sh" file:
sh ./local-setup.sh
This will install all the required Docker images and create the local folders for DB mounting.
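To verify the one-step setup, you can list the containers the script started (a quick sanity check; the exact container names depend on the script).
docker ps -a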
Please follow the manual steps below if the one-step installation fails.
mkdir -p ~/sunbird-dbs/neo4j ~/sunbird-dbs/cassandra ~/sunbird-dbs/redis ~/sunbird-dbs/es ~/sunbird-dbs/kafka
export sunbird_dbs_path=~/sunbird-dbs
- Kafka stores information about the cluster and consumers in ZooKeeper, and ZooKeeper acts as a coordinator between them. We need to run two services (ZooKeeper and Kafka). Prepare your docker-compose.yml file using the following reference.
version: '3'
services:
  zookeeper:
    image: 'wurstmeister/zookeeper:latest'
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:2181
  kafka:
    image: 'wurstmeister/kafka:2.12-1.0.1'
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
- Go to the path where docker-compose.yml is placed and run the below command to create and run the containers (ZooKeeper and Kafka).
docker-compose -f docker-compose.yml up -d
- To open a shell in the Kafka docker container, run the below command.
docker exec -it kafka sh
Go to the path /opt/kafka/bin, which contains the executable files for performing operations (creating topics, running producers and consumers, etc.). Example:
kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test_topic
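To confirm the topic was created, list the topics from the same container shell.
kafka-topics.sh --list --zookeeper zookeeper:2181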
- Go to the path: /knowledge-search/search-api and run the below maven command to build the application.
mvn clean install -DskipTests
- Go to the path: /knowlg-search/search-api/knowlg-search and run the below sbt commands to start the Netty server.
sbt clean compile
sbt searchService/run
- Using the below command, we can verify whether the database (Elasticsearch) connection is established. If all connections are good, health is shown as 'true'; otherwise it will be 'false'.
curl http://localhost:9000/health
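Once the health check passes, a search request can be issued against the service. The exact route and request body depend on the service's routes configuration; the '/v1/search' path and the filter shown below are assumptions given only as an illustration.
curl -X POST http://localhost:9000/v1/search \
-H "Content-Type: application/json" \
-d '{"request": {"filters": {"objectType": "Content"}, "limit": 1}}'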