Kroomsa
A search engine for the curious. It uses a search algorithm designed to engage users by exposing them to relevant yet interesting content during their session.
Description
The search algorithm implemented on your website greatly influences visitor engagement. A decent implementation can significantly reduce dependency on standard search engines like Google for every query, thus increasing engagement. Traditional methods look at terms or phrases in your query to find relevant content based on syntactic matching. Kroomsa uses semantic matching to find content relevant to your query. There is a blog post expanding upon Kroomsa's motivation and its technical aspects.
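To make the contrast with syntactic matching concrete, here is a small, illustrative sketch of semantic search with the Universal Sentence Encoder and cosine similarity; the model handle, example corpus, and scoring below are assumptions for demonstration rather than Kroomsa's actual pipeline.

```python
# Illustrative semantic matching: embed a query and a tiny corpus with the
# Universal Sentence Encoder, then rank the corpus by cosine similarity.
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

corpus = [
    "Why is the sky blue?",
    "How do airplanes stay in the air?",
    "What makes the ocean salty?",
]
query = ["Why does the sky look blue during the day?"]

corpus_vecs = embed(corpus).numpy()
query_vec = embed(query).numpy()

# Semantically close sentences score high even when they share few exact terms.
scores = (corpus_vecs @ query_vec.T).ravel() / (
    np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(corpus[int(np.argmax(scores))])  # -> "Why is the sky blue?"
```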
Getting Started
Prerequisites
- Python 3.6.5
- Run the project directory setup in the root directory:
  `python3 ./setup.py`
- TensorFlow's Universal Sentence Encoder 4
  - The model is available at this link. Download the model and extract the zip file into the `/vectorizer` directory.
- MongoDB is used as the database to collate Reddit's submissions. MongoDB can be installed by following this link.
- To fetch comments of the Reddit submissions, PRAW is used. Scraping requires credentials that authorize the script to access the Reddit API. This is done by creating an app associated with a Reddit account, following this link. For reference, you can follow this tutorial written by Shantnu Tiwari.
  - Register multiple instances and retrieve their credentials, then add them to the `bot_codes` parameter in `/config` in the following format: `"client_id client_secret user_agent"`, as list elements separated by `,` (see the sketch after this list).
- Docker-compose (For dockerized deployment only): Install the latest version following this link.
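Each `bot_codes` entry corresponds to one authorized Reddit API client. Below is a minimal sketch, under the assumption that an entry is the space-separated string described above, of how it maps onto a PRAW instance; the credential values are placeholders.

```python
# Minimal sketch: turn one bot_codes entry ("client_id client_secret user_agent")
# into a read-only PRAW client. Credentials here are placeholders.
import praw

bot_code = "my_client_id my_client_secret my_user_agent"
client_id, client_secret, user_agent = bot_code.split(" ", 2)

reddit = praw.Reddit(
    client_id=client_id,
    client_secret=client_secret,
    user_agent=user_agent,
)
print(reddit.read_only)  # True: only public data is requested
```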
Installing
- Create a Python environment and install the required packages for preprocessing using:
  `python3 -m pip install -r ./preprocess_requirements.txt`
- Collating a dataset of Reddit submissions
  - Scraping posts
    - Pushshift's API is used to fetch Reddit submissions. In the root directory, run the following command:
      `python3 ./pre_processing/scraping/questions/scrape_questions.py`
      It launches a script that scrapes the subreddits sequentially back to their inception and stores the submissions as JSON objects in `/pre_processing/scraping/questions/scraped_questions`. It then partitions the scraped submissions into as many equal parts as there are registered bot instances (a sketch of the underlying API call follows this item).
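As referenced above, here is a hedged sketch of the kind of Pushshift request the scraping step relies on; the subreddit, parameters, and pagination shown are illustrative and may differ from what `scrape_questions.py` actually does.

```python
# Illustrative Pushshift query for Reddit submissions. Walking a subreddit back
# to its inception would repeat this call, updating `before` with the oldest
# created_utc seen so far.
import requests

URL = "https://api.pushshift.io/reddit/search/submission"
params = {
    "subreddit": "askscience",  # placeholder subreddit
    "size": 100,
}

batch = requests.get(URL, params=params, timeout=30).json().get("data", [])
for post in batch:
    print(post.get("id"), post.get("title"))
```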
  - Scraping comments
    - After populating the configuration with `bot_codes`, we can begin scraping the comments using the partitioned submission files created while scraping submissions. Run:
      `python3 ./pre_processing/scraping/comments/scrape_comments.py`
      Multiple processes are spawned that fetch comment streams simultaneously (see the sketch after this item).
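A sketch of how comment trees can be fetched in parallel with PRAW, one worker per registered bot; the credentials, submission IDs, and partitioning below are placeholders rather than the script's actual implementation.

```python
# Illustrative parallel comment scraping: each worker gets one bot's credentials
# and one partition of submission IDs, and returns the comment bodies it fetched.
from multiprocessing import Pool

import praw


def fetch_comments(args):
    (client_id, client_secret, user_agent), submission_ids = args
    reddit = praw.Reddit(client_id=client_id, client_secret=client_secret,
                         user_agent=user_agent)
    results = {}
    for sid in submission_ids:
        submission = reddit.submission(id=sid)
        submission.comments.replace_more(limit=0)  # drop "load more comments" stubs
        results[sid] = [comment.body for comment in submission.comments.list()]
    return results


if __name__ == "__main__":
    jobs = [  # placeholder credentials and partitioned submission IDs
        (("id_1", "secret_1", "agent_1"), ["abc123", "def456"]),
        (("id_2", "secret_2", "agent_2"), ["ghi789"]),
    ]
    with Pool(len(jobs)) as pool:
        for partition in pool.map(fetch_comments, jobs):
            print(partition)
```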
  - Insertion
    - To insert the submissions and associated comments, run:
      `python3 ./pre_processing/db_insertion/insertion.py`
      It inserts the posts and associated comments into MongoDB (see the pymongo sketch after this block).
    - To clean the comments and tag the posts that aren't public for any reason, run:
      `python3 ./post_processing/post_processing.py`
      Apart from cleaning, it also adds emojis to each submission object (this behavior is configurable).
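A minimal sketch of inserting submissions with their comments into MongoDB using pymongo; the document fields are illustrative, and the database and collection names simply mirror the defaults mentioned in the database dump step below (`red` / `questions`).

```python
# Illustrative insertion of scraped submissions into MongoDB. The real
# insertion.py may use a different schema and connection settings.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["red"]["questions"]

submissions = [
    {"_id": "abc123", "title": "Why is the sky blue?", "comments": ["Rayleigh scattering."]},
    {"_id": "def456", "title": "What makes the ocean salty?", "comments": []},
]
collection.insert_many(submissions)
print(collection.count_documents({}))
```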
- Creating a FAISS Index
  - To create a FAISS index, run the following command:
    `python3 ./index/build_index.py`
    By default, it creates an exhaustive `IDMap,Flat` index, but this is configurable through `/config` (see the sketch after this item).
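A sketch of building the default exhaustive `IDMap,Flat` index with FAISS; the dimensionality (512, matching Universal Sentence Encoder 4) and the random vectors below stand in for the real submission embeddings.

```python
# Illustrative IDMap,Flat index: exhaustive search over all vectors, with
# explicit IDs so results can be mapped back to database documents.
import faiss
import numpy as np

dim = 512
vectors = np.random.rand(1000, dim).astype("float32")  # stand-in for USE embeddings
ids = np.arange(1000).astype("int64")                  # e.g. keys back into MongoDB

index = faiss.index_factory(dim, "IDMap,Flat")
index.add_with_ids(vectors, ids)

query = np.random.rand(1, dim).astype("float32")
distances, neighbours = index.search(query, 5)
print(neighbours)  # IDs of the 5 nearest submissions
```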
- Database dump (For dockerized deployment)
  - For dockerized deployment, a database dump is required in `/mongo_dump`. Use the following command at the root directory to create a database dump:
    `mongodump --db database_name(default: red) --collection collection_name(default: questions) -o ./mongo_dump`
Execution
- Local deployment (Using Gunicorn)
  - Create a Python environment and install the required packages using the following command:
    `python3 -m pip install -r ./inference_requirements.txt`
  - A local instance of Kroomsa can be deployed using the following command:
    `gunicorn -c ./gunicorn_config.py server:app`
    (An illustrative Gunicorn config sketch follows this item.)
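For reference, Gunicorn config files are plain Python modules. The repository already ships `./gunicorn_config.py`, so the values below are only an illustration of the format, not the project's actual settings.

```python
# Illustrative Gunicorn configuration; the project's own ./gunicorn_config.py may differ.
bind = "0.0.0.0:8000"  # address and port the WSGI app (server:app) is served on
workers = 2            # number of worker processes
timeout = 120          # seconds before an unresponsive worker is restarted
```

Gunicorn picks these settings up via the `-c ./gunicorn_config.py` flag shown above.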
- Dockerized demo
  - Set `demo_mode` to `True` in `/config`.
  - Build images: `docker-compose build`
  - Deploy: `docker-compose up`
Authors
License
This project is licensed under the Apache License Version 2.0