Creating an End-to-End ML Application w/ PyTorch
Overview
- Why do we need to build end-to-end applications?
- By building end-to-end applications, you ensure that your code is organized, tested, interactive and easy to scale up or assimilate into larger pipelines.
- If you're someone in industry and are looking to showcase your work to future employers, it's no longer enough to just have code in Jupyter notebooks. ML is just another tool and you need to show that you can use it in conjunction with all the other software engineering disciplines (frontend, backend, devops, etc.). The perfect way to do this is to create end-to-end applications that utilize all these different facets.
- What are the components of an end-to-end ML application?
- Basic experimentation in Jupyter notebooks.
- We aren't going to completely dismiss notebooks because they're still a great tool for iterating quickly. Check out the notebook for our task here → notebook
- Moving our code from notebooks to organized scripts.
- Once we've done some basic development (on downsized datasets), we want to move our code to scripts to reduce technical debt. We'll create functions and classes for the different parts of the pipeline (data, model, train, etc.) so we can easily make them robust to different circumstances.
- We used our own boilerplate to organize our code before moving any of the code from our notebook.
- Proper logging and testing for your code.
- Log key events (preprocessing, training performance, etc.) using the built-in logging library. Also log new inputs and outputs during prediction so you can catch issues early (a small logging sketch follows this overview).
- You also need to properly test your code. You will add and update your functions and their tests over time but it's important to at least start testing crucial pieces of your code from the beginning. These typically include sanity checks with preprocessing and modeling functions to catch issues early. There are many options for testing Python code but we'll use pytest here.
- Experiment tracking.
- We use Weights and Biases (WandB), where you can easily track all the metrics of your experiment, config files, performance details, etc. for free. Check out the Dashboards page for an overview and tutorials.
- When you're developing your models, start with simple approaches first and then slowly add complexity. You should clearly document (README, articles and WandB reports) and save your progression from simple to more complex models so your audience can see the improvements. The ability to write well and document your thinking process is a core skill to have in research and industry.
- WandB also has free tools for hyperparameter tuning (Sweeps) and for data/pipeline/model management (Artifacts).
- Robust prediction pipelines.
- When we actually deploy an ML application for the real world to use, we don't just look at the softmax scores.
- Before even doing a forward pass, we need to analyze the input and determine whether it's within the manifold of the training data. If it's something new (or adversarial), we shouldn't send it down the ML pipeline because the results cannot be trusted.
- During processes like preprocessing, we need to constantly observe what the model received. For example, if the input has a bunch of unknown tokens, then we need to flag the prediction because it may not be reliable.
- After the forward pass, we need to run checks on the model's output as well. If the predicted class has mediocre test set performance, then we need the class probability to be above some critical threshold. Similarly, we can relax the threshold for classes where we do exceptionally well (see the fail-safe sketch after this overview).
- Wrap your model as an API.
- Now we start to modularize larger operations (single/batch predict, get experiment details, etc.) so others can use our application without having to execute granular code. There are many options for this, like Flask, Django and FastAPI, but we'll use FastAPI for its ease of use and performance.
- We can also use a Dockerfile to create a Docker image that runs our API. This is a great way to package our entire application to scale it (horizontally and vertically) depending on requirements and usage.
- Create an interactive frontend for your application.
- The best way to showcase your work is to let others easily play with it. We'll be using Streamlit to very quickly create an interactive medium for our application and use Heroku to serve it (1000 hours of usage per month).
- This is also a great skill to have because in industry you'll often need to build interactive demos for key stakeholders, and they're great to include in documentation as well.
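Below are two minimal sketches of ideas from the overview. First, logging key events with Python's built-in logging library; the logger name and messages are illustrative, and the project's real configuration is loaded from logging.json.

import logging

# Illustrative setup; the project's actual logger configuration lives in logging.json.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("text_classification")

logger.info("Preprocessing complete.")  # key pipeline event
logger.info("Starting training run.")   # key pipeline event
logger.warning("Input contains many unknown tokens; prediction may be unreliable.")

Second, the class-specific threshold fail-safe; the function, argument names and class labels below are illustrative, not the project's exact API.

import numpy as np

def apply_class_thresholds(probabilities, classes, class_thresholds, default_threshold=0.5):
    """Return the predicted class, or None to flag the prediction when the
    top probability falls below the threshold we trust for that class."""
    index = int(np.argmax(probabilities))
    predicted_class = classes[index]
    threshold = class_thresholds.get(predicted_class, default_threshold)
    if probabilities[index] < threshold:
        return None  # don't trust the prediction; surface it for review instead
    return predicted_class

# Stricter threshold for a class with mediocre test performance, relaxed for a strong one.
class_thresholds = {"Business": 0.9, "Sports": 0.6}
print(apply_class_thresholds(np.array([0.55, 0.45]), ["Business", "Sports"], class_thresholds))  # None (flagged)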
Set up
virtualenv -p python3.6 venv
source venv/bin/activate
pip install -r requirements.txt
pip install torch==1.4.0
Download embeddings
python text_classification/utils.py
Training
python text_classification/train.py \
--data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove
Endpoints
uvicorn text_classification.app:app --host 0.0.0.0 --port 5000 --reload
GOTO: http://localhost:5000/docs
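For reference, here is a minimal sketch of what a /predict endpoint can look like with FastAPI; the request schema and placeholder prediction below are illustrative, not the exact code in text_classification/app.py.

from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TextItem(BaseModel):
    text: str

class PredictPayload(BaseModel):
    inputs: List[TextItem]

@app.post("/predict")
def predict(payload: PredictPayload):
    texts = [item.text for item in payload.inputs]
    # Load artifacts and run the prediction pipeline here (placeholder results below).
    results = [{"text": text, "predicted_class": None} for text in texts]
    return {"results": results}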
Prediction
Scripts
python text_classification/predict.py --text 'The Canadian government officials proposed the new federal law.'
cURL
curl "http://localhost:5000/predict" \
-X POST -H "Content-Type: application/json" \
-d '{
"inputs":[
{
"text":"The Wimbledon tennis tournament starts next week!"
},
{
"text":"The Canadian government officials proposed the new federal law."
}
]
}' | json_pp
Requests
import json
import requests
headers = {
'Content-Type': 'application/json',
}
data = {
"experiment_id": "latest",
"inputs": [
{
"text": "The Wimbledon tennis tournament starts next week!"
},
{
"text": "The Canadian minister signed in the new federal law."
}
]
}
response = requests.post('http://0.0.0.0:5000/predict',
headers=headers, data=json.dumps(data))
results = json.loads(response.text)
print(json.dumps(results, indent=2, sort_keys=False))
Streamlit
streamlit run text_classification/streamlit.py
GOTO: http://localhost:8501
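A minimal sketch of this kind of Streamlit front end; the widgets and placeholder output are illustrative, and the real app lives in text_classification/streamlit.py.

import streamlit as st

st.title("Text classification")
text = st.text_input("Text to classify",
                     "The Wimbledon tennis tournament starts next week!")
if st.button("Predict"):
    # Call into the prediction pipeline here; placeholder output shown instead.
    st.write({"text": text, "predicted_class": None})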
Tests
pytest
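A minimal sketch of the kind of preprocessing sanity check worth keeping in tests/; the preprocess() helper mentioned in the comment is illustrative, not the exact API in data.py.

# tests/test_data.py (illustrative)
def test_lowercasing():
    text = "The Wimbledon tennis tournament starts NEXT week!"
    processed = text.lower()  # stand-in for something like preprocess(text, lower=True)
    assert processed == "the wimbledon tennis tournament starts next week!"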
Docker
- Build image
docker build -t text-classification:latest -f Dockerfile .
- Run container
docker run -d -p 5000:5000 -p 6006:6006 --name text-classification text-classification:latest
Heroku
Set `WANDB_API_KEY` as an environment variable.
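If you're deploying with the Heroku CLI, one way to do this is with heroku config:set (the app name and key below are placeholders).

heroku config:set WANDB_API_KEY=<your-api-key> --app <your-app-name>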
Directory structure
text-classification/
├── datasets/               - datasets
├── logs/                   - directory of log files
|   ├── errors/             - error log
|   └── info/               - info log
├── tests/                  - unit tests
├── text_classification/    - ml scripts
|   ├── app.py              - app endpoints
|   ├── config.py           - configuration
|   ├── data.py             - data processing
|   ├── models.py           - model architectures
|   ├── predict.py          - prediction script
|   ├── streamlit.py        - streamlit app
|   ├── train.py            - training script
|   └── utils.py            - load embeddings and utilities
├── wandb/                  - wandb experiment runs
├── .dockerignore           - files to ignore on docker
├── .gitignore              - files to ignore on git
├── CODE_OF_CONDUCT.md      - code of conduct
├── CODEOWNERS              - code owner assignments
├── CONTRIBUTING.md         - contributing guidelines
├── Dockerfile              - dockerfile to containerize app
├── LICENSE                 - license description
├── logging.json            - logger configuration
├── Procfile                - process script for Heroku
├── README.md               - this README
├── requirements.txt        - requirements
├── setup.sh                - streamlit setup for Heroku
└── sweeps.yaml             - hyperparameter wandb sweeps config
Overfit to small subset
python text_classification/train.py \
--data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --data-size 0.1 --num-epochs 3
Experiments
- Random, unfrozen embeddings
python text_classification/train.py \
--data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle
- GloVe, frozen embeddings
python text_classification/train.py \
--data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove --freeze-embeddings
- GloVe, unfrozen embeddings
python text_classification/train.py \
--data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove
Next steps
End-to-end topics that will be covered in subsequent lessons.
- Using wrappers like PyTorch Lightning to structure the modeling code further while gaining useful built-in utilities.
- Data / model version control (Artifacts, DVC, MLFlow, etc.)
- Experiment tracking options (MLFlow, KubeFlow, WandB, Comet, Neptune, etc.)
- Hyperparameter tuning options (Optuna, Hyperopt, Sweeps)
- Multi-process data loading
- Dealing with imbalanced datasets
- Distributed training for much larger models
- GitHub Actions for automatic testing during commits
- Prediction fail safe techniques (input analysis, class-specific thresholds, etc.)
Helpful docker commands
- Build image
docker build -t madewithml:latest -f Dockerfile .
- Run container if using CMD ["python", "app.py"] or ENTRYPOINT ["/bin/sh", "entrypoint.sh"]
docker run -p 5000:5000 --name madewithml madewithml:latest
- Get inside container if using CMD ["/bin/bash"]
docker run -p 5000:5000 -it madewithml /bin/bash
- Run container with mounted volume
docker run -p 5000:5000 -v $PWD:/root/madewithml/ --name madewithml madewithml:latest
- Other flags
-d: detached
-ti: interactive terminal
- Clean up
docker stop $(docker ps -a -q)    # stop all containers
docker rm $(docker ps -a -q)      # remove all containers
docker rmi $(docker images -a -q) # remove all images