Optimize Trading Strategies Using Freqtrade

Overview

A short demo on building, testing and optimizing a trading strategy using Freqtrade.

The DevBootstrap YouTube screencast supporting this repo is here. Enjoy! :)

Alias Docker-Compose Command

First, I recommend aliasing docker-compose to dc and wrapping docker-compose run --rm "$@" in a dcr function to save typing.

Put this in your ~/.bash_profile file so that it's always aliased like this!

alias dc=docker-compose
dcr() { docker-compose run --rm "$@"; }

Now run source ~/.bash_profile.

Installing Freqtrade

Install and run via Docker.

Now install the necessary dependencies to run Freqtrade:

mkdir ft_userdata
cd ft_userdata/
# Download the docker-compose file from the repository
curl https://raw.githubusercontent.com/freqtrade/freqtrade/stable/docker-compose.yml -o docker-compose.yml

# Pull the freqtrade image
dc pull

# Create user directory structure
dcr freqtrade create-userdir --userdir user_data

# Create configuration - Requires answering interactive questions
dcr freqtrade new-config --config user_data/config.json

NOTE: All freqtrade commands are available by running dcr freqtrade <command> <optional arguments>. So the only difference when running a command via docker-compose is to prefix it with our new alias dcr (which runs docker-compose run --rm "$@" ... see above for details).
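For example, to check everything is wired up correctly you can print the Freqtrade version and list the available sub-commands (both are standard freqtrade options):

dcr freqtrade --version
dcr freqtrade --help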

Config Bot

If you used the new-config sub-command (see above) when installing the bot, the installation script should have already created the default configuration file (config.json) for you.

The parameters of note that we will set (from config.json) are shown below. They allow all of the available balance to be distributed across all possible trades. In dry run mode we have a default paper-money balance of 1000 (which can be changed using the dry_run_wallet parameter), so if we allow a maximum of 10 open trades, Freqtrade will distribute the funds across all 10 trades approximately equally (1000 / 10 = 100 per trade).

"stake_amount" : "unlimited",
"tradable_balance_ratio": 0.99,

The above settings are used for Dry Runs and make up the 'Dynamic Stake Amount'. For live trading you might want to change this, for example to only allow the bot to trade 20% of the exchange account funds and to cancel open orders on exit (in case the market goes crazy!):

"tradable_balance_ratio": 0.2,
"cancel_open_orders_on_exit": true

For details of all available parameters, please refer to the configuration parameters docs.

Create a Strategy

I've created a 'BBRSINaiveStrategy' based on RSI and Bollinger Bands. Take a look at the bbrsi_naive_strategy.py file for details (a sketch of the general shape of such a strategy is shown at the end of this section).

To tell your instance of Freqtrade about this strategy, open your docker-compose.yml file and update the strategy flag (the last flag of the command) to --strategy BBRSINaiveStrategy

For more details on Strategy Customization, please refer to the Freqtrade Docs
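To give a feel for the general shape of such a strategy, here is a minimal, illustrative sketch of a Bollinger Band / RSI strategy written against the Freqtrade strategy interface used in this walkthrough (populate_buy_trend / populate_sell_trend; newer releases rename these to entry/exit). It is not the contents of bbrsi_naive_strategy.py, and the periods and thresholds are placeholder values:

# Illustrative sketch only - see bbrsi_naive_strategy.py for the real strategy.
import talib.abstract as ta
import freqtrade.vendor.qtpylib.indicators as qtpylib
from pandas import DataFrame
from freqtrade.strategy.interface import IStrategy


class BBRSISketchStrategy(IStrategy):
    # Placeholder values - the real strategy defines its own ROI/stoploss/timeframe.
    minimal_roi = {"0": 0.10}
    stoploss = -0.10
    timeframe = '15m'

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # RSI (14 period)
        dataframe['rsi'] = ta.RSI(dataframe, timeperiod=14)
        # Bollinger Bands (20 period, 2 standard deviations)
        bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=20, stds=2)
        dataframe['bb_lowerband'] = bollinger['lower']
        dataframe['bb_middleband'] = bollinger['mid']
        dataframe['bb_upperband'] = bollinger['upper']
        return dataframe

    def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Buy when RSI signals oversold and price closes below the lower band.
        dataframe.loc[
            (dataframe['rsi'] < 30) & (dataframe['close'] < dataframe['bb_lowerband']),
            'buy'] = 1
        return dataframe

    def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Sell once price recovers to the middle band.
        dataframe.loc[
            (dataframe['close'] > dataframe['bb_middleband']),
            'sell'] = 1
        return dataframe

The idea is the naive mean-reversion one: buy when price closes below the lower Bollinger Band while RSI signals oversold, and exit once price recovers to the middle band.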

Remove past trade data

If you have run the bot already, you will need to clear out any existing dry run trades from the database. The easiest way to do this is to delete the sqlite database by running the command rm user_data/tradesv3.sqlite.

Sandbox / Dry Run

As a quick sanity check, you can now immediately start the bot in a sandbox mode and it will start trading (with paper money - not real money!).

To start trading in sandbox mode, simply start the service as a daemon using Docker Compose and follow the logs, like so:

dc up -d
dc ps
dc logs -f

Setup a pairs file

We will use Binance so we create a data directory for binance and copy our pairs.json file into that directory:

mkdir -p user_data/data/binance
cp pairs.json user_data/data/binance/.

Now put whatever pairs you want to download into the pairs.json file. Take a look at the pairs.json file included in this repo.
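For reference, the pairs file is just a JSON array of pair names. A minimal illustrative example (the actual pairs to use are whatever is in this repo's pairs.json):

[
  "ALGO/USDT",
  "BTC/USDT",
  "ETH/USDT"
]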

Download Data

Now that we have our pairs file in place, let's download the OHLCV data for backtesting our strategy.

dcr freqtrade download-data --exchange binance -t 15m
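By default download-data only fetches a limited window of recent history; if you want more candles to backtest against, it also accepts a --days flag (a standard freqtrade option), for example:

dcr freqtrade download-data --exchange binance -t 15m --days 90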

List the available data using the list-data sub-command:

dcr freqtrade list-data --exchange binance

Manually inspect the json files to verify that the data is as expected (i.e. that it contains the OHLCV data requested).

List the available data for backtesting

Note: if your data is stored in the compressed jsongz format, you need to pass the --data-format-ohlcv jsongz flag, as below:

dcr freqtrade list-data --exchange binance --data-format-ohlcv jsongz

Backtest

Now that we have the 15m OHLCV data for our pairs, let's backtest this strategy:

dcr freqtrade backtesting --datadir user_data/data/binance --export trades  --stake-amount 100 -s BBRSINaiveStrategy -i 15m

For details on interpreting the result, refer to 'Understanding the backtesting result' in the Freqtrade docs.

Plotting

Plot the results to see how the bot entered and exited trades over time. Remember to change the Docker image being referenced in the docker-compose file to freqtradeorg/freqtrade:develop_plot before running the below command.

Note that the plot_config that is contained in the strategy will be applied to the chart.

dcr freqtrade plot-dataframe --strategy BBRSINaiveStrategy -p ALGO/USDT -i 15m

Once the plot is ready you will see the message Stored plot as /freqtrade/user_data/plot/freqtrade-plot-ALGO_USDT-15m.html which you can open in a browser window.

Optimize

To optimize the strategy we will use the Hyperopt module of freqtrade. First up we need to create a new hyperopt file from a template:

dcr freqtrade new-hyperopt --hyperopt BBRSIHyperopt

Now add the desired definitions for buy/sell guards and triggers to the Hyperopt file (a sketch of what these might look like follows the command below). Then run the optimization like so (NOTE: set the time interval and the number of epochs to test using the -i and -e flags):

dcr freqtrade hyperopt --hyperopt BBRSIHyperopt --hyperopt-loss SharpeHyperOptLoss --strategy BBRSINaiveStrategy -i 15m
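To illustrate what the 'guards and triggers' look like, here is a minimal sketch of the buy side of a legacy IHyperOpt file, modelled on the template that new-hyperopt generates in this version of Freqtrade. The parameter names and ranges are assumptions for illustration, the referenced indicators (rsi, bb_lowerband, bb_middleband) must already be populated by the strategy, and the generated template also contains the matching sell-side methods, which are omitted here:

# Illustrative sketch of a legacy IHyperOpt buy space - not the repo's BBRSIHyperopt.
from functools import reduce
from typing import Any, Callable, Dict, List

from pandas import DataFrame
from skopt.space import Categorical, Dimension, Integer

from freqtrade.optimize.hyperopt_interface import IHyperOpt


class BBRSISketchHyperopt(IHyperOpt):

    @staticmethod
    def indicator_space() -> List[Dimension]:
        # The buy search space: RSI threshold, whether to use the RSI guard, and the trigger.
        return [
            Integer(10, 40, name='rsi-value'),
            Categorical([True, False], name='rsi-enabled'),
            Categorical(['bb_lower', 'bb_mid'], name='trigger'),
        ]

    @staticmethod
    def buy_strategy_generator(params: Dict[str, Any]) -> Callable:
        def populate_buy_trend(dataframe: DataFrame, metadata: dict) -> DataFrame:
            conditions = []
            # Guard: only buy when RSI is below the chosen threshold.
            if params.get('rsi-enabled'):
                conditions.append(dataframe['rsi'] < params['rsi-value'])
            # Trigger: which band the close must cross below.
            if params['trigger'] == 'bb_lower':
                conditions.append(dataframe['close'] < dataframe['bb_lowerband'])
            if params['trigger'] == 'bb_mid':
                conditions.append(dataframe['close'] < dataframe['bb_middleband'])
            if conditions:
                dataframe.loc[reduce(lambda x, y: x & y, conditions), 'buy'] = 1
            return dataframe

        return populate_buy_trend

Hyperopt then searches this space for the combination of guard values and trigger that performs best according to the chosen loss function (SharpeHyperOptLoss in the command above).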

Update Strategy

Apply the suggested optimized results from the Hyperopt to the strategy. Either replace the current strategy or create a new 'optimized' strategy.

Backtest

Now that we have updated our strategy based on the results from the hyperopt, let's run a backtest again:

dcr freqtrade backtesting --datadir user_data/data/binance --export trades --stake-amount 100 -s BBRSIOptimizedStrategy -i 15m

Sandbox / Dry Run

Before you run the Dry Run, don't forget to check that your local config.json file is configured correctly. In particular, dry_run should be true, dry_run_wallet should be set to something reasonable (like 1000 USDT), and the timeframe should be the same one you used when building and optimizing your strategy!

"max_open_trades": 10,
"stake_currency": "USDT",
"stake_amount" : "unlimited",
"tradable_balance_ratio": 0.99,
"fiat_display_currency": "USD",
"timeframe": "15min",
"dry_run": true,
"dry_run_wallet": 1000,

View Dry Run via Freq UI

For use with Docker you will need to enable the API server in the Freqtrade config, set listen_ip_address to "0.0.0.0", and also set a username & password so that you can log in, like so:

...

"api_server": {
  "enabled": true,
  "listen_ip_address": "0.0.0.0",
  "username": "Freqtrader",
  "password": "secretpass!",
  ...
},
...

In the docker-compose.yml file also map the ports like so:

ports:
  - "127.0.0.1:8080:8080"

Then you can access Freq UI via a browser at http://127.0.0.1:8080/. You can also access and control the bot via its REST API.
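As a quick check that the API server is reachable, you can hit the unauthenticated ping endpoint from the host (endpoint as documented in the Freqtrade REST API docs):

curl http://127.0.0.1:8080/api/v1/ping

This should return something like {"status": "pong"}. Authenticated endpoints (e.g. status and profit) require the username and password configured above.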
