Trafalgar

Overview

Python library to make development of portfolio analysis faster and easier.

Installation 🔥

For the moment, Trafalgar is still in beta development. To install it, you should:

  1. Download requirements.txt into the folder where you want to use the trafalgar library
  2. Go to that folder with the command prompt and run:
pip install -r requirements.txt
  3. Download trafalgars-0.0.1-py3-none-any.whl into the same folder
  4. Go to that folder with the command prompt and run:
pip install trafalgars-0.0.1-py3-none-any.whl

Features include 📈

  • Get close price, open price, adjusted close, volume and graphs of these in one line of code!
  • Build an efficient frontier program in 3 lines of code
  • Backtest a portfolio, see its stats and compare it to a benchmark

Here is the code from this article in a Google Colab notebook; you can use it to follow along: https://colab.research.google.com/drive/1qgFDDQneQP-oddbJVWWApfPKFMnbpj6I?usp=sharing

Documentation

Call the library

First, import the library:

from trafalgar import *

Graph of the closing price of a stock

#graph_close(stock, start_date, end_date)
graph_close(["FB"], "2020-01-01", "2021-01-01")

Graph of the closing price of multiple stocks

graph_close(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the volume

#graph_volume(stock, start_date, end_date)

#for one stock
graph_volume(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_volume(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the opening price

#graph_open(stock, start_date, end_date)

#for one stock
graph_open(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_open(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the adjusted closing price

#graph_adj_close(stock, start_date, end_date)

#for one stock
graph_adj_close(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_adj_close(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the returns (for each day)

#returns_graph(stock, start_date, end_date)

#this one only works for one stock
returns_graph("FB", "2020-01-01", "2021-01-01")

Get closing price data (in dataframe format)

#close(stock, start_date, end_date)
close(["AAPL"], "2020-01-01", "2021-01-01")

Get volume data (in dataframe format)

#volume(stock, start_date, end_date)
volume(["AAPL"], "2020-01-01", "2021-01-01")

Get opening price data (in dataframe format)

#open(stock, start_date, end_date)
open(["AAPL"], "2020-01-01", "2021-01-01")

Get adjusted closing price data (in dataframe format)

#adj_close(stock, start_date, end_date)
adj_close(["AAPL"], "2020-01-01", "2021-01-01")

Covariance between stocks

#covariance(stocks, start_date, end_date, days) -> usually, days = 252
covariance(["AAPL", "DIS", "AMD"], "2020-01-01", "2021-01-01", 252)
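
For intuition, here is a minimal sketch of how an annualized covariance matrix of daily returns can be computed by hand, using yfinance as a stand-in data source (an assumption for illustration; it is not necessarily how trafalgar fetches data internally):

#minimal sketch, not trafalgar's internal code
import yfinance as yf

prices = yf.download(["AAPL", "DIS", "AMD"], start="2020-01-01", end="2021-01-01", auto_adjust=False)["Adj Close"]
daily_returns = prices.pct_change().dropna()
annual_cov = daily_returns.cov() * 252  #scale daily covariance by the ~252 trading days in a year
print(annual_cov)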

Get data from a stock in OHLCV format directly

#ohlcv(stock, start_date, end_date)
ohlcv("AAPL", "2020-01-01", "2021-01-01")

Graph the cumulative returns of a stock/portfolio

#cum_returns_graph(stocks, weights, start_date, end_date)
cum_returns_graph(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

Get cumulative returns data of a stock/portfolio (in a dataframe format)

#cum_returns(stocks, weights, start_date, end_date)
cum_returns(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")
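
Conceptually, the cumulative return series compounds the weighted daily returns of the portfolio over time. The sketch below illustrates that calculation on made-up numbers; it is only an illustration of the idea, not trafalgar's internal code:

#minimal sketch: weighted daily returns compounded over time (synthetic numbers)
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
daily_returns = pd.DataFrame(rng.normal(0.0008, 0.02, size=(252, 3)),
                             columns=["ASSET_A", "ASSET_B", "ASSET_C"])  #made-up daily returns
weights = np.array([0.3, 0.4, 0.3])
portfolio_daily = daily_returns @ weights               #weighted daily portfolio return
cumulative_returns = (1 + portfolio_daily).cumprod() - 1
print(cumulative_returns.tail())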

Disclaimer: From here on, the functions only work for portfolios, not for individual stocks. However, there is a way to make them work for an individual stock:

#let's say we want to calculate the annual volatility of Apple.
#We need at least 2 elements in our stock list. Here these are Apple and Facebook.
#To get the volatility of Apple only, set Facebook's weight to 0 (so no money is allocated to that stock) and Apple's weight to 1 (so all our money is allocated to that stock).
annual_volatility(["FB", "AAPL"], [1, 0], "2020-01-01", "2021-01-01")

Annual Volatility of a portfolio/stock

#annual_volatility(stocks, weights, start_date, end_date)

#for your portfolio
annual_volatility(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

#for one stock (FB)
annual_volatility(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")
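
As a reference point, annualized volatility is conventionally the standard deviation of daily portfolio returns scaled by the square root of 252 trading days. A minimal sketch on synthetic returns (illustrative only, not trafalgar's implementation):

#minimal sketch with synthetic daily portfolio returns
import numpy as np

rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0005, 0.02, 252)          #one year of made-up daily returns
annual_vol = daily_returns.std(ddof=1) * np.sqrt(252)  #scale daily std by sqrt(252)
print(annual_vol)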

Sharpe Ratio of a portfolio/stock

#sharpe_ratio(stocks, weights, start_date, end_date)

#for your portfolio
sharpe_ratio(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

#for one stock (FB)
sharpe_ratio(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")
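
The Sharpe ratio is usually computed as annualized return divided by annualized volatility. The sketch below assumes a risk-free rate of 0 and synthetic returns; trafalgar's exact convention may differ:

#minimal sketch, risk-free rate assumed to be 0
import numpy as np

rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0005, 0.02, 252)          #made-up daily portfolio returns
annual_return = daily_returns.mean() * 252
annual_vol = daily_returns.std(ddof=1) * np.sqrt(252)
sharpe = annual_return / annual_vol
print(sharpe)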

Compare the returns of a portfolio/stock to a benchmark

#returns_benchmark(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
returns_benchmark(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock(AAPL)
returns_benchmark(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Blue line: returns of your portfolio. Red line: returns of the benchmark.

Compare the cumulative returns of a portfolio/stock to a benchmark

#cum_returns_benchmark(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
cum_returns_benchmark(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock(AAPL)
cum_returns_benchmark(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Blue line: cumulative returns of your portfolio. Red line: cumulative returns of the benchmark.

Alpha and Beta of a portfolio/stock

#alpha_beta(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
alpha_beta(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock(AAPL)
alpha_beta(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")
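
Beta is conventionally the covariance of portfolio and benchmark returns divided by the benchmark variance, and alpha is the portfolio return left over after accounting for beta. A minimal sketch on synthetic return series (illustrative assumptions, not trafalgar's internal code):

#minimal sketch with synthetic daily return series
import numpy as np

rng = np.random.default_rng(0)
benchmark_returns = rng.normal(0.0004, 0.012, 252)                             #made-up benchmark daily returns
portfolio_returns = 0.8 * benchmark_returns + rng.normal(0.0002, 0.008, 252)   #made-up portfolio daily returns

beta = np.cov(portfolio_returns, benchmark_returns, ddof=1)[0, 1] / np.var(benchmark_returns, ddof=1)
alpha = (portfolio_returns.mean() - beta * benchmark_returns.mean()) * 252     #annualized
print(alpha, beta)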

Efficient frontier to optimize allocation of shares in your portfolio

#efficient_frontier(stocks, start_date, end_date, iterations) -> iterations = 10000 is a good starting point
efficient_frontier(["AAPL", "FB", "TSLA", "BABA"], "2020-01-01", "2021-01-01", 10000)
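
The iterations parameter suggests a Monte Carlo approach: draw many random weight vectors, compute each candidate portfolio's annualized return and volatility, and keep the best ones. The sketch below picks the maximum-Sharpe weights this way, using yfinance as a stand-in data source and example tickers; it is an assumption about the method, not trafalgar's actual optimizer:

#minimal Monte Carlo frontier sketch, assuming yfinance for data
import numpy as np
import yfinance as yf

stocks = ["AAPL", "MSFT", "TSLA", "BABA"]  #example tickers
prices = yf.download(stocks, start="2020-01-01", end="2021-01-01", auto_adjust=False)["Adj Close"]
daily_returns = prices.pct_change().dropna()
mean_annual = daily_returns.mean() * 252
cov_annual = daily_returns.cov() * 252

rng = np.random.default_rng(1)
best = None
for _ in range(10000):                     #one random weight vector per iteration
    w = rng.random(len(stocks))
    w /= w.sum()
    ret = float(w @ mean_annual.values)
    vol = float(np.sqrt(w @ cov_annual.values @ w))
    if best is None or ret / vol > best[0]:
        best = (ret / vol, w)
print("max-Sharpe weights:", dict(zip(stocks, best[1].round(3))))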

Graph individual cumulative returns for your portfolio

#individual_cum_returns_graph(stocks, start_date, end_date)
individual_cum_returns_graph(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Individual cumulative returns data for your portfolio (in dataframe format)

#individual_cum_returns(stocks, start_date, end_date)
individual_cum_returns(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Mean daily return of each stock in your portfolio

#individual_mean_daily_return(stocks, start_date, end_date)
individual_mean_daily_return(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Portfolio mean daily return

#portfolio_daily_mean_return(stocks,weights, start_date, end_date)
portfolio_daily_mean_return(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3], "2020-01-01", "2021-01-01")
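
Since the portfolio's mean daily return is just the weighted average of the individual mean daily returns, the relationship between the two functions above can be illustrated in a couple of lines (made-up numbers):

#minimal sketch: weighted average of individual mean daily returns
import numpy as np

weights = np.array([0.3, 0.4, 0.3])
mean_daily_returns = np.array([0.0009, 0.0012, 0.0015])  #made-up per-stock mean daily returns
portfolio_mean_daily_return = float(weights @ mean_daily_returns)
print(portfolio_mean_daily_return)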

Value at Risk of a stock (still in development)

#VaR(stock, start_date, end_date, confidence_level)
VaR("FB","2020-01-01", "2021-01-01", 98)
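
A common way to estimate historical Value at Risk is to read the loss off a percentile of the daily return distribution. The sketch below shows a 98% one-day historical VaR on synthetic returns; this is an assumption about the approach, since the function is still in development:

#minimal historical-VaR sketch with synthetic daily returns
import numpy as np

rng = np.random.default_rng(7)
daily_returns = rng.normal(0.0005, 0.025, 252)            #made-up daily returns for one stock
confidence_level = 98
var_historical = -np.percentile(daily_returns, 100 - confidence_level)
print(f"1-day {confidence_level}% VaR: {var_historical:.2%} of position value")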

License

MIT

Comments
  • Issue with Pandas datareader

    Describe the bug: This seems to affect all your branches.

    RemoteDataError                           Traceback (most recent call last)
    ----> 1 oracle(portfolio)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/empyrial.py in oracle(my_portfolio, prediction_days, based_on)
        334
        335
    --> 336 df = web.DataReader(asset, data_source='yahoo', start = my_portfolio.start_date, end= my_portfolio.end_date)
        337 df = pd.DataFrame(df)
        338 df.reset_index(level=0, inplace=True)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
        197 else:
        198     kwargs[new_arg_name] = new_arg_value
    --> 199 return func(*args, **kwargs)
        200
        201 return cast(F, wrapper)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/data.py in DataReader(name, data_source, start, end, retry_count, pause, session, api_key)
        374
        375 if data_source == "yahoo":
    --> 376     return YahooDailyReader(
        377         symbols=name,
        378         start=start,

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/base.py in read(self)
        251 # If a single symbol, (e.g., 'GOOG')
        252 if isinstance(self.symbols, (string_types, int)):
    --> 253     df = self._read_one_data(self.url, params=self._get_params(self.symbols))
        254 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
        255 elif isinstance(self.symbols, DataFrame):

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/yahoo/daily.py in _read_one_data(self, url, params)
        151 url = url.format(symbol)
        152
    --> 153 resp = self._get_response(url, params=params)
        154 ptrn = r"root.App.main = (.*?);\n}(this));"
        155 try:

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/base.py in _get_response(self, url, params, headers)
        179 msg += "\nResponse Text:\n{0}".format(last_response_text)
        180
    --> 181 raise RemoteDataError(msg)
        182
        183 def _get_crumb(self, *args):

    RemoteDataError: Unable to read URL: https://finance.yahoo.com/quote/BABA/history?period1=1591671600&period2=1625972399&interval=1d&frequency=1d&filter=history
    Response Text: Yahoo's "Will be right back..." maintenance page ("Thank you for your patience. Our engineers are working quickly to resolve the issue.")

    opened by geofffoster 8
  • Failed to build scs ERROR: Could not build wheels for scs which use PEP 517 and cannot be installed directly

    I am using Python 3.8.10 in a separate environment, and I got the following error when pip installing empyrial: "Failed to build scs. ERROR: Could not build wheels for scs which use PEP 517 and cannot be installed directly".

    From this link https://github.com/pydata/bottleneck/issues/281 I tried pip install --upgrade pip setuptools wheel but I am still getting the same bug when installing empyrial.

    • OS: Ubuntu 20.04
    • mini conda version and a separate environment for trading
    • Python 3.8.10
    • Let me know if there is any way around this bug. Thanks
    opened by gurusura 8
  • RemoteDataError: No data fetched using 'YahooDailyReader'

    Discussed in https://github.com/ssantoshp/Empyrial/discussions/27

    Originally posted by karim1104, July 3, 2021: Starting July 1, I'm getting the error "RemoteDataError: No data fetched using 'YahooDailyReader'". I've tried it in different Python environments (3.6, 3.8, 3.9). It seems like a Pandas DataReader issue (https://github.com/pydata/pandas-datareader/issues/868). How can we resolve this? I have a subscription to FMP; is there a way to use it instead of Yahoo Finance?

    opened by ssantoshp 6
  • Error when rebalancing with only one stock

    Hi, I have tried to reproduce test results and simulated a single stock over time by forcing the weight distribution as shown below:

    tickers = ["stock1", "stock2"]
    weights_new_ = [1.0, 0.0]

    No optimizer is used, so this just relies on the quantstats calculations of ratios and returns. In the next example, we do the same but with a yearly rebalancer. The results should be exactly the same. There seems to be a slight error in the returns calculations over time, which turns out to be bigger with more rebalancing.

    I will look at it some more and update if I find the bug. Btw, great work!

    opened by atobiese 5
  • Unlisted Stock Symbol Counted in Pie Chart

    Hi, awesome tool Santosh bhai. If a ticker symbol's data is not listed at the time of the start date, it still counts the ticker in the pie chart portfolio. Ideally it should not, or am I getting this wrong? Very new guy. Regards,

    opened by lawzeus 5
  • get_report error

    Describe the bug: The sample code (as per https://empyrial.gitbook.io/empyrial/save-the-tearsheet/get-a-report) is throwing an error.

    To Reproduce Steps to reproduce the behavior:

    1. Go to 'https://empyrial.gitbook.io/empyrial/save-the-tearsheet/get-a-report...'
    2. Run the sample code
    3. Scroll down to '....'
    4. See error

    NameError                                 Traceback (most recent call last)
    /var/folders/41/q1hx0rjd5xzck1vl121t6b2m0000gn/T/ipykernel_11664/2518190224.py
         10 empyrial(portfolio)
         11
    ---> 12 get_report(portfolio)

    NameError: name 'get_report' is not defined

    Additional context: using a JupyterLab notebook.

    opened by lawzeus 5
  • Support for custom data, or data from other exchanges

    Is your feature request related to a problem? Please describe: I want to analyze portfolios on other exchanges.

    Describe the solution you'd like: the ability to provide data from other exchanges.

    opened by suvojit-0x55aa 4
  • Error when running fundlens

    Anaconda3\lib\site-packages\empyrial.py", line 610, in fundlens
        ['Dividend yield', yahoo_financials.get_dividend_yield()],
        ['Payout ratio', yahoo_financials.get_payout_ratio()],
        ['Controversy', controversy],
        ['Social score', social_score],

    UnboundLocalError: local variable 'controversy' referenced before assignment

    opened by jaredre 4
  • rebalance has a bug

    When you set up a quarterly rebalance with only one ticker, the strategy and the benchmark show different values. This is a bug; they should be completely equal.

    The code below reproduces the issue. The EOY returns and the time series plot of cumulative returns vs. benchmark show that the strategy and benchmark diverge.

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date= "2021-01-01",
        portfolio= ["BTC-USD", "GOOG"],
        weights = [1, 0.], #equal weighting is set by default
        benchmark = ["BTC-USD"], #SPY is set by default
        rebalance = 'quarterly'
    )
    empyrial(portfolio)

    opened by rgleavenworth 3
  • Graph styling

    Is there a way to override the default styling parameters used in your tearsheet? I understand that most of the styling is inherited from quantstats. Any way you can suggest how to change things like facecolor, linewidth, etc.?

    opened by rgleavenworth 3
  • EM Optimizer fails if benchmark changed to Nifty50 (yahoo ticker used "^NSEI")

    Describe the bug: The EM optimizer fails when the default benchmark is altered to Nifty.

    However, if the default is restored, it works.

    To Reproduce
    Steps to reproduce the behavior: use this code

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date= "2015-01-01", #start date for the backtesting
        portfolio= ["TCS.NS", "INFY.NS", "HDFC.NS", "KOTAKBANK.NS","TITAN.NS","NESTLEIND.NS"], #assets in your portfolio
        benchmark = ["NSEI"] #(note: the missing comma here is what triggers the SyntaxError reported below)
        optimizer = "EF"
    )
    empyrial(portfolio)

    Expected behavior
    Error message:

      File "/var/folders/41/q1hx0rjd5xzck1vl121t6b2m0000gn/T/ipykernel_2924/1251204071.py", line 7
        optimizer = "EF"
                  ^
    SyntaxError: invalid syntax

    Desktop (please complete the following information):

    • OS: MacOsx
    • Browser Chrome
    • jupyter

    opened by lawzeus 3
  • assets value / non-stock based portfolio?

    Wondering if Empyrial can be used with a non-stock based portfolio. The example in the docs is like this:

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date= "2018-06-09",
        portfolio= ["BABA", "PDD", "KO", "AMD","^IXIC"],
        weights = [0.2, 0.2, 0.2, 0.2, 0.2], #equal weighting is set by default
        benchmark = ["SPY"] #SPY is set by default
    )
    empyrial(portfolio)

    Is there any alternate way to define a portfolio, not as a list of stocks / weights but based on the value of the assets in the account?

    opened by andrew521 2
  • str and Timestamp error

    The code:

    from empyrial import empyrial, Engine
    portfolio = Engine(
                      start_date= "2021-01-01", #start date for the backtesting
                      end_date= "2022-05-01",
                      portfolio= tickers[:], #assets in your portfolio
                      weights = w2[:],
                      benchmark=["XU100.IS"]
    )
    print(empyrial(portfolio))
    print(portfolio)
    

    It gives an error like below.

    TypeError                                 Traceback (most recent call last)
    ~\AppData\Local\Temp/ipykernel_10148/966461475.py
         11 )
         12
    ---> 13 print(empyrial(portfolio))
         14 print(portfolio)

    ~\AppData\Roaming\Python\Python39\site-packages\empyrial.py in empyrial(my_portfolio, rf, sigma_value, confidence_value)
        304 empyrial.SR = SR
        305
    --> 306 CR = qs.stats.calmar(returns)
        307 CR = CR.tolist()
        308 CR = str(round(CR, 2))

    ~\AppData\Roaming\Python\Python39\site-packages\quantstats\stats.py in calmar(returns, prepare_returns)
        547 if prepare_returns:
        548     returns = _utils._prepare_returns(returns)
    --> 549 cagr_ratio = cagr(returns)
        550 max_dd = max_drawdown(returns)
        551 return cagr_ratio / abs(max_dd)

    ~\AppData\Roaming\Python\Python39\site-packages\quantstats\stats.py in cagr(returns, rf, compounded)
        500 total = _np.sum(total)
        501
    --> 502 years = (returns.index[-1] - returns.index[0]).days / 365.
        503
        504 res = abs(total + 1.0) ** (1.0 / years) - 1

    TypeError: unsupported operand type(s) for -: 'str' and 'Timestamp'

    opened by burakgulmez 1
Releases
v1.9.8

Owner
Santosh Passoubady