This repository shares crypto-related quantitative analyses of our historical market data, which is distinctive in also including order book data. The repository is curated and reviewed by Jan Skoda, an experienced quant and former Head of Research at an equity prop-trading company, who also runs a Market Maker's blog.
In the first notebook, we analyze the Hummingbot market-making strategy on a combination of trade and level_1 order book data. A very simple simulation lets us experiment with the profitability of variants of this strategy on historical data. It is intended as a starting point for your own analysis, not as a direct basis for trading.
We estimate that this data-driven backtesting approach, even with fairly simple analysis, can prevent a monthly trading loss of >=10% of your open-order notional compared to the usual 'run the bot, check results after some time, tune parameters, repeat' approach. It also saves a lot of time. Note that the effect can be much bigger on volatile, low-volume coins.
-> Open the notebook in nbviewer
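To give a flavour of what such a simulation involves, here is a minimal sketch of a fill simulation on trades plus level-1 quotes. It is deliberately optimistic (no fees, no queue position, no inventory modelling), and the column names (`price`, `bid_0`, `ask_0`) are illustrative assumptions rather than the actual Lake schema; see the notebook for the real logic.

```python
import pandas as pd

def simulate_mm(trades: pd.DataFrame, quotes: pd.DataFrame, half_spread: float = 0.001):
    """Toy market-making fill simulation (illustrative only).

    trades: DataFrame with a DatetimeIndex and a 'price' column.
    quotes: DataFrame with a DatetimeIndex and 'bid_0'/'ask_0' columns
            (level-1 best bid/ask); column names are assumptions.
    """
    # Align the latest known level-1 quote to each trade timestamp.
    mid = (quotes["bid_0"] + quotes["ask_0"]) / 2
    mid = mid.reindex(trades.index, method="ffill")

    # Quote a fixed half-spread symmetrically around the mid-price.
    our_bid = mid * (1 - half_spread)
    our_ask = mid * (1 + half_spread)

    # Assume our bid is filled whenever a trade prints at or below it,
    # and our ask whenever a trade prints at or above it (optimistic!).
    buy_fills = trades["price"] <= our_bid
    sell_fills = trades["price"] >= our_ask

    # PnL proxy: spread captured on fills, ignoring fees and inventory risk.
    pnl = (mid[buy_fills] - our_bid[buy_fills]).sum() \
        + (our_ask[sell_fills] - mid[sell_fills]).sum()
    return pnl, int(buy_fills.sum()), int(sell_fills.sum())
```

Even a toy loop like this lets you compare half-spread settings on historical data instead of tuning a live bot by trial and error; a realistic simulation additionally needs fees, queue position and inventory limits.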
This notebook looks at market-making (MM) profitability, spread characteristics, order-book imbalance and a few autocorrelations on crypto order book data. It contains some interesting results on extreme order-book imbalances and short-term return autocorrelations.
-> Open the first of the two notebooks in nbviewer
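As a rough illustration of the kind of statistics involved, the sketch below computes level-1 order-book imbalance and short-horizon mid-price return autocorrelations with pandas. The column names (`bid_0`, `ask_0`, `bid_0_size`, `ask_0_size`) are assumptions for illustration, not necessarily the Lake schema.

```python
import pandas as pd

def imbalance_and_autocorr(quotes: pd.DataFrame, lags=(1, 2, 5)):
    """Level-1 order-book imbalance and mid-price return autocorrelations.

    quotes: DataFrame with best bid/ask prices and sizes; column names
    are illustrative assumptions.
    """
    # Imbalance in [-1, 1]: +1 = all size on the bid, -1 = all on the ask.
    imbalance = (quotes["bid_0_size"] - quotes["ask_0_size"]) / (
        quotes["bid_0_size"] + quotes["ask_0_size"]
    )

    # Mid-price returns between consecutive quote updates.
    mid = (quotes["bid_0"] + quotes["ask_0"]) / 2
    returns = mid.pct_change()

    # Autocorrelation of returns at a few short lags.
    autocorrs = {lag: returns.autocorr(lag=lag) for lag in lags}
    return imbalance, autocorrs
```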
We detect simple every-minute TWAP orders/executions in tick data and analyze their impact on price.
-> Open the notebook in nbviewer
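One simple way to spot naive every-minute TWAP execution is sketched below, under assumed column names (`amount` for trade size, a DatetimeIndex of trade timestamps): a TWAP bot tends to print the same child-order size at the same second offset of every minute, so a (size, second-of-minute) pair repeating across many distinct minutes is a strong signature.

```python
import pandas as pd

def find_minutely_twap(trades: pd.DataFrame, min_repeats: int = 10) -> pd.Series:
    """Flag (size, second-of-minute) pairs repeating across many minutes.

    trades: DataFrame with a DatetimeIndex and an 'amount' column
    (column name is an illustrative assumption).
    """
    df = trades.copy()
    df["minute"] = df.index.floor("min")   # the minute each trade falls into
    df["second"] = df.index.second         # offset within that minute

    # Count in how many distinct minutes each (size, second) pair appears.
    repeats = df.groupby(["amount", "second"])["minute"].nunique()

    # Pairs recurring in many minutes are likely TWAP child orders.
    suspects = repeats[repeats >= min_repeats]
    return suspects.sort_values(ascending=False)
```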
We detect fake volume in tick data by comparing trade prices to the best bid and ask in the order book.
-> Open the notebook in nbviewer
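The idea fits in a few lines: on a genuine book, a trade should print at (or between) the prevailing best bid and ask, so prints outside that band are suspicious. The sketch below assumes illustrative column names (`price`, `amount`, `bid_0`, `ask_0`) and forward-fills the latest quote onto each trade.

```python
import pandas as pd

def flag_fake_volume(trades: pd.DataFrame, quotes: pd.DataFrame, tol: float = 0.0):
    """Flag trades printing outside the prevailing best bid/ask band.

    trades/quotes: DataFrames with DatetimeIndexes; column names are
    illustrative assumptions, not necessarily the Lake schema.
    """
    # Forward-fill the latest known quote onto each trade timestamp.
    bid = quotes["bid_0"].reindex(trades.index, method="ffill")
    ask = quotes["ask_0"].reindex(trades.index, method="ffill")

    # A genuine trade executes within [best bid, best ask]; prints outside
    # that band (beyond a small tolerance) suggest printed/fake volume.
    suspicious = (trades["price"] < bid * (1 - tol)) | (trades["price"] > ask * (1 + tol))

    # Share of total volume flagged as suspicious.
    share = trades.loc[suspicious, "amount"].sum() / trades["amount"].sum()
    return suspicious, share
```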
A backtest of a simple Twitter influencer counter-trading strategy, inspired by the well-known Inverse-Cramer strategy.
-> Open the notebook in nbviewer
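A counter-trade backtest of this kind can be as simple as the sketch below: short at the first available price after each bullish tweet and cover after a fixed horizon. The inputs (a price series and tweet timestamps) and the 24-hour horizon are illustrative assumptions, not the notebook's actual setup.

```python
import pandas as pd

def countertrade_backtest(prices: pd.Series, tweet_times: pd.DatetimeIndex,
                          horizon: str = "24h") -> pd.Series:
    """Per-tweet returns of shorting after each tweet for a fixed horizon.

    prices: Series of prices indexed by a sorted DatetimeIndex.
    tweet_times: timestamps of bullish influencer tweets (assumed input).
    """
    # Locate the first price at or after each tweet, and the exit price
    # one horizon later.
    entries = prices.index.searchsorted(tweet_times)
    exits = prices.index.searchsorted(tweet_times + pd.Timedelta(horizon))

    # Drop tweets whose exit would fall past the end of the data.
    valid = exits < len(prices)
    entry_px = prices.iloc[entries[valid]].to_numpy()
    exit_px = prices.iloc[exits[valid]].to_numpy()

    # Short return: we profit when the price falls after the tweet.
    return pd.Series((entry_px - exit_px) / entry_px, index=tweet_times[valid])
```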
By contributing to this repository, you can earn a few free weeks of subscription to crypto-lake.com market data. If you have an idea for a contribution or want suggestions, email us. Then just fork this repo and open a pull request once you're ready.
There are a few contribution rules:
- quant research rule #1: keep things simple (KISS)
- keep the code clean, extract repeated code into functions or modules (DRY)
- each notebook should have a markdown intro in its header and conclusions in its footer
- prefer realistic simulation to smart trading logic/model
See the list of freely available sample data or the data schemata.
Run locally with python3.8 or later. Install the dependencies from requirements.txt (e.g. `pip install -r requirements.txt`) and start the notebooks with the `jupyter notebook` shell command.
Or run online via Binder: open a notebook through the nbviewer links above and click the three-rings icon in the top right.
- Q: Can you share your alpha? / Should we share our alpha?
- A: No. The point of quant research is that everyone should come up with their own 'alpha'. This repository and the Lake project just aim to make this easier by sharing the basics.
- Q: Can I also use data other than Lake's?
- A: Sure, but don't mix data sources unless necessary.
- Q: Where can I follow Lake and the shared analyses?