The aim of this application is to leverage historical cryptocurrency price data and cutting-edge machine learning algorithms to serve real-time inferences about Bitcoin and Ethereum price points one hour into the future.
This is my first time working with time series data. Here we work with raw time series datapoints served as OHLC ("open", "high", "low", "close") "candles".
At a high level, I've chosen to think of a given cryptocurrency as a complex system (read: chaotic system), emergent as a phenomenon of large N interactions between groups of humans. Inherently, this is a social system.
In this framework, a "candle" is a measurement of a cryptocurrency's state at a given moment in time - and by measuring state at a series of time points we can see how the system's state evolves over time. The raw dataset itself (consisting of multiple candles) is a function that maps empirically measured states to time points. At a fundamental level, the same concepts can be applied to any physical system composed of a large number of interacting variables - which means this is a very challenging problem!
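To make the "state measurement" idea concrete, a single candle can be modeled as a small record type. This is a hypothetical sketch; field names and ordering vary by exchange API, and the project's actual data structures may differ:

```python
from dataclasses import dataclass


@dataclass
class Candle:
    """One OHLC measurement of the system's state at a point in time.

    Field names here are illustrative; exchange APIs differ in naming
    and ordering (some return raw arrays rather than keyed objects).
    """

    time: int  # Unix timestamp of the window start
    open: float  # price at the start of the window
    high: float  # highest price within the window
    low: float  # lowest price within the window
    close: float  # price at the end of the window


# A tiny "dataset": the mapping from time points to measured states,
# here as two consecutive 1-hour (3600 s) candles with made-up prices.
candles = [
    Candle(time=1600000000, open=10450.0, high=10510.0, low=10420.0, close=10495.0),
    Candle(time=1600003600, open=10495.0, high=10560.0, low=10480.0, close=10530.0),
]
```

Each `Candle` is one point of the time → state mapping described above; the full raw dataset is just a long list of these.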
This application natively makes predictions for both Bitcoin and Ethereum price points, though the source code supports any cryptocurrency that has publicly available data.
Cryptocurrency trading involves inherent risks and is subject to market fluctuations. The code here is intended for informational purposes only and should not be considered financial advice. Always conduct thorough research and exercise caution when trading cryptocurrencies.
Note: I've targeted Ubuntu 20.04/22.04 for automated dev setup.
- You can clone this repository onto a machine with:

  ```bash
  git clone https://github.com/christopherkeim/crypto-real-time-inference.git
  ```

- Once you have a local copy of this repository, navigate into this directory and run the `setup.sh` script:

  ```bash
  cd crypto-real-time-inference
  bash setup.sh
  ```
This will install Poetry 1.5.1 and Python 3.10 into your environment.
- To install the Python dependencies for this application, run:

  ```bash
  make install
  ```
- To download Bitcoin candles using default parameters (from September 2020 through the current day), run:

  ```bash
  make rawdata
  ```
- To build supervised-machine-learning-ready datasets from this raw price data, run:

  ```bash
  make features
  ```
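Under the hood, turning a raw candle series into a supervised-learning dataset generally means sliding a fixed-size window over it: the previous `window` closes become a feature row, and the close that follows becomes the target. The project's actual feature pipeline may engineer richer features; this is a minimal sketch of the windowing idea:

```python
from typing import List, Tuple


def make_supervised(
    closes: List[float], window: int
) -> Tuple[List[List[float]], List[float]]:
    """Slide a fixed-size window over the series: each run of `window`
    consecutive closes is a feature row, and the next close is its target."""
    features: List[List[float]] = []
    targets: List[float] = []
    for i in range(len(closes) - window):
        features.append(closes[i : i + window])
        targets.append(closes[i + window])
    return features, targets


# Toy series of closing prices; with window=3 we get two (X, y) pairs.
X, y = make_supervised([1.0, 2.0, 3.0, 4.0, 5.0], window=3)
# X == [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]], y == [4.0, 5.0]
```

With hourly candles and a one-hour prediction horizon, each target is simply the close of the hour that follows the feature window.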
- To build a Lasso Regressor model (primary recommendation) using default parameters, run:

  ```bash
  make train
  ```
- To build a Convolutional Neural Network (primary recommendation) using default parameters, run:

  ```bash
  make nntrain
  ```
- To start the prediction service locally using FastAPI and Uvicorn, run:

  ```bash
  make predict
  ```
You can `curl` the `http://0.0.0.0:8000/api/predict` endpoint or simply navigate to that URL in your browser to get predictions from your trained Convolutional Neural Network for the next hour's price point (defaults).
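If you'd rather query the endpoint from Python than from `curl` or a browser, something like the following works. This is a sketch: it only assumes the service returns a JSON body, and makes no assumptions about the response's fields:

```python
import json
from urllib.request import urlopen

PREDICT_URL = "http://0.0.0.0:8000/api/predict"


def get_prediction(url: str = PREDICT_URL) -> dict:
    """GET the prediction endpoint and decode its JSON body.

    The exact response shape depends on the service; here we only
    assume it is a JSON object.
    """
    with urlopen(url, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    # Requires the prediction service to be running locally (`make predict`).
    print(get_prediction())
```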
- To build the prediction service into a Docker container, navigate to the root of this repository and run:

  ```bash
  docker build -t crypto-real-time-inference:v0 .
  ```
- To start the prediction service container, run:

  ```bash
  docker run -d -p 8000:8000 crypto-real-time-inference:v0
  ```
The containerized prediction service will serve predictions at `http://0.0.0.0:8000/api/predict`.
- To set up the frontend client, navigate to the `frontend` directory and run:

  ```bash
  npm install
  ```
You will also need to make a copy of `.env.local.example` and rename it to `.env.local`:

```bash
cp .env.local.example .env.local
```
The default values in `.env.local` should work out of the box for local development. If you change where the backend prediction service is hosted, you will need to update the `CRYPTO_INFERENCE_API_URI` variable in `.env.local` to reflect the new URI.
- To start the frontend client development server, navigate to the `frontend` directory and run:

  ```bash
  npm run dev
  ```
The default configuration will spin up the frontend client development server at `http://localhost:3000` and the backend prediction service at `http://localhost:8000`, with Hot Module Reload enabled for both.
It is also possible to run the frontend server by itself, without the backend prediction service, by running:

```bash
npm run next-dev
```
- To build the frontend client for production, navigate to the `frontend` directory and run:

  ```bash
  npm run build
  ```
This will build the frontend client into the `frontend/.next` directory. To serve the production build, run:

```bash
npm run start
```
A great candidate for deployment is Vercel; just make sure you set the `frontend` directory as the project directory after linking your repo. Other cloud providers will work as long as they call `npm run build` and `npm run start` in the root of the `frontend` directory.
- Continuous Integration
- Data extraction from Coinbase (CLI tool)
- Feature Engineering Pipeline
- Experiment tracking (Weights & Biases)
- Training Pipelines (ML & DL)
- Prediction Service (FastAPI, Docker)
- Continuous Delivery to Docker Hub (`x86_64`, `arm64` targets)
- Frontend
- Continuous Deployment