[Badge: CI Build and Push Prediction Service to Docker Hub]

[Badges: Python Version · Poetry · Ruff · W&B · TensorFlow · FastAPI · Next JS · TypeScript · React · TailwindCSS · Go]

Crypto Real-Time Inference

The aim of this application is to leverage historical cryptocurrency price data and cutting-edge machine learning algorithms to serve real-time inferences about Bitcoin and Ethereum price points one hour into the future.

This is my first time working with time series data. Here, the raw time series datapoints are served as OHLC ("open", "high", "low", "close") "candles".

At a high level, I've chosen to think of a given cryptocurrency as a complex system (read: chaotic system), emergent as a phenomenon of large N interactions between groups of humans. Inherently, this is a social system.

In this framework, a "candle" is a measurement of a cryptocurrency's state at a given moment in time - and by measuring state at a series of time points we can see how the system's state evolves over time. The raw dataset itself (consisting of multiple candles) is a function that maps time points to empirically measured states. At a fundamental level, the same concepts can be applied to any physical system composed of a large number of interacting variables - which means this is a very challenging problem!

This application natively makes predictions for both Bitcoin and Ethereum price points, though the source code supports any cryptocurrency that has publicly available data.

Disclaimer

Cryptocurrency trading involves inherent risks and is subject to market fluctuations. The code here is intended for informational purposes only and should not be considered financial advice. Always conduct thorough research and exercise caution when trading cryptocurrencies.

Quick Start 🐍 🚀 ✨

Setup

Note: I've targeted Ubuntu 20.04/22.04 for automated dev setup.

  1. You can clone this repository onto a machine with:
git clone https://github.com/christopherkeim/crypto-real-time-inference.git
  2. Once you have a local copy of this repository, navigate into this directory and run the setup.sh script:
cd crypto-real-time-inference
bash setup.sh

This will install Poetry 1.5.1 and Python 3.10 into your environment.
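Assuming setup.sh leaves both tools on your PATH (worth confirming if you are on a different distro), you can sanity-check the installs with:

python3.10 --version
poetry --version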

Dependency Installation

  1. To install the Python dependencies for this application, run:
make install
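If you want to confirm that the dependencies resolved correctly, listing the packages in the project's virtual environment is a quick check (this assumes the install target delegates to the Poetry environment set up above):

poetry show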

Data

  1. To download Bitcoin candles using default parameters (from September 2020 to the current day), run:
make rawdata

Feature Engineering

  1. To build supervised-machine-learning-ready datasets from this raw price data, run:
make features

Machine Learning Training

  1. To build a Lasso Regressor model (primary recommendation) using default parameters, run:
make train

Deep Learning Training

  1. To build a Convolutional Neural Network (primary recommendation) using default parameters, run:
make nntrain

Model Prediction (Endpoint)

  1. To start the prediction service locally using FastAPI and Uvicorn, run:
make predict

You can curl the http://0.0.0.0:8000/api/predict endpoint or simply navigate to that URL in your browser to garner predictions from your trained Convolutional Neural Network for the next hour's price point (using default parameters).
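For example, a minimal request from the command line (FastAPI also serves interactive API docs at /docs by default, which is the easiest place to see any supported query parameters):

curl http://0.0.0.0:8000/api/predict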

Prediction Backend 🧙‍♂️ 🔧

Prediction Service Containerization

  1. To build the prediction service into a Docker container, navigate to the root of this repository and run:
docker build -t crypto-real-time-inference:v0 .
  2. To start the prediction service container, run:
docker run -d -p 8000:8000 crypto-real-time-inference:v0

The containerized prediction service will serve predictions at http://0.0.0.0:8000/api/predict.
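To verify that the container came up cleanly, you can inspect it and tail its logs with standard Docker commands before hitting the endpoint just as in the local case:

docker ps
docker logs -f <container_id>
curl http://0.0.0.0:8000/api/predict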

Frontend 🪅 ✨

Frontend Client Setup

  1. To set up the frontend client, navigate to the frontend directory and run:
npm install

You will also need to make a copy of .env.local.example and rename it to .env.local:

cp .env.local.example .env.local

The default values in .env.local should work out of the box for local development. If you change where the backend prediction service is hosted, you will need to update the CRYPTO_INFERENCE_API_URI variable in .env.local to reflect the new URI.
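For reference, a minimal .env.local might look like the following. The variable name comes from the text above, but the value shown is only an assumption based on the default local backend address, so defer to .env.local.example if they differ:

CRYPTO_INFERENCE_API_URI=http://localhost:8000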

Frontend Client Development

  1. To start the frontend client development server, navigate to the frontend directory and run:
npm run dev

The default configuration will spin up the frontend client development server at http://localhost:3000 and the backend prediction service at http://localhost:8000, with Hot Module Reload enabled for both.

It is also possible to run the frontend server by itself, without the backend prediction service, by running:

npm run next-dev

Frontend Client Production Build

  1. To build the frontend client for production, navigate to the frontend directory and run:
npm run build

This will build the frontend client into the frontend/.next directory. To serve the production build, run:

npm run start

A great candidate for deployment is Vercel; just make sure you set the frontend directory as the project directory after linking your repo. Other cloud providers will work as long as they call npm run build and npm run start in the root of the frontend directory.

More to come (see below)

In Progress 🔧💻

  • Continuous Integration
  • Data extraction from Coinbase (CLI tool)
  • Feature Engineering Pipeline
  • Experiment tracking (Weights & Biases)
  • Training Pipelines (ML & DL)
  • Prediction Service (FastAPI, Docker)
  • Continuous Delivery to Docker Hub (x86_64, arm64 targets)
  • Frontend
  • Continuous Deployment