Create a `models` directory in the project root, then download this trained model (trained on the AESLC dataset) and put it in the `models` directory. Then you can run it as a container:
- Build the image with
docker build -f email_writer.Dockerfile -t email_writer .
and run it with
docker run -p 6060:6060 --name email_writer email_writer
To enable GPU support, install the NVIDIA Container Toolkit and run the container with
docker run --gpus all -p 6060:6060 --name email_writer email_writer
- Or you can run it directly by setting up the repo (preferably inside a venv) with
pip install -r requirements.txt
and
pip install -e email_writer
- Then, run the command
uvicorn email_writer.main:app --port 6060 --host 0.0.0.0
To call the inference endpoint, send a GET request to http://0.0.0.0:6060/email
with the header Content-Type: application/json
and the required JSON body, or use the Swagger UI at http://0.0.0.0:6060/docs#/default/read_root_email_get.
The JSON body takes the following fields:
- subject: subject of the email
- from: email address of the sender
- to: email address of the receiver
- salutation: salutation of the sender
- temperature: sampling temperature used during model inference
- n_gen: number of generated examples
Example request:
{
  "subject": "Interview Challenge",
  "salutation": "Giovani nice to meet you last week",
  "from": "[email protected]",
  "to": "[email protected]",
  "temperature": 0.7,
  "n_gen": 4
}
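As a sketch, the request above can also be assembled from Python using only the standard library. The host, port, and field names below match the uvicorn command and example request in this README; adjust them if you run the server elsewhere. Note that the endpoint expects a GET request that carries a JSON body, which `urllib` supports via the explicit `method` argument.

```python
import json
import urllib.request


def build_email_request(subject, salutation, sender, receiver,
                        temperature=0.7, n_gen=4):
    """Assemble the GET request for the /email endpoint."""
    payload = {
        "subject": subject,
        "salutation": salutation,
        # "from" is a Python keyword, so it is passed as a plain dict key
        "from": sender,
        "to": receiver,
        "temperature": temperature,
        "n_gen": n_gen,
    }
    return urllib.request.Request(
        "http://0.0.0.0:6060/email",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="GET",  # the endpoint uses GET even though a body is sent
    )


req = build_email_request(
    "Interview Challenge",
    "Giovani nice to meet you last week",
    "[email protected]",
    "[email protected]",
)
# With the server running, the response could then be read with:
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())
```

The snippet only builds the request object; sending it requires the server from the steps above to be running.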