Compage gpt #157

Open · wants to merge 2 commits into base: pre-main
3 changes: 3 additions & 0 deletions deploy/build-docker-images.sh
@@ -5,8 +5,11 @@ TAG_NAME="v2"
CORE_IMAGE="ghcr.io/intelops/compage/core:$TAG_NAME"
APP_IMAGE="ghcr.io/intelops/compage/app:$TAG_NAME"
UI_IMAGE="ghcr.io/intelops/compage/ui:$TAG_NAME"
LLM_BACKEND_IMAGE="ghcr.io/intelops/compage/llm_backend:$TAG_NAME"

# create docker images for core, app, ui and llm_backend
docker build -t $CORE_IMAGE --network host ../core/
docker build -t $APP_IMAGE --network host ../app/
docker build -t $UI_IMAGE --network host ../ui/
docker build -t $LLM_BACKEND_IMAGE --network host ../llm_backend/

3 changes: 2 additions & 1 deletion deploy/push-docker-images-to-github.sh
@@ -5,4 +5,5 @@ source build-docker-images.sh
# push docker images for core, app, ui and llm_backend
docker push $CORE_IMAGE
docker push $APP_IMAGE
docker push $UI_IMAGE
docker push $LLM_BACKEND_IMAGE
11 changes: 11 additions & 0 deletions llm_backend/Dockerfile
@@ -0,0 +1,11 @@
FROM python:3.11

RUN mkdir -p /app

WORKDIR /app
COPY . .

RUN pip install -r requirements.txt
EXPOSE 8000
# ENTRYPOINT [ "python", "generate_code.py" ]
CMD ["uvicorn", "generate_code:app", "--host", "0.0.0.0", "--port", "8000"]
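To try the image on its own, it can be built and run locally; a minimal sketch, assuming the commands are executed from the repository root and that the tag mirrors the one used in `build-docker-images.sh` (the OpenAI key is passed per request, so nothing needs to be baked into the image):

<pre>
<code>docker build -t ghcr.io/intelops/compage/llm_backend:v2 ./llm_backend/
docker run -p 8000:8000 ghcr.io/intelops/compage/llm_backend:v2</code>
</pre>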
40 changes: 40 additions & 0 deletions llm_backend/README.md
@@ -0,0 +1,40 @@
## Requirements

To successfully run the program, make sure you have all the necessary dependencies installed. These dependencies are listed in the `requirements.txt` file. Before executing the program, follow these steps:

## Setting Up the Environment

1. Create a Python virtual environment using either the built-in `venv` module or conda, with Python 3.11.4.

2. Activate the newly created virtual environment. This step ensures that the required packages are isolated from your system-wide Python installation.
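For example, with the built-in `venv` module on a Unix-like system (a sketch; the environment name `.venv` is an arbitrary choice):

<pre>
<code>python3.11 -m venv .venv
source .venv/bin/activate</code>
</pre>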

## Installing Dependencies

3. Install the required dependencies by running the following command in your terminal:

<pre>
<code>pip install -r requirements.txt</code>
</pre>

This command will read the `requirements.txt` file and install all the necessary packages into your virtual environment.

## Running the Code

4. Once the dependencies are installed, you can run the program using the following command:

<pre>
<code>uvicorn generate_code:app --reload</code>
</pre>

This command starts the Uvicorn server and launches the application. The `--reload` flag enables auto-reloading, which is useful during development.
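With the server running, you can exercise the code-generation endpoint defined in `generate_code.py`. A sketch using `curl`, assuming the server listens on the default port 8000 and that your OpenAI key is sent in the `apikey` header:

<pre>
<code>curl -X POST http://localhost:8000/llm_generate_code/ \
  -H "Content-Type: application/json" \
  -H "apikey: $OPENAI_API_KEY" \
  -d '{"language": "python", "topic": "binary search"}'</code>
</pre>

The response is a JSON object with `code`, `code_explain`, `code_flow` and `code_unittest` fields.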

142 changes: 142 additions & 0 deletions llm_backend/generate_code.py
@@ -0,0 +1,142 @@
# Import libraries
import os

import uvicorn
from dotenv import load_dotenv
from fastapi import FastAPI, Header, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from pydantic import BaseModel

# Load environment variables from a local .env file, if one is present
load_dotenv()


# Request body: the target language and the topic to generate code for
class Item(BaseModel):
    language: str
    topic: str

app = FastAPI()
origins = ["*"]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/ping")
def ping():
    return {"message": "Hello World"}


@app.post("/llm_generate_code/")
async def generate_code(item: Item, apikey: str = Header(None)):
    # Prefer the key supplied in the request header; fall back to the environment
    api_key = apikey or os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise HTTPException(status_code=401, detail="Missing OpenAI API key")

    os.environ["OPENAI_API_KEY"] = api_key

code_language = item.language
code_topic = item.topic

    # prompt templates for code generation, explanation, flow diagram and test cases
    code_template = PromptTemplate(
        input_variables=['lang', 'top'],
        template='Write the code in {lang} language for {top} '
                 'with proper inline comments, maintaining markdown format of {lang}'
    )

    code_explain_template = PromptTemplate(
        input_variables=['top'],
        template='Explain in detail the working of the generated code and algorithm '
                 'for {top} in proper markdown format'
    )

    code_flow_template = PromptTemplate(
        input_variables=['top'],
        template='Generate the diagram flow for {top} in proper markdown format'
    )

    code_testcase_template = PromptTemplate(
        input_variables=['lang', 'top'],
        template='Generate the unit test cases and codes '
                 'and integration test cases with codes '
                 'in {lang} for {top} in proper markdown formats'
    )

    # use memory for the conversation
    code_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')
    explain_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')
    flow_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')
    testcase_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')

    # create the OpenAI LLM model
    open_ai_llm = OpenAI(temperature=0.7, max_tokens=1000)

    # create a chain to generate the code
    code_chain = LLMChain(llm=open_ai_llm, prompt=code_template,
                          output_key='code', memory=code_memory, verbose=True)

    # create another chain to explain the code
    code_explain_chain = LLMChain(llm=open_ai_llm, prompt=code_explain_template,
                                  output_key='code_explain', memory=explain_memory, verbose=True)

    # create another chain to generate the code flow
    code_flow_chain = LLMChain(llm=open_ai_llm, prompt=code_flow_template,
                               output_key='code_flow', memory=flow_memory, verbose=True)

    # create another chain to generate the test cases
    code_testcase_chain = LLMChain(llm=open_ai_llm, prompt=code_testcase_template,
                                   output_key='code_unittest', memory=testcase_memory, verbose=True)

    # combine all four chains so they run in sequence on the same inputs
    sequential_chain = SequentialChain(
        chains=[code_chain, code_explain_chain, code_flow_chain, code_testcase_chain],
        input_variables=['lang', 'top'],
        output_variables=['code', 'code_explain', 'code_flow', 'code_unittest'])

    response = sequential_chain({'lang': code_language, 'top': code_topic})

    return {'code': response['code'], 'code_explain': response['code_explain'],
            'code_flow': response['code_flow'], 'code_unittest': response['code_unittest']}


if __name__ == "__main__":
    # Allow running the module directly: python generate_code.py
    uvicorn.run(app, host="0.0.0.0", port=8000)
125 changes: 125 additions & 0 deletions llm_backend/requirements.txt
@@ -0,0 +1,125 @@
aiohttp==3.8.4
aiosignal==1.3.1
altair==4.2.2
anyio==3.7.1
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
blinker==1.6.2
cachetools==5.3.1
certifi==2023.5.7
charset-normalizer==3.2.0
chromadb==0.3.23
click==8.1.5
clickhouse-connect==0.6.6
cmake==3.26.4
dataclasses-json==0.5.9
decorator==5.1.1
duckdb==0.8.1
entrypoints==0.4
fastapi==0.100.0
filelock==3.12.2
frozenlist==1.4.0
fsspec==2023.6.0
gitdb==4.0.10
GitPython==3.1.32
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httptools==0.6.0
huggingface-hub==0.16.4
idna==3.4
importlib-metadata==6.8.0
Jinja2==3.1.2
joblib==1.3.1
jsonschema==4.18.3
jsonschema-specifications==2023.6.1
langchain==0.0.174
lit==16.0.6
lz4==4.3.2
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
networkx==3.1
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.1
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
openai==0.27.2
openapi-schema-pydantic==1.2.4
packaging==23.1
pandas==2.0.3
Pillow==10.0.0
posthog==3.0.1
protobuf==3.20.3
pyarrow==12.0.1
pydantic==1.10.11
pydeck==0.8.1b0
Pygments==2.15.1
Pympler==1.0.1
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3
PyYAML==6.0
referencing==0.29.1
regex==2023.6.3
requests==2.31.0
rich==13.4.2
rpds-py==0.8.10
safetensors==0.3.1
scikit-learn==1.3.0
scipy==1.11.1
sentence-transformers==2.2.2
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.4.1
SQLAlchemy==2.0.18
starlette==0.27.0
streamlit==1.22.0
sympy==1.12
tenacity==8.2.2
threadpoolctl==3.2.0
tiktoken==0.3.3
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
torch==2.0.1
torchvision==0.15.2
tornado==6.3.2
tqdm==4.65.0
transformers==4.30.2
triton==2.0.0
typing-inspect==0.9.0
typing_extensions==4.7.1
tzdata==2023.3
tzlocal==5.0.1
urllib3==2.0.3
uvicorn==0.22.0
uvloop==0.17.0
validators==0.20.0
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
wikipedia==1.4.0
yarl==1.9.2
zipp==3.16.1
zstandard==0.21.0