
Patense.local

A 100% local, private document search tool. It enables you to run deep, private searches across hundreds of pages per minute to get relevant context for your queries. Patense.local uses vLLM by default but can run on any backend LLM server that exposes an OpenAI-compatible API.

It breaks your references into pages, passes each page to an LLM along with the query, and asks whether the content is relevant. If it is, it displays a short quote with a link to the full page.
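The per-page relevance loop can be sketched against any OpenAI-compatible endpoint. This is an illustrative sketch, not the app's actual code: the helper names, the prompt wording, and the endpoint URL (vLLM's default port 8000) are all assumptions.

```typescript
// Sketch of the per-page relevance check (hypothetical names, not Patense.local's source).
const BASE_URL = "http://localhost:8000/v1"; // vLLM's default OpenAI-compatible endpoint
const MODEL = "NousResearch/Meta-Llama-3.1-8B-Instruct";

// Build the messages asking the LLM whether one page is relevant to the query.
function buildRelevancePrompt(page: string, query: string) {
  return [
    {
      role: "system",
      content:
        "You judge whether a document page is relevant to a query. " +
        "Answer YES or NO, then quote the most relevant sentence.",
    },
    { role: "user", content: `Query: ${query}\n\nPage:\n${page}` },
  ];
}

// Send one page to the backend; returns the raw model answer.
async function checkPageRelevance(page: string, query: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: MODEL, messages: buildRelevancePrompt(page, query) }),
  });
  const body = (await res.json()) as { choices: { message: { content: string } }[] };
  return body.choices[0].message.content;
}
```

Because each page is judged independently, the calls can be issued concurrently, which is what makes searching hundreds of pages per minute feasible on a local GPU.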

Features

  • Deep document search - find relevant portions of references fast with AI
  • Inventive feature extraction - get all disclosed features for possible claims or amendments
  • OA Auditor (new) - upload a set of claims and references; the AI parses the claims into inventive elements and searches every reference for each element

Deep Search Demo

Extraction Demo

Key Points

  • Privacy First: Run the tool entirely on your local machine, ensuring full control over your data.
  • High Performance: Search and analyze large documents quickly and efficiently.
  • Flexible Backend: vLLM is the default, but Patense.local works with any OpenAI-compatible backend LLM server.

Requirements

  • vLLM (installation is outside the scope of this guide).
  • Node.js and npm (These are necessary to run the application. If you're unfamiliar with installing them, it might be easier to use Patense.ai).

Installation

  1. Clone the Repository

    git clone https://github.com/JohnZolton/patense-local.git
    cd patense-local
    
  2. Install Dependencies

    npm install

    2.1 Rename .env.example to .env

  3. Configure the Backend

    Start your backend LLM server in API mode. With vLLM installed, run:

    vllm serve NousResearch/Meta-Llama-3.1-8B-Instruct --max-model-len 8000 --tensor-parallel-size 2 # set --tensor-parallel-size to the number of GPUs you have
    
    
  4. Initialize the Database

    In the patense-local folder, run:

    npm run db:push
  5. Run the Application

    In the patense-local folder, run:

    npm run dev
  6. Navigate to http://localhost:3000
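If the app can't reach the backend, a quick sanity check is to list the models the server reports. This sketch assumes vLLM's default port 8000; override it with the hypothetical OPENAI_BASE_URL variable if your setup differs.

```typescript
// Smoke-test the local OpenAI-compatible endpoint (assumes vLLM's default port 8000).
const baseUrl = process.env.OPENAI_BASE_URL ?? "http://localhost:8000/v1";

// Extract model ids from an OpenAI-style /v1/models response body.
function modelIds(body: { data: { id: string }[] }): string[] {
  return body.data.map((m) => m.id);
}

// Fetch the model list; throws if the server is up but returns an error status.
async function listModels(): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`);
  if (!res.ok) throw new Error(`Server returned HTTP ${res.status}`);
  return modelIds((await res.json()) as { data: { id: string }[] });
}

listModels()
  .then((ids) => console.log("Models served:", ids.join(", ")))
  .catch((err) => console.error("Backend not reachable:", err));
```

A successful run should print the model you passed to `vllm serve`; a connection error means the backend isn't listening where the app expects it.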

Usage

Once the application is running, you can begin uploading documents and performing searches.
