Merge pull request #45 from Boerderij/nightly
v1.0 Merge
dirtycajunrice authored Dec 10, 2018
2 parents 998b457 + c27215c commit f4d3aaf
Showing 32 changed files with 1,674 additions and 4,267 deletions.
5 changes: 4 additions & 1 deletion .gitignore
Expand Up @@ -5,7 +5,10 @@
.Trashes
ehthumbs.db
Thumbs.db
configuration.py
__pycache__
GeoLite2-City.mmdb
GeoLite2-City.tar.gz
data/varken.ini
.idea/
Legacy/configuration.py
varken-venv/
73 changes: 73 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,73 @@
# Change Log

## [v1.0](https://github.com/Boerderij/Varken/tree/v1.0) (2018-12-09)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v0.3-nightly...v1.0)

**Implemented enhancements:**

- Add cisco asa from legacy [\#44](https://github.com/Boerderij/Varken/issues/44)
- Add server ID to ombi to differentiate [\#43](https://github.com/Boerderij/Varken/issues/43)

## [v0.3-nightly](https://github.com/Boerderij/Varken/tree/v0.3-nightly) (2018-12-07)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v0.2-nightly...v0.3-nightly)

**Implemented enhancements:**

- Create Changelog for nightly release [\#39](https://github.com/Boerderij/Varken/issues/39)
- Create proper logging [\#34](https://github.com/Boerderij/Varken/issues/34)

**Closed issues:**

- Remove "dashboard" folder and subfolders [\#42](https://github.com/Boerderij/Varken/issues/42)
- Remove "Legacy" folder [\#41](https://github.com/Boerderij/Varken/issues/41)

## [v0.2-nightly](https://github.com/Boerderij/Varken/tree/v0.2-nightly) (2018-12-06)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v0.1...v0.2-nightly)

**Implemented enhancements:**

- Tautulli - multiple server support? [\#25](https://github.com/Boerderij/Varken/issues/25)

**Closed issues:**

- Create the DB if it does not exist. [\#38](https://github.com/Boerderij/Varken/issues/38)
- create systemd examples [\#37](https://github.com/Boerderij/Varken/issues/37)
- Create a GeoIP db downloader and refresher [\#36](https://github.com/Boerderij/Varken/issues/36)
- Create unique IDs for all scripts to prevent duplicate data [\#35](https://github.com/Boerderij/Varken/issues/35)
- use a config.ini instead of command-line flags [\#33](https://github.com/Boerderij/Varken/issues/33)
- Migrate crontab to python schedule package [\#31](https://github.com/Boerderij/Varken/issues/31)
- Consolidate missing and missing\_days in sonarr.py [\#30](https://github.com/Boerderij/Varken/issues/30)
- Ombi something new \[Request\] [\#26](https://github.com/Boerderij/Varken/issues/26)
- Support for Linux without ASA [\#21](https://github.com/Boerderij/Varken/issues/21)

**Merged pull requests:**

- varken to nightly [\#40](https://github.com/Boerderij/Varken/pull/40) ([DirtyCajunRice](https://github.com/DirtyCajunRice))

## [v0.1](https://github.com/Boerderij/Varken/tree/v0.1) (2018-10-20)
**Implemented enhancements:**

- The address 172.17.0.1 is not in the database. [\#17](https://github.com/Boerderij/Varken/issues/17)
- Local streams aren't showing with Tautulli [\#16](https://github.com/Boerderij/Varken/issues/16)
- Worldmap panel [\#15](https://github.com/Boerderij/Varken/issues/15)

**Closed issues:**

- Tautulli.py not working. [\#18](https://github.com/Boerderij/Varken/issues/18)
- Issues with scripts [\#12](https://github.com/Boerderij/Varken/issues/12)
- issue with new tautulli.py [\#10](https://github.com/Boerderij/Varken/issues/10)
- ombi.py fails when attempting to update influxdb [\#9](https://github.com/Boerderij/Varken/issues/9)
- GeoIP Going to Break July 1st [\#8](https://github.com/Boerderij/Varken/issues/8)
- \[Request\] Documentation / How-to Guide [\#1](https://github.com/Boerderij/Varken/issues/1)

**Merged pull requests:**

- v0.1 [\#20](https://github.com/Boerderij/Varken/pull/20) ([samwiseg0](https://github.com/samwiseg0))
- Added selfplug [\#19](https://github.com/Boerderij/Varken/pull/19) ([si0972](https://github.com/si0972))
- Major rework of the scripts [\#14](https://github.com/Boerderij/Varken/pull/14) ([samwiseg0](https://github.com/samwiseg0))
- fix worldmap after change to maxmind local db [\#11](https://github.com/Boerderij/Varken/pull/11) ([madbuda](https://github.com/madbuda))
- Update sonarr.py [\#7](https://github.com/Boerderij/Varken/pull/7) ([ghost](https://github.com/ghost))
- Create crontabs [\#6](https://github.com/Boerderij/Varken/pull/6) ([ghost](https://github.com/ghost))
- update plex\_dashboard.json [\#5](https://github.com/Boerderij/Varken/pull/5) ([ghost](https://github.com/ghost))
- Update README.md [\#4](https://github.com/Boerderij/Varken/pull/4) ([ghost](https://github.com/ghost))
- added sickrage portion [\#3](https://github.com/Boerderij/Varken/pull/3) ([ghost](https://github.com/ghost))
129 changes: 31 additions & 98 deletions README.md
@@ -1,115 +1,48 @@
# Grafana Scripts
Repo of API scripts (both pushing and pulling) that aggregate data into InfluxDB for Grafana
# Varken
[![Discord](https://img.shields.io/badge/Discord-Varken-7289DA.svg?logo=discord&style=flat-square)](https://discord.gg/AGTG44H)
[![BuyMeACoffee](https://img.shields.io/badge/BuyMeACoffee-Donate-ff813f.svg?logo=CoffeeScript&style=flat-square)](https://www.buymeacoffee.com/varken)
[![Docker Pulls](https://img.shields.io/docker/pulls/boerderij/varken.svg?style=flat-square)](https://hub.docker.com/r/boerderij/varken/)

Requirements w/ install links: [Grafana](http://docs.grafana.org/installation/), [Python3](https://www.python.org/downloads/), [InfluxDB](https://docs.influxdata.com/influxdb/v1.5/introduction/installation/)
Varken is Dutch for "pig". PIG is an acronym for Plex/InfluxDB/Grafana

<center><img width="800" src="https://i.imgur.com/av8e0HP.png"></center>
Varken is a standalone command-line utility that aggregates data
from the Plex ecosystem into InfluxDB. Examples use Grafana as a
frontend.

## Quick Setup
1. Install requirements `pip3 install -r requirements.txt`
1. Make a copy of `configuration.example.py` to `configuration.py`
2. Make the appropriate changes to `configuration.py`
1. Create your plex database in influx
```sh
user@server: ~$ influx
> CREATE DATABASE plex
> quit
```
1. After completing the [getting started](http://docs.grafana.org/guides/getting_started/) portion of grafana, create your datasource for influxdb. At a minimum, you will need the plex database.
1. Install `grafana-cli plugins install grafana-worldmap-panel`
1. Click the + on your menu and click import. Using the .json provided in this repo, paste it in and customize as you like.
Requirements:
* Python3.6+
* Python3-pip

<p align="center">
<img width="800" src="https://i.imgur.com/av8e0HP.png">
</p>

## Quick Setup
1. Clone the repository `sudo git clone https://github.com/Boerderij/Varken.git /opt/Varken`
1. Follow the systemd install instructions located in `varken.systemd`
1. Create venv in project `cd /opt/Varken && /usr/bin/python3 -m venv varken-venv`
1. Install requirements `/opt/Varken/varken-venv/bin/python -m pip install -r requirements.txt`
1. Make a copy of `varken.example.ini` to `varken.ini` in the `data` folder
`cp /opt/Varken/data/varken.example.ini /opt/Varken/data/varken.ini`
1. Make the appropriate changes to `varken.ini`
e.g. `nano /opt/Varken/data/varken.ini`
1. Make sure all the files have the appropriate permissions `sudo chown varken:varken -R /opt/Varken`
1. After completing the [getting started](http://docs.grafana.org/guides/getting_started/) portion of grafana, create your datasource for influxdb.
1. Install `grafana-cli plugins install grafana-worldmap-panel`
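The systemd step above points at `varken.systemd` in the repo; that file is authoritative. As a rough sketch only, a unit built from the paths in these steps might look like this (the `ExecStart`, `User`, and restart policy here are assumptions, not the shipped file):

```ini
[Unit]
Description=Varken - aggregate Plex ecosystem data into InfluxDB
After=network-online.target

[Service]
Type=simple
User=varken
Group=varken
WorkingDirectory=/opt/Varken
ExecStart=/opt/Varken/varken-venv/bin/python /opt/Varken/Varken.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Copy it to `/etc/systemd/system/varken.service`, then `systemctl daemon-reload && systemctl enable --now varken`.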

### Docker

Repo is included in [si0972/grafana-scripts](https://github.com/si0972/grafana-scripts-docker)
Repo is included in [Boerderij/docker-Varken](https://github.com/Boerderij/docker-Varken)

<details><summary>Example</summary>
<p>

```
-docker create \
-  --name=grafana-scripts \
-  -v <path to data>:/Scripts \
-  -e plex=true \
+docker run -d \
+  --name=varken \
+  -v <path to data>:/config \
+  -e PGID=<gid> -e PUID=<uid> \
-  si0972/grafana-scripts:latest
+  boerderij/varken:nightly
```
</p>
</details>
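For compose users, the `docker run` example above translates roughly to the following; this is an illustrative sketch rather than a file shipped by the repo, with the placeholders kept as in the example:

```yaml
version: '2'
services:
  varken:
    image: boerderij/varken:nightly
    container_name: varken
    volumes:
      - <path to data>:/config
    environment:
      - PUID=<uid>
      - PGID=<gid>
    restart: unless-stopped
```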
## Scripts
### `sonarr.py`
Gathers data from Sonarr and pushes it to influxdb.
```
Script to aid in data gathering from Sonarr

optional arguments:
-h, --help show this help message and exit
--missing Get all missing TV shows
--missing_days MISSING_DAYS
Get missing TV shows in past X days
--upcoming Get upcoming TV shows
--future FUTURE Get TV shows on X days into the future. Includes today.
i.e. --future 2 is Today and Tomorrow
--queue Get TV shows in queue
```
- Notes:
  - You cannot stack the arguments, e.g. `sonarr.py --missing --queue` will not work
  - One argument must be supplied
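The one-argument rule described in the notes can be enforced with argparse's mutually exclusive groups. A minimal sketch of such a parser (illustrative only, not the script's actual code):

```python
from argparse import ArgumentParser

# Sketch: a parser that accepts exactly one of the data-gathering flags.
parser = ArgumentParser(prog='sonarr.py',
                        description='Script to aid in data gathering from Sonarr')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--missing', action='store_true', help='Get all missing TV shows')
group.add_argument('--missing_days', type=int, help='Get missing TV shows in past X days')
group.add_argument('--upcoming', action='store_true', help='Get upcoming TV shows')
group.add_argument('--future', type=int, help='Get TV shows X days into the future')
group.add_argument('--queue', action='store_true', help='Get TV shows in queue')

# A single flag parses cleanly; stacking two raises a usage error and exits.
opts = parser.parse_args(['--missing'])
print(opts.missing)  # → True
```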
### `radarr.py`
Gathers data from Radarr and pushes it to influxdb
```
Script to aid in data gathering from Radarr

optional arguments:
-h, --help show this help message and exit
--missing Get missing movies
--missing_avl Get missing available movies
--queue Get movies in queue
```
- Notes:
  - You cannot stack the arguments, e.g. `radarr.py --missing --queue` will not work
  - One argument must be supplied
  - `--missing_avl` refers to movies Radarr has determined should be available to download. A movie will appear on this list if it carries a <span style="color:red">RED "Missing"</span> tag; a <span style="color:blue">BLUE "Missing"</span> tag marks a movie that is missing but not yet available for download. These tags are determined by the movie's "Minimum Availability" setting.
### `ombi.py`
Gathers data from Ombi and pushes it to influxdb
```
Script to aid in data gathering from Ombi

optional arguments:
-h, --help show this help message and exit
--total Get the total count of all requests
--counts Get the count of pending, approved, and available requests
```
- Notes:
  - You cannot stack the arguments, e.g. `ombi.py --total --counts` will not work
  - One argument must be supplied
### `tautulli.py`
Gathers data from Tautulli and pushes it to influxdb. On initial run it will download the geoip2 DB and use it for locations.
## Notes
To run the python scripts crontab is currently leveraged. Examples:
```sh
### Modify paths as appropriate. python3 is located in different places for different users. (`which python3` will give you the path)
### to edit your crontab entry, do not modify /var/spool/cron/crontabs/<user> directly, use `crontab -e`
### Crontabs require an empty line at the end or they WILL NOT run. Add two trailing blank lines to be safe
### It is bad practice to run any cronjob more than once a minute. For timing help: https://crontab.guru/
* * * * * /usr/bin/python3 /path-to-grafana-scripts/ombi.py --total
* * * * * /usr/bin/python3 /path-to-grafana-scripts/tautulli.py
* * * * * /usr/bin/python3 /path-to-grafana-scripts/radarr.py --queue
* * * * * /usr/bin/python3 /path-to-grafana-scripts/sonarr.py --queue
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/radarr.py --missing
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/sonarr.py --missing
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/sickrage.py
```
116 changes: 116 additions & 0 deletions Varken.py
@@ -0,0 +1,116 @@
import sys

# Check for python3.6 or newer to resolve erroneous typing.NamedTuple issues
if sys.version_info < (3, 6):
exit('Varken requires python3.6 or newer')

import schedule
import threading
import platform
import distro

from sys import exit
from time import sleep
from os import access, R_OK
from os.path import isdir, abspath, dirname, join
from argparse import ArgumentParser, RawTextHelpFormatter

from varken.iniparser import INIParser
from varken.sonarr import SonarrAPI
from varken.tautulli import TautulliAPI
from varken.radarr import RadarrAPI
from varken.ombi import OmbiAPI
from varken.cisco import CiscoAPI
from varken.dbmanager import DBManager
from varken.varkenlogger import VarkenLogger

PLATFORM_LINUX_DISTRO = ' '.join(x for x in distro.linux_distribution() if x)


def threaded(job):
    thread = threading.Thread(target=job)
    thread.start()


if __name__ == "__main__":
    parser = ArgumentParser(prog='varken',
                            description='Command-line utility to aggregate data from the plex ecosystem into InfluxDB',
                            formatter_class=RawTextHelpFormatter)

    parser.add_argument("-d", "--data-folder", help='Define an alternate data folder location')
    parser.add_argument("-D", "--debug", action='store_true', help='Use to enable DEBUG logging')

    opts = parser.parse_args()

    DATA_FOLDER = abspath(join(dirname(__file__), 'data'))

    if opts.data_folder:
        ARG_FOLDER = opts.data_folder

        if isdir(ARG_FOLDER):
            DATA_FOLDER = ARG_FOLDER
            if not access(ARG_FOLDER, R_OK):
                exit("Read permission error for {}".format(ARG_FOLDER))
        else:
            exit("{} does not exist".format(ARG_FOLDER))

    # Initiate the logger
    vl = VarkenLogger(data_folder=DATA_FOLDER, debug=opts.debug)
    vl.logger.info('Starting Varken...')

    vl.logger.info(u"{} {} ({}{})".format(
        platform.system(), platform.release(), platform.version(),
        ' - {}'.format(PLATFORM_LINUX_DISTRO) if PLATFORM_LINUX_DISTRO else ''
    ))
    vl.logger.info(u"Python {}".format(sys.version))

    CONFIG = INIParser(DATA_FOLDER)
    DBMANAGER = DBManager(CONFIG.influx_server)

    if CONFIG.sonarr_enabled:
        for server in CONFIG.sonarr_servers:
            SONARR = SonarrAPI(server, DBMANAGER)
            if server.queue:
                schedule.every(server.queue_run_seconds).seconds.do(threaded, SONARR.get_queue)
            if server.missing_days > 0:
                schedule.every(server.missing_days_run_seconds).seconds.do(threaded, SONARR.get_missing)
            if server.future_days > 0:
                schedule.every(server.future_days_run_seconds).seconds.do(threaded, SONARR.get_future)

    if CONFIG.tautulli_enabled:
        for server in CONFIG.tautulli_servers:
            TAUTULLI = TautulliAPI(server, DBMANAGER)
            if server.get_activity:
                schedule.every(server.get_activity_run_seconds).seconds.do(threaded, TAUTULLI.get_activity)

    if CONFIG.radarr_enabled:
        for server in CONFIG.radarr_servers:
            RADARR = RadarrAPI(server, DBMANAGER)
            if server.get_missing:
                schedule.every(server.get_missing_run_seconds).seconds.do(threaded, RADARR.get_missing)
            if server.queue:
                schedule.every(server.queue_run_seconds).seconds.do(threaded, RADARR.get_queue)

    if CONFIG.ombi_enabled:
        for server in CONFIG.ombi_servers:
            OMBI = OmbiAPI(server, DBMANAGER)
            if server.request_type_counts:
                schedule.every(server.request_type_run_seconds).seconds.do(threaded, OMBI.get_request_counts)
            if server.request_total_counts:
                schedule.every(server.request_total_run_seconds).seconds.do(threaded, OMBI.get_total_requests)

    if CONFIG.ciscoasa_enabled:
        for firewall in CONFIG.ciscoasa_firewalls:
            ASA = CiscoAPI(firewall, DBMANAGER)
            schedule.every(firewall.get_bandwidth_run_seconds).seconds.do(threaded, ASA.get_bandwidth)

    # Run all on startup
    SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled,
                        CONFIG.sonarr_enabled, CONFIG.ciscoasa_enabled]
    if not [enabled for enabled in SERVICES_ENABLED if enabled]:
        exit("All services disabled. Exiting")
    schedule.run_all()

    while True:
        schedule.run_pending()
        sleep(1)
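The run loop above pairs the `schedule` library with a thread-per-job wrapper so a slow API call never blocks the other jobs. The same pattern can be sketched with only the standard library (names and timings here are illustrative, not Varken's code):

```python
import threading
import time

def threaded(job):
    # Fire-and-forget: run the job on its own thread so the caller never blocks.
    thread = threading.Thread(target=job)
    thread.start()
    return thread

results = []

def slow_job():
    time.sleep(0.1)  # simulate a slow API call
    results.append('done')

# Kick off several jobs; they overlap instead of running back-to-back,
# so three 0.1s jobs finish in roughly 0.1s rather than 0.3s.
threads = [threaded(slow_job) for _ in range(3)]
for t in threads:
    t.join()

print(len(results))  # → 3
```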
34 changes: 0 additions & 34 deletions cisco_asa.py

This file was deleted.
