T-Ops 20.06.3 runs on Debian (Stable), is based heavily on docker and docker-compose, and includes dockerized versions of the following honeypots
- adbhoney,
- ciscoasa,
- citrixhoneypot,
- conpot,
- cowrie,
- dicompot,
- dionaea,
- elasticpot,
- glutton,
- heralding,
- honeypy,
- honeysap,
- honeytrap,
- ipphoney,
- mailoney,
- medpot,
- rdpy,
- snare,
- tanner
Furthermore, T-Ops includes the following tools:
- Cockpit, a lightweight web UI for docker, the OS, real-time performance monitoring and a web terminal.
- Cyberchef, a web app for encryption, encoding, compression and data analysis.
- ELK stack, to beautifully visualize all the events captured by T-Ops.
- Elasticsearch Head, a web front end for browsing and interacting with an Elasticsearch cluster.
- Fatt, a pyshark-based script for extracting network metadata and fingerprints from pcap files and live network traffic.
- Spiderfoot, an open source intelligence automation tool.
- Suricata, a Network Security Monitoring engine.
- Meet the system requirements. The T-Ops installation needs at least 8 GB RAM and 128 GB free disk space as well as a working (outgoing non-filtered) internet connection.
- Download the T-Ops ISO from GitHub or create it yourself.
- Install the system in a VM or on physical hardware with internet access.
- Enjoy your favorite beverage - watch and analyze.
- Technical Concept
- System Requirements
- Installation Types
- Installation
- Updates
- Options
- Roadmap
- Disclaimer
- FAQ
- Contact
- Licenses
- Credits
- Stay tuned
- Testimonial
T-Ops is based on the Debian (Stable) network installer. The honeypot daemons as well as other support components are dockered. This allows T-Ops to run multiple honeypot daemons and tools on the same network interface while maintaining a small footprint, and constrains each honeypot within its own environment.
In T-Ops we combine the dockerized honeypots ...
- adbhoney,
- ciscoasa,
- citrixhoneypot,
- conpot,
- cowrie,
- dicompot,
- dionaea,
- elasticpot,
- glutton,
- heralding,
- honeypy,
- honeysap,
- honeytrap,
- ipphoney,
- mailoney,
- medpot,
- rdpy,
- snare,
- tanner
... with the following tools ...
- Cockpit, a lightweight web UI for docker, the OS, real-time performance monitoring and a web terminal.
- Cyberchef, a web app for encryption, encoding, compression and data analysis.
- ELK stack, to beautifully visualize all the events captured by T-Ops.
- Elasticsearch Head, a web front end for browsing and interacting with an Elasticsearch cluster.
- Fatt, a pyshark-based script for extracting network metadata and fingerprints from pcap files and live network traffic.
- Spiderfoot, an open source intelligence automation tool.
- Suricata, a Network Security Monitoring engine.
... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot appliance.
While data within docker containers is volatile, T-Ops ensures a default 30 day persistence of all relevant honeypot and tool data in the well-known /data folder and sub-folders. The persistence configuration may be adjusted in /opt/tpot/etc/logrotate/logrotate.conf. Once a docker container crashes, all data produced within its environment is erased and a fresh instance is started from the corresponding docker image.
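For a sense of what that retention policy looks like, here is a minimal, purely illustrative logrotate stanza (the shipped logrotate.conf is authoritative and covers many more paths):

# illustrative only - see /opt/tpot/etc/logrotate/logrotate.conf for the real configuration
/data/cowrie/log/*.log {
  daily
  rotate 30
  compress
  missingok
  notifempty
}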
Basically, what happens when the system is booted up is the following:
- start host system
- start all the necessary services (e.g. cockpit, docker, etc.)
- start all docker containers via docker-compose (honeypots, nms, elk, etc.)
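Once booted, you can verify that this chain came up as expected, e.g. with the following commands (container names vary by edition):

sudo systemctl status tpot
sudo docker ps --format "table {{.Names}}\t{{.Status}}"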
The T-Ops project provides all the tools and documentation necessary to build your own honeypot system and contribute to our Sicherheitstacho.
The source code and configuration files are fully stored in the T-Ops GitHub repository. The docker images are preconfigured for the T-Ops environment. If you want to run the docker images separately, make sure you study the docker-compose configuration (/opt/tpot/etc/tpot.yml) and the T-Ops systemd script (/etc/systemd/system/tpot.service), as they provide a good starting point for implementing changes.
The individual docker configurations are located in the docker folder.
Depending on the installation type, whether installing on real hardware or in a virtual machine, make sure the designated system meets the following requirements:
- 8 GB RAM (less RAM is possible but might introduce swapping / instabilities)
- 128 GB SSD (smaller is possible but limits the capacity of storing events)
- Network via DHCP
- A working, non-proxied, internet connection
There are prebuilt installation types (editions) available, each focusing on different aspects, to get you started right out of the box; they are listed below. The docker-compose files are located in /opt/tpot/etc/compose. If you want to build your own compose file, just create a new one (based on the layout and settings of the prebuilds) in /opt/tpot/etc/compose and run tped.sh afterwards to point T-Ops to the new compose file and run your personalized edition; a short sketch of that workflow follows.
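For illustration, a minimal custom-edition workflow might look like this (standard.yml as the template filename is an assumption; pick any prebuilt compose file):

cp /opt/tpot/etc/compose/standard.yml /opt/tpot/etc/compose/custom.yml
vi /opt/tpot/etc/compose/custom.yml   # add or remove honeypot / tool services
tped.sh                               # point T-Ops to the new compose file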
Standard
- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dicompot, dionaea, elasticpot, heralding, honeysap, honeytrap, mailoney, medpot, rdpy, snare & tanner
- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
Sensor
- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dicompot, dionaea, elasticpot, heralding, honeypy, honeysap, honeytrap, mailoney, medpot, rdpy, snare & tanner
- Tools: cockpit, ewsposter, fatt, p0f & suricata
- Since there is no ELK stack provided, the Sensor installation only requires 4 GB of RAM.
Industrial
- Honeypots: conpot, cowrie, dicompot, heralding, honeysap, honeytrap, medpot & rdpy
- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
Collector
- Honeypots: heralding & honeytrap
- Tools: cockpit, cyberchef, fatt, ELK, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
NextGen
- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dicompot, dionaea, glutton, heralding, honeypy, honeysap, ipphoney, mailoney, medpot, rdpy, snare & tanner
- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
Medical
- Honeypots: dicompot & medpot
- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
The installation of T-Ops is straightforward and depends heavily on a working, transparent and non-proxied internet connection. Otherwise the installation will fail!
Firstly, decide if you want to download the prebuilt installation ISO image from GitHub, create it yourself, or post-install on an existing Debian 10 (Buster).
Secondly, decide where you want the system to run: on real hardware or in a virtual machine?
A prebuilt installation ISO image (~50MB) is available for download. It is created with the same ISO Creator you can use yourself, so downloading it simply saves you the time of fetching components and building the image. You can download the prebuilt installation ISO from GitHub and jump to the installation section.
For transparency reasons, and to give you the ability to customize your install, the ISO Creator enables you to create your own ISO installation image.
Requirements to create the ISO image:
- Debian 10 as host system (others may work, but remain untested)
- 4 GB of free memory
- 32 GB of free storage
- A working internet connection
How to create the ISO image:
- Clone the repository and enter it.
git clone https://github.com/nu11secur1ty/T-Ops
cd T-Ops
- Run the makeiso.sh script to build the ISO image. The script will download and install the dependencies necessary to build the image on the invoking machine. It will further download the Debian network installer image (~50MB) which T-Ops is based on.
sudo ./makeiso.sh
After a successful build, you will find the ISO image tpot.iso along with a SHA256 checksum tpot.sha256 in your folder.
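You can verify the image against the checksum (assuming tpot.sha256 is in the standard sha256sum format):

sha256sum -c tpot.sha256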
You may want to run T-Ops in a virtualized environment. The virtual system configuration depends on your virtualization provider.
T-Ops is successfully tested with VirtualBox and VMWare with only minor modifications to the default machine configurations.
It is important to make sure you meet the system requirements and assign virtual harddisk and RAM according to the requirements while making sure networking is bridged.
You need to enable promiscuous mode for the network interface for fatt, suricata and p0f to work properly. Make sure you enable it during configuration.
If you want to use a wifi card as the primary NIC for T-Ops, please be aware that not all network interface drivers support all wireless cards. In VirtualBox, for example, you have to choose the "MT SERVER" model of the NIC.
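For VirtualBox, an illustrative command line setup (the VM name "T-Ops" and host interface eth0 are assumptions; NIC type 82545EM corresponds to the Intel PRO/1000 MT Server model):

VBoxManage modifyvm "T-Ops" --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm "T-Ops" --nicpromisc1 allow-all
VBoxManage modifyvm "T-Ops" --nictype1 82545EM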
Lastly, mount the tpot.iso image in the VM and continue with the installation. You can now jump to the installation section.
If you decide to run T-Ops on dedicated hardware, just follow these steps:
- Burn a CD from the ISO image or make a bootable USB stick using the image. Whereas most CD burning tools allow you to burn from ISO images, the procedure to create a bootable USB stick from an ISO image depends on your system. There are various Windows GUI tools available; on Linux or macOS you can use the tool dd (see the sketch below) or create the USB stick with T-Ops's ISO Creator.
- Boot from the USB stick and install.
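A typical dd invocation on Linux, assuming the USB stick shows up as /dev/sdX (verify with lsblk first; dd overwrites the target device without asking):

sudo dd if=tpot.iso of=/dev/sdX bs=4M status=progress conv=fsync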
Please note: Limited tests have been performed for the Intel NUC platform; other hardware platforms remain untested. There is no hardware support of any kind.
In some cases it is necessary to install Debian 10 (Buster) on your own:
- Cloud provider does not offer mounting ISO images.
- Hardware setup needs special drivers and / or kernels.
- Within your company you have to set up special policies, software etc.
- You just like to stay on top of things.
The T-Ops Universal Installer will upgrade the system and install all required T-Ops dependencies.
Just follow these steps:
git clone https://github.com/nu11secur1ty/T-Ops
cd T-Ops/iso/installer/
./install.sh --type=user
The installer will now start and guide you through the install process.
You can also let the installer run automatically if you provide your own tpot.conf. An example is available in T-Ops/iso/installer/tpot.conf.dist. This should make things easier in case you want to automate the installation, e.g. with Ansible.
Just follow these steps while adjusting tpot.conf to your needs:
git clone https://github.com/nu11secur1ty/T-Ops
cd T-Ops/iso/installer/
cp tpot.conf.dist tpot.conf
./install.sh --type=auto --conf=tpot.conf
The installer will start automatically and guide you through the install process.
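For illustration, such a config might look like the following; the key names mirror tpot.conf.dist and should be treated as assumptions, so verify them against the dist file in your checkout:

# tpot.conf - example values only, see tpot.conf.dist for the authoritative keys
myCONF_TPOT_FLAVOR='STANDARD'
myCONF_WEB_USER='webuser'
myCONF_WEB_PW='ChangeMeToAStrongPassword'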
Cloud deployment examples are located in the cloud folder.
Currently there are examples with Ansible & Terraform.
If you would like to contribute, you can add other cloud deployments like Chef or Puppet or extend current methods with other cloud providers.
Please note: Cloud providers usually offer adjusted Debian OS images, which might not be compatible with T-Ops. There is no cloud provider support provided of any kind.
You can find an Ansible based T-Ops deployment in the cloud/ansible folder.
The Playbook in the cloud/ansible/openstack folder is reusable for all OpenStack clouds out of the box.
It first creates all resources (security group, network, subnet, router), deploys a new server and then installs and configures T-Ops.
You can have a look at the Playbook and easily adapt the deploy role for other cloud providers.
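Running it follows the standard Ansible workflow; the playbook filename here is hypothetical, so check the cloud/ansible/openstack folder for the actual name:

ansible-playbook -i inventory deploy_tpot.yml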
Please note: Cloud providers usually offer adjusted Debian OS images, which might not be compatible with T-Ops. There is no cloud provider support provided of any kind.
You can find the Terraform configuration in the cloud/terraform folder.
This can be used to launch a virtual machine, bootstrap any dependencies and install T-Ops in a single step.
Configuration for Amazon Web Services (AWS) and Open Telekom Cloud (OTC) is currently included.
This can easily be extended to support other Terraform providers.
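Deployment then follows the standard Terraform workflow (the sub-folder names are assumptions; check cloud/terraform for the actual layout):

cd cloud/terraform/aws   # or the OTC variant
terraform init           # fetch the required providers
terraform apply          # create the VM and install T-Ops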
Please note: Cloud providers usually offer adjusted Debian OS images, which might not be compatible with T-Ops. There is no cloud provider support provided of any kind.
The installation requires very little interaction; only the locale and keyboard settings have to be answered for the basic Linux installation. While the system reboots, maintain the active internet connection. The T-Ops installer will start and ask you for an installation type, a password for the tsec user and credentials for a web user. Everything else will be configured automatically. All docker images and other components will be downloaded. Depending on your network connection and the chosen installation type, the installation may take some time. With 250Mbit down / 40Mbit up the installation is usually finished within 15-30 minutes.
Once the installation is finished, the system will automatically reboot and you will be presented with the T-Ops login screen. On the console you may login with:
- user: [tsec or user] you chose during one of the post install methods
- pass: [password] you chose during the installation
All honeypot services are preconfigured and are starting automatically.
You can log in from your browser and access the Admin UI: https://<your.ip>:64294 or via SSH to access the command line: ssh -l tsec -p 64295 <your.ip>
- user: [tsec or user] you chose during one of the post install methods
- pass: [password] you chose during the installation
You can also log in from your browser and access the Web UI: https://<your.ip>:64297
- user: [user] you chose during the installation
- pass: [password] you chose during the installation
Make sure your system is reachable through a network you suspect intruders in / from (i.e. the internet). Otherwise T-Ops will most likely not capture any attacks other than the ones from your internal network! For starters it is recommended to put T-Ops in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Ops's network interface. However, to avoid fingerprinting you can put T-Ops behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Ops, while allowing access to ports > 64000 only from trusted IPs.
A list of all relevant ports is available as part of the Technical Concept section.
Basically, you can forward as many TCP ports as you want, as glutton & honeytrap dynamically bind any TCP port that is not covered by the other honeypot daemons.
In case you need external Admin UI access, forward TCP port 64294 to T-Ops, see below. In case you need external SSH access, forward TCP port 64295 to T-Ops, see below. In case you need external Web UI access, forward TCP port 64297 to T-Ops, see below.
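As a purely hypothetical sketch of the firewall variant, on an upstream Linux firewall the forwarding could look like this (interface, addresses and the trusted IP are assumptions):

# forward all TCP and UDP ports 1-64000 to the T-Ops host
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1:64000 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1:64000 -j DNAT --to-destination 192.168.1.10
# allow the management ports only from a trusted IP
iptables -A FORWARD -p tcp -s 203.0.113.5 -d 192.168.1.10 -m multiport --dports 64294,64295,64297 -j ACCEPT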
T-Ops requires outgoing git, http and https connections for updates (Debian, Docker, GitHub, PyPi), attack submission (ewsposter, hpfeeds) and CVE / IP reputation translation map updates (logstash, listbot). Ports and availability may vary based on your geographical location. Also, during the first install, outgoing ICMP / TRACEROUTE is required to find the closest and fastest mirror.
For those of you who want to live on the bleeding edge of T-Ops development, we introduced an update feature which will allow you to bring all T-Ops relevant files up to date with the T-Ops master branch. If you made any relevant changes to the T-Ops config files, make sure to create a backup first.
The Update script will:
- mercilessly overwrite local changes to be in sync with the T-Ops master branch
- upgrade the system to the packages available in Debian (Stable)
- update all resources to be in-sync with the T-Ops master branch
- ensure all T-Ops relevant system files will be patched / copied into the original T-Ops state
- restore your custom ews.cfg and HPFEEDS settings from /data/ews/conf
You simply run the update script:
sudo su -
cd /opt/tpot/
./update.sh
Despite all testing efforts, please be reminded that updates may sometimes have unforeseen consequences. Please create a backup of the machine or of the files with the most value to your work.
The system is designed to run without any interaction or maintenance and automatically contributes to the community.
For some this may not be enough. So here some examples to further inspect the system and change configuration parameters.
By default, the SSH daemon allows access on tcp/64295 with a user / password combination and prevents credential brute forcing attempts using fail2ban. This also applies to Admin UI (tcp/64294) and Web UI (tcp/64297) access.
If you do not have an SSH client at hand and still want to access the machine via command line, you can do so by accessing the Admin UI from https://<your.ip>:64294 and entering
- user: [tsec or user] you chose during one of the post install methods
- pass: [password] you chose during the installation
You can also add two factor authentication to Cockpit just by running 2fa.sh on the command line.
Just open a web browser, connect to https://<your.ip>:64297, enter
- user: [user] you chose during the installation
- pass: [password] you chose during the installation
and the Landing Page will automagically load. Now just click on the tool / link you want to start.
The following web based tools are included to improve and ease up daily tasks.
T-Ops is designed to be low maintenance. Basically, there is nothing you have to do but let it run.
If you run into any problems, a reboot may fix it.
If new versions of the components involved appear, new docker images will be created and distributed. New images will be available from docker hub, downloaded automatically to T-Ops and activated accordingly.
T-Ops is provided in order to make it accessible to all interested in honeypots. By default, the captured data is submitted to a community backend. This community backend uses the data to feed Sicherheitstacho.
You may opt out of the submission by removing the # Ewsposter service section from /opt/tpot/etc/tpot.yml:
- Stop T-Ops services:
systemctl stop tpot
- Remove Ewsposter service:
vi /opt/tpot/etc/tpot.yml
- Remove the following lines, save and exit vi (:x!):
# Ewsposter service
ewsposter:
  container_name: ewsposter
  restart: always
  networks:
   - ewsposter_local
  image: "ghcr.io/telekom-security/ewsposter:2006"
  volumes:
   - /data:/data
   - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
- Start T-Ops services:
systemctl start tpot
Data is submitted in a structured ews-format, an XML structure. Hence, you can parse out the information that is relevant to you.
It is encouraged not to disable the data submission as it is the main purpose of the community approach - as you all know sharing is caring 😍
As an opt-in, it is now also possible to share T-Ops data with 3rd party HPFEEDS brokers.
If you want to share your T-Ops data, you simply have to register an account with a 3rd party broker, with its own benefits towards the community. You then run hpfeeds_optin.sh, which will ask for your credentials and automatically update /opt/tpot/etc/tpot.yml to deliver events to your desired broker.
The script can accept a config file as an argument, e.g. ./hpfeeds_optin.sh --conf=hpfeeds.cfg
Your current config will also be stored in /data/ews/conf/hpfeeds.cfg where you can review or change it.
Be sure to apply any changes by running ./hpfeeds_optin.sh --conf=/data/ews/conf/hpfeeds.cfg.
No worries: your old config gets backed up in /data/ews/conf/hpfeeds.cfg.old
Of course you can also rerun the hpfeeds_optin.sh script to change and apply your settings interactively.
As with every development there is always room for improvements ...
Some features may be provided with updated docker images, others may require some hands on from your side.
You are always invited to participate in development on our GitHub page.
- We don't have access to your system. So we cannot remote-assist when you break your configuration. But you can simply reinstall.
- The software was designed with best effort security, not to be in stealth mode. Because then, we probably would not be able to provide those kinds of honeypot services.
- You install and you run within your responsibility. Choose your deployment wisely as a system compromise can never be ruled out.
- Honeypots - by design - should not host any sensitive data. Make sure you don't add any.
- By default, your data is submitted to SecurityMeter. You can disable this in the config. But hey, wouldn't it be better to contribute to the community?
Please report any issues or questions on our GitHub issue list, so the community can participate.
The software is provided as is in a Community Edition format. T-Ops is designed to run out of the box and with zero maintenance involved.
We hope you understand that we cannot provide support on an individual basis. We will try to address questions, bugs and problems on our GitHub issue list.
The software that T-Ops is built on uses the following licenses.
GPLv2: conpot, dionaea, honeysap, honeypy, honeytrap, suricata
GPLv3: adbhoney, elasticpot, ewsposter, fatt, rdpy, heralding, ipphoney, snare, tanner
Apache 2 License: cyberchef, dicompot, elasticsearch, logstash, kibana, docker, elasticsearch-head
MIT license: ciscoasa, glutton
Other: citrixhoneypot, cowrie, mailoney, Debian licensing
Without open source and the fruitful development community (we are proud to be a part of), T-Ops would not have been possible! Our thanks are extended but not limited to the following people and organizations:
- adbhoney
- apt-fast
- ciscoasa
- citrixhoneypot
- cockpit
- conpot
- cowrie
- debian
- dicompot
- dionaea
- docker
- elasticpot
- elasticsearch
- elasticsearch-head
- ewsposter
- fatt
- glutton
- heralding
- honeypy
- honeysap
- honeytrap
- ipphoney
- kibana
- logstash
- mailoney
- medpot
- p0f
- rdpy
- spiderfoot
- snare
- tanner
- suricata
A new version of T-Ops is released about every 6-12 months; development has shifted more and more towards rolling releases and the usage of /opt/tpot/update.sh.
Some of the greatest feedback we have received so far came from one of the Conpot developers:
"[...] I highly recommend T-Ops which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"
And from @robcowart (creator of ElastiFlow):
"#TPot is one of the most well put together turnkey honeypot solutions. It is a must-have for anyone wanting to analyze and understand the behavior of malicious actors and the threat they pose to your organization."