Allow users to check ping of instances? #12

Open
a1studmuffin opened this issue Jun 12, 2023 · 7 comments

Comments

@a1studmuffin

One of the best things new users can do is find a Lemmy server with a low ping from their location, as this will make the whole experience faster and more enjoyable for them.

It would be great to encourage users to find a server near them, perhaps by providing a "Check ping" button for each listing which performs a client-side ping to the instance and shows the result.

@maltfield
Owner

That's a great idea, but not something that can be done within a GitHub README.md file. It could be hosted on a static site generated by GitHub Actions, however.

@a1studmuffin
Author

No probs, feel free to close this out. I just thought I'd suggest it, as I went through this recently with two instances and the difference was night and day.

@calculuschild

Could we perhaps add the hosting country/region of each instance's server to the tables?

@maltfield
Owner

@calculuschild would you prefer the emoji flag of the country or the 2-digit ISO abbreviation?

@calculuschild

calculuschild commented Jun 13, 2023

Either way. I'd probably prefer the flag because it's easier to distinguish at a glance. As long as it's clear it's the hosting region, not the language.
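If we do go with flags, no lookup table is needed: a two-letter ISO 3166-1 code maps onto its emoji flag just by shifting each letter into the Unicode regional-indicator range. A minimal sketch in Python (country_code_to_flag is only an illustrative name, not something in the repo):

def country_code_to_flag(iso_code: str) -> str:
    """Convert a 2-letter ISO 3166-1 code (e.g. "DE") to its emoji flag."""
    # Each ASCII letter A-Z maps to a regional indicator symbol
    # (U+1F1E6 .. U+1F1FF); two of them in a row render as a country flag.
    return "".join(chr(ord(c) - ord("A") + 0x1F1E6) for c in iso_code.upper())

print(country_code_to_flag("DE"))  # 🇩🇪
print(country_code_to_flag("US"))  # 🇺🇸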

@ghost

ghost commented Jun 16, 2023

I'm also interested in this. The script could take a command-line argument so it can be run manually.

This script calculates the average latency for a list of domains using the ping command and runs the measurements in parallel. It uses argparse to handle command-line arguments and subprocess to execute ping, with type hints for readability. The script defines the list of domains (DOMAINS), the number of pings per domain (NUM_PINGS), and the delay between parallel job starts (DELAY), then parses the command-line arguments and checks whether the --latency flag was passed.

The ping_domain function runs ping against a single domain with the configured number of pings and parses the output to extract the average round-trip time. When --latency is passed, run_latency_check builds a small command string that re-imports ping_domain and hands the domain list to GNU parallel via subprocess.run(), which starts one job per domain with the configured delay between job starts.

Example of script usage:

python3 script.py --latency

This will calculate the average latency for each domain in the DOMAINS list and run the calculations in parallel.

import argparse
import subprocess

# Domains to measure and tuning knobs for the measurement run.
DOMAINS = ["example.com", "example.org", "example.net"]
NUM_PINGS = 10   # pings sent per domain
DELAY = 60       # seconds GNU parallel waits between starting jobs

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("--latency", action="store_true",
                        help="Calculate average latency for domains")
    return parser.parse_args()

def ping_domain(domain: str) -> float:
    """Ping a domain NUM_PINGS times and return the average RTT in ms."""
    result = subprocess.run(["ping", "-c", str(NUM_PINGS), domain],
                            capture_output=True, text=True)
    # The summary line looks like: rtt min/avg/max/mdev = 9.1/10.2/11.3/0.5 ms
    lines = result.stdout.strip().split("\n")
    avg_line = [line for line in lines if "avg" in line][0]
    return float(avg_line.split("/")[-3])

def run_latency_check(args: argparse.Namespace) -> None:
    # Requires GNU parallel and a Unix-style ping. Each job re-imports this
    # file (assumed to be saved as script.py in the current directory) and
    # pings one domain; GNU parallel substitutes {} with the domain.
    command_str = ("python3 -c 'import sys; from script import ping_domain; "
                   "print(sys.argv[1], ping_domain(sys.argv[1]))' {}")
    subprocess.run(["parallel", "-j0", "--delay", str(DELAY),
                    command_str, ":::", *DOMAINS])

def main():
    args = parse_args()
    if args.latency:
        run_latency_check(args)

if __name__ == "__main__":
    main()
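If GNU parallel isn't installed, the same average-latency measurement can be done with Python's own concurrency. A minimal sketch using concurrent.futures under that assumption (avg_ping_ms is a hypothetical helper equivalent to ping_domain above, and error handling for unreachable domains is omitted):

import subprocess
from concurrent.futures import ThreadPoolExecutor

DOMAINS = ["example.com", "example.org", "example.net"]
NUM_PINGS = 10

def avg_ping_ms(domain: str) -> float:
    # Parse the min/avg/max summary line printed by Unix ping.
    out = subprocess.run(["ping", "-c", str(NUM_PINGS), domain],
                         capture_output=True, text=True).stdout
    summary = next(line for line in out.splitlines() if "avg" in line)
    return float(summary.split("/")[-3])

# Ping all domains concurrently; results come back in the same order as DOMAINS.
with ThreadPoolExecutor(max_workers=len(DOMAINS)) as pool:
    for domain, avg in zip(DOMAINS, pool.map(avg_ping_ms, DOMAINS)):
        print(f"{domain}: {avg:.1f} ms")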

@8ullyMaguire

#!/usr/bin/env python3
import argparse
import json
import asyncio
import aiohttp
import time

from typing import List, Dict

TIME_BETWEEN_REQUESTS = 5
TIME_TOTAL = 60


async def get_latency(session, domain):
    """Time a single HTTPS GET to the domain; return seconds, or inf on failure."""
    try:
        start = time.time()
        if not domain.startswith(("http://", "https://")):
            domain = "https://" + domain
        async with session.get(domain, timeout=aiohttp.ClientTimeout(total=3)) as response:
            end = time.time()
            return end - start
    except (asyncio.TimeoutError, aiohttp.ClientError):
        # Treat timeouts, DNS failures, disconnects, etc. as "infinitely slow"
        # so one dead instance doesn't abort the whole run.
        return float("inf")


def add_latency_to_domain(domain, latency, latencies):
    if domain not in latencies:
        latencies[domain] = []
    latencies[domain].append(latency)
    return latencies


async def measure_latencies_for_domains(session, domains, latencies):
    tasks = []
    for domain in domains:
        tasks.append(get_latency(session, domain))

    results = await asyncio.gather(*tasks)

    for domain, latency in zip(domains, results):
        latencies = add_latency_to_domain(domain, latency, latencies)

    return latencies


async def measure_latencies(domains, duration):
    latencies = {}
    start_time = time.time()
    end_time = start_time + duration

    async with aiohttp.ClientSession() as session:
        while time.time() < end_time:
            latencies = await measure_latencies_for_domains(session, domains, latencies)
            await asyncio.sleep(TIME_BETWEEN_REQUESTS)

    return latencies


def average_latencies(latencies):
    averages = []
    for domain, latency_list in latencies.items():
        avg_latency = sum(latency_list) / len(latency_list)
        averages.append((domain, avg_latency))
    return averages


def sort_latencies(averages):
    return sorted(averages, key=lambda x: x[1])


async def get_latency_report(domains, duration):
    latencies = await measure_latencies(domains, duration)
    averages = average_latencies(latencies)
    return sort_latencies(averages)


def get_instances(data: Dict) -> List[Dict]:
    instances = []
    for instance_details in data["instance_details"]:
        instances.append(instance_details)
    return instances


def get_domains(instances: List[Dict]) -> List[str]:
    return [instance["domain"] for instance in instances]


def load_json_data(filepath: str) -> Dict:
    with open(filepath) as json_data:
        return json.load(json_data)


async def main():
    data = load_json_data('stats.json')
    instances = get_instances(data)
    domains = get_domains(instances)
    report = await get_latency_report(domains, TIME_TOTAL)
    for domain, avg_latency in report:
        print(f"{domain}: {avg_latency:.2f} seconds")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--latency', action='store_true', help='Execute latency measurement')
    args = parser.parse_args()

    if args.latency:
        asyncio.run(main())
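For anyone trying this out: the script only reads data["instance_details"][i]["domain"] from stats.json, so a minimal file for local testing can be generated like this (the domains here are just examples; real stats.json files carry many more fields per instance):

import json

# Write a minimal stats.json containing only the field the script above reads.
sample = {
    "instance_details": [
        {"domain": "lemmy.ml"},
        {"domain": "lemmy.world"},
        {"domain": "beehaw.org"},
    ]
}

with open("stats.json", "w") as f:
    json.dump(sample, f, indent=2)

Running the script with --latency then prints each domain with its average response time, fastest first.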
