[BUG] Web server listening port configuration is not being honored #733

abilashmram opened this issue Dec 17, 2024 · 0 comments

Describe the bug
The web server still listens on port 8080 instead of the port configured by the user (8081).
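
For reference, `-p 8081:8081 -p 8086:8086` in the docker run command below only publishes host ports to the same container ports; it does not change the port the Scrutiny web server binds to inside the container. If the goal is only to reach the UI on host port 8081 while the container keeps its default 8080, publishing host 8081 to container 8080 would be enough. A compose-style sketch of that mapping (an illustration only, not the reporter's actual setup; the equivalent docker run flag would be `-p 8081:8080`):

```yaml
# Hypothetical compose sketch: reach the UI on host port 8081 while the
# container keeps listening on its default web port 8080.
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    ports:
      - "8081:8080"   # host:container
      - "8086:8086"
```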

Expected behavior
The port configured in the Docker setup (8081) should be used.
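
If the web server itself is expected to bind to 8081 inside the container, the log below suggests why it does not: `No configuration file found at /opt/scrutiny/config/scrutiny.yaml. Using Defaults.`, and the dumped defaults include `"listen":{"host":"0.0.0.0","port":"8080"}`. A minimal scrutiny.yaml sketch, assuming the file is read from the mounted config directory and using the keys shown in that defaults dump:

```yaml
# /opt/scrutiny/config/scrutiny.yaml -- minimal sketch; key names mirror the
# defaults dump in the web log below (web.listen.port defaults to 8080).
web:
  listen:
    host: 0.0.0.0
    port: 8081   # desired listen port instead of the default 8080
```

With that in place, the existing `-p 8081:8081` mapping would line up with the container's listen port.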

Log Files
abi@ubu:/container_cfg/scrutiny$ docker run -it --rm -p 8081:8081 -p 8086:8086 \
  -v `pwd`/scrutiny:/opt/scrutiny/config \
  -v `pwd`/influxdb2:/opt/scrutiny/influxdb \
  -v /run/udev:/run/udev:ro \
  -e DEBUG=true \
  -e SCRUTINY_LOG_FILE='/tmp/web.log' \
  --cap-add SYS_RAWIO \
  --device=/dev/sda \
  --device=/dev/sdb \
  --name scrutiny \
  ghcr.io/analogj/scrutiny:master-omnibus
Unable to find image 'ghcr.io/analogj/scrutiny:master-omnibus' locally
master-omnibus: Pulling from analogj/scrutiny
2d429b9e73a6: Already exists
5885402cb5ae: Pull complete
cb40161db1a3: Pull complete
46a804d43886: Pull complete
2ace9a6c2e19: Pull complete
df8f88c1a4c5: Pull complete
b26e6ce8434f: Pull complete
75782d4e94e5: Pull complete
79e281688b65: Pull complete
Digest: sha256:d643deeb1163f4ad425e56eeb0328bbdb7ccfe60335e94a23a477cee5f536a96
Status: Downloaded newer image for ghcr.io/analogj/scrutiny:master-omnibus
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/01-timezone
cont-init: info: /etc/cont-init.d/01-timezone exited 0
cont-init: info: running /etc/cont-init.d/50-cron-config
cont-init: info: /etc/cont-init.d/50-cron-config exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun collector-once (no readiness notification)
services-up: info: copying legacy longrun cron (no readiness notification)
services-up: info: copying legacy longrun influxdb (no readiness notification)
services-up: info: copying legacy longrun scrutiny (no readiness notification)
s6-rc: info: service legacy-services successfully started
starting cron
waiting for influxdb
waiting for scrutiny service to start
influxdb config file already exists. skipping.
starting influxdb
influxdb not ready
scrutiny api not ready
2024-12-17T19:29:06.066022Z info Welcome to InfluxDB {"log_id": "0tXXNK4G000", "version": "v2.2.0", "commit": "a2f8538837", "build_date": "2022-04-06T17:36:40Z"}
2024-12-17T19:29:06.066962Z info Resources opened {"log_id": "0tXXNK4G000", "service": "bolt", "path": "/opt/scrutiny/influxdb/influxd.bolt"}
2024-12-17T19:29:06.067016Z info Resources opened {"log_id": "0tXXNK4G000", "service": "sqlite", "path": "/opt/scrutiny/influxdb/influxd.sqlite"}
2024-12-17T19:29:06.068835Z info Checking InfluxDB metadata for prior version. {"log_id": "0tXXNK4G000", "bolt_path": "/opt/scrutiny/influxdb/influxd.bolt"}
2024-12-17T19:29:06.068886Z info Using data dir {"log_id": "0tXXNK4G000", "service": "storage-engine", "service": "store", "path": "/opt/scrutiny/influxdb/engine/data"}
2024-12-17T19:29:06.068897Z info Compaction settings {"log_id": "0tXXNK4G000", "service": "storage-engine", "service": "store", "max_concurrent_compactions": 6, "throughput_bytes_per_second": 50331648, "throughput_bytes_per_second_burst": 50331648}
2024-12-17T19:29:06.068904Z info Open store (start) {"log_id": "0tXXNK4G000", "service": "storage-engine", "service": "store", "op_name": "tsdb_open", "op_event": "start"}
2024-12-17T19:29:06.074222Z info index opened with 8 partitions {"log_id": "0tXXNK4G000", "service": "storage-engine", "index": "tsi"}
2024-12-17T19:29:06.074307Z info index opened with 8 partitions {"log_id": "0tXXNK4G000", "service": "storage-engine", "index": "tsi"}
2024-12-17T19:29:06.074682Z info Reading file {"log_id": "0tXXNK4G000", "service": "storage-engine", "engine": "tsm1", "service": "cacheloader", "path": "/opt/scrutiny/influxdb/engine/wal/5d15e97c3eee7c6c/autogen/2/_00001.wal", "size": 11857}
2024-12-17T19:29:06.074771Z info Reading file {"log_id": "0tXXNK4G000", "service": "storage-engine", "engine": "tsm1", "service": "cacheloader", "path": "/opt/scrutiny/influxdb/engine/wal/5d15e97c3eee7c6c/autogen/1/_00001.wal", "size": 36375}
2024-12-17T19:29:06.075159Z info Opened shard {"log_id": "0tXXNK4G000", "service": "storage-engine", "service": "store", "op_name": "tsdb_open", "index_version": "tsi1", "path": "/opt/scrutiny/influxdb/engine/data/5d15e97c3eee7c6c/autogen/2", "duration": "5.271ms"}
2024-12-17T19:29:06.075985Z info Opened shard {"log_id": "0tXXNK4G000", "service": "storage-engine", "service": "store", "op_name": "tsdb_open", "index_version": "tsi1", "path": "/opt/scrutiny/influxdb/engine/data/5d15e97c3eee7c6c/autogen/1", "duration": "6.093ms"}
2024-12-17T19:29:06.076003Z info Open store (end) {"log_id": "0tXXNK4G000", "service": "storage-engine", "service": "store", "op_name": "tsdb_open", "op_event": "end", "op_elapsed": "7.100ms"}
2024-12-17T19:29:06.076012Z info Starting retention policy enforcement service {"log_id": "0tXXNK4G000", "service": "retention", "check_interval": "30m"}
2024-12-17T19:29:06.076020Z info Starting precreation service {"log_id": "0tXXNK4G000", "service": "shard-precreation", "check_interval": "10m", "advance_period": "30m"}
2024-12-17T19:29:06.076682Z info Starting query controller {"log_id": "0tXXNK4G000", "service": "storage-reads", "concurrency_quota": 1024, "initial_memory_bytes_quota_per_query": 9223372036854775807, "memory_bytes_quota_per_query": 9223372036854775807, "max_memory_bytes": 0, "queue_size": 1024}
2024-12-17T19:29:06.079438Z info Configuring InfluxQL statement executor (zeros indicate unlimited). {"log_id": "0tXXNK4G000", "max_select_point": 0, "max_select_series": 0, "max_select_buckets": 0}
2024-12-17T19:29:06.081938Z info Listening {"log_id": "0tXXNK4G000", "service": "tcp-listener", "transport": "http", "addr": ":8086", "port": 8086}
scrutiny api not ready
starting scrutiny
2024/12/17 19:29:11 No configuration file found at /opt/scrutiny/config/scrutiny.yaml. Using Defaults.


[scrutiny ASCII art banner]
github.com/AnalogJ/scrutiny dev-0.8.1

Start the scrutiny server
time="2024-12-17T19:29:11Z" level=debug msg="{"log":{"file":"/tmp/web.log","level":"DEBUG"},"notify":{"urls":[]},"web":{"database":{"location":"/opt/scrutiny/config/scrutiny.db"},"influxdb":{"bucket":"metrics","host":"localhost","init_password":"password12345","init_username":"admin","org":"scrutiny","port":"8086","retention_policy":true,"scheme":"http","tls":{"insecure_skip_verify":false},"token":"scrutiny-default-admin-token"},"listen":{"basepath":"","host":"0.0.0.0","port":"8080"},"src":{"frontend":{"path":"/opt/scrutiny/web"}}}}" type=web
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

time="2024-12-17T19:29:11Z" level=info msg="Trying to connect to scrutiny sqlite db: /opt/scrutiny/config/scrutiny.db\n" type=web
time="2024-12-17T19:29:11Z" level=info msg="Successfully connected to scrutiny sqlite db: /opt/scrutiny/config/scrutiny.db\n" type=web
time="2024-12-17T19:29:11Z" level=debug msg="InfluxDB url: http://localhost:8086" type=web
time="2024-12-17T19:29:11Z" level=info msg="InfluxDB certificate verification: true\n" type=web
time="2024-12-17T19:29:11Z" level=debug msg="Determine Influxdb setup status..." type=web
time="2024-12-17T19:29:11Z" level=info msg="Database migration starting. Please wait, this process may take a long time...." type=web
time="2024-12-17T19:29:11Z" level=info msg="Database migration completed successfully" type=web
time="2024-12-17T19:29:11Z" level=info msg="SQLite global configuration migrations starting. Please wait...." type=web
time="2024-12-17T19:29:11Z" level=info msg="SQLite global configuration migrations completed successfully" type=web
time="2024-12-17T19:29:11Z" level=debug msg="basepath: " type=web
[GIN-debug] GET /api/health --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.HealthCheck (5 handlers)
[GIN-debug] POST /api/health/notify --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.SendTestNotification (5 handlers)
[GIN-debug] POST /api/devices/register --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.RegisterDevices (5 handlers)
[GIN-debug] GET /api/summary --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.GetDevicesSummary (5 handlers)
[GIN-debug] GET /api/summary/temp --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.GetDevicesSummaryTempHistory (5 handlers)
[GIN-debug] POST /api/device/:wwn/smart --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.UploadDeviceMetrics (5 handlers)
[GIN-debug] POST /api/device/:wwn/selftest --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.UploadDeviceSelfTests (5 handlers)
[GIN-debug] GET /api/device/:wwn/details --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.GetDeviceDetails (5 handlers)
[GIN-debug] DELETE /api/device/:wwn --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.DeleteDevice (5 handlers)
[GIN-debug] GET /api/settings --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.GetSettings (5 handlers)
[GIN-debug] POST /api/settings --> github.com/analogj/scrutiny/webapp/backend/pkg/web/handler.SaveSettings (5 handlers)
[GIN-debug] GET /web/*filepath --> github.com/gin-gonic/gin.(*RouterGroup).createStaticHandler.func1 (5 handlers)
[GIN-debug] HEAD /web/*filepath --> github.com/gin-gonic/gin.(*RouterGroup).createStaticHandler.func1 (5 handlers)
[GIN-debug] GET / --> github.com/analogj/scrutiny/webapp/backend/pkg/web.(*AppEngine).Setup.func1 (5 handlers)
[GIN-debug] Listening and serving HTTP on 0.0.0.0:8080
time="2024-12-17T19:29:16Z" level=info msg="127.0.0.1 - ad3d37225942 [17/Dec/2024:19:29:16 +0000] "HEAD /api/health" 200 0 "" "curl/7.88.1" (1ms)" clientIP=127.0.0.1 hostname=ad3d37225942 latency=1 method=HEAD path=/api/health referer= respLength=0 statusCode=200 type=web userAgent=curl/7.88.1
time="2024-12-17T19:29:16Z" level=debug bodyType=response clientIP=127.0.0.1 hostname=ad3d37225942 latency=1 method=HEAD path=/api/health referer= respLength=0 statusCode=200 type=web userAgent=curl/7.88.1
starting scrutiny collector (run-once mode. subsequent calls will be triggered via cron service)
2024/12/17 19:29:16 No configuration file found at /opt/scrutiny/config/collector.yaml. Using Defaults.


[scrutiny ASCII art banner]
AnalogJ/scrutiny/metrics dev-0.8.1

DEBU[0000] {
  "allow_listed_devices": [],
  "api": {
    "endpoint": "http://localhost:8080"
  },
  "commands": {
    "metrics_info_args": "--info --json",
    "metrics_scan_args": "--scan --json",
    "metrics_smart_args": "--xall --json",
    "metrics_smartctl_bin": "smartctl",
    "metrics_smartctl_wait": 0
  },
  "devices": [],
  "host": {
    "id": ""
  },
  "log": {
    "file": "",
    "level": "DEBUG"
  }
} type=metrics
INFO[0000] Verifying required tools type=metrics
INFO[0000] Executing command: smartctl --scan --json type=metrics
{
"json_format_version": [


Please also provide the output of docker info
docker info
Client: Docker Engine - Community
 Version: 27.4.0
 Context: default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version: v0.19.2
    Path: /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version: v2.31.0
    Path: /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 16
  Running: 16
  Paused: 0
  Stopped: 0
 Images: 17
 Server Version: 27.4.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 88bf19b2105c8b17560993bee28a01ddc2f97182
 runc version: v1.2.2-0-g7cb3632
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.8.0-50-generic
 Operating System: Ubuntu 24.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 31.12GiB
 Name: ubu
 ID: 85091491-425e-4b4b-ab74-b190641542bb
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
