Hi team,
I would appreciate some help with my current setup. My goal is to use the TCP input plugin to receive Syslog messages and forward them to Azure Event Hub via the HTTP output plugin. Everything works fine as long as the throughput stays relatively low.
During my tests with multiple TCP connections and dummy messages (~512 KB each) sent to Fluent Bit, I ran into problems. After a few seconds, the HTTP output plugin reports an error returned by Event Hub: the payload exceeds the maximum allowed size. Since Event Hub enforces a 1 MB payload limit, I need to cap the TCP input's memory buffer at 1 MB so that a single flush never exceeds that limit. While that buffer is waiting to be flushed, I want to keep accepting incoming messages and spool them to disk to prevent, or at least minimize, data loss. Unfortunately, I haven't been able to figure out the right settings to make this work (see my attempt after the config below).
I've also tried setting storage.total_limit_size in different sections, but it doesn't seem to restrict the memory buffer of either the TCP input or the HTTP output.
This is similar to issue #8460, except that Kafka appears to handle larger payloads.
My config:
[SERVICE]
    Flush                     1
    Daemon                    Off
    Log_Level                 debug
    Parsers_File              parsers.conf
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 256M

[INPUT]
    Name         tcp
    Listen       0.0.0.0
    Port         5140
    Format       none
    Separator    \r
    storage.type filesystem

[OUTPUT]
    Name   stdout
    Match  *
    Format json

[OUTPUT]
    Name       http
    Match      *
    Host       <PLACEHOLDER_SERVICEBUS_HOST>
    Port       443
    URI        <PLACEHOLDER_URI>
    Format     json
    Header     Authorization <PLACEHOLDER_SAS_TOKEN>
    tls        On
    tls.verify Off
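For reference, this is the direction I have been experimenting in, pieced together from the buffering and storage docs. It is an untested sketch: mem_buf_limit, storage.max_chunks_up, storage.total_limit_size and the 1M / 5G values are my guesses at the relevant knobs, not settings I have confirmed to work:

[SERVICE]
    Flush                     1
    Daemon                    Off
    Log_Level                 debug
    Parsers_File              parsers.conf
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          off
    storage.backlog.mem_limit 256M
    # Guess: cap how many chunks are held in memory when filesystem storage is used
    storage.max_chunks_up     8

[INPUT]
    Name          tcp
    Listen        0.0.0.0
    Port          5140
    Format        none
    Separator     \r
    storage.type  filesystem
    # Guess: with filesystem storage enabled, hitting this limit should make
    # new data go to disk chunks instead of pausing the input
    mem_buf_limit 1M

[OUTPUT]
    Name                     http
    Match                    *
    Host                     <PLACEHOLDER_SERVICEBUS_HOST>
    Port                     443
    URI                      <PLACEHOLDER_URI>
    Format                   json
    Header                   Authorization <PLACEHOLDER_SAS_TOKEN>
    tls                      On
    tls.verify               Off
    # Guess: cap the on-disk backlog kept for this output
    storage.total_limit_size 5G

Even if this behaves as I hope, I am not sure mem_buf_limit actually bounds the size of a single HTTP request body, which seems to be what Event Hub is rejecting.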
Maybe something like this exists in Fluent Bit? From the Fluentd documentation on buffer parameters:

buffer_chunk_limit
The maximum size allowed for a chunk (default: 8MB). If a chunk exceeds the limit, it is automatically flushed to the output queue.
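To make the analogy concrete, this is the kind of cap I mean on the Fluentd side (illustrative only; in current Fluentd the parameter is called chunk_limit_size inside a <buffer> section, and the endpoint here is a placeholder):

<match **>
  @type http
  endpoint <PLACEHOLDER_URI>
  <buffer>
    # Flush each chunk once it reaches ~1 MB, so no single request exceeds the limit
    chunk_limit_size 1m
  </buffer>
</match>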
Thank you.