Compressors generally operate faster if they are provided with large chunks of data.
There is already a small buffer, but it's only used for content-type sniffing before compression starts (and is bypassed entirely once compression has started).
We could instead replace it with a buffer of configurable size (by default larger than the current content-type sniffing one) that would also be used during compression, so that data is compressed in larger chunks.
Some compressors may do the same internally, so we would need to avoid double buffering in this case.
Such a buffer could later also be used to capture the full uncompressed payload of (small) responses, which, paired with the resulting compressed payload, could be cached to avoid recompressing the same data over and over.