
yet another - too big subrequest response while sending to client - issue #26

Open
yogeshgadge opened this issue Jun 12, 2023 · 3 comments

Comments

@yogeshgadge

yogeshgadge commented Jun 12, 2023

I need to transform the response body, so I am using r.subrequest, which runs into "too big subrequest response while sending to client". Several issues report this error, and the suggested fix seems to be

subrequest_output_buffer_size {size};

In my case I kept increasing this value by trial and error until it worked at 4M.

My question is: what should this value be? Does it depend on the size of the response?
If it does, how can this value be set when we may not know the size upfront?
I understand that setting this value very high is safe in the sense that it works, but at the same time unsafe for resources etc.

I tried using responseBuffer instead of responseBody, but to no avail.

Wondering if I should/shouldn't use subrequest in the first place?

Wondering if this has anything to do with internal; in the subrequest location?

# nginx.conf
location /search-subrequest {
    proxy_pass https://redacted-domain/api/search;
    subrequest_output_buffer_size 4M; # 4M is big
}
location /search {
    js_content main.search;
}

and here is my njs

// search.js

async function search(r) {
    let reply = await r.subrequest('/search-subrequest');
    let bodyRaw = reply.responseBuffer.toString('utf8');
    let body = JSON.parse(bodyRaw);
    // transform body
    r.return(200, JSON.stringify(body));
}
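For illustration, the `// transform body` step could be a small pure function like the one below. The field names (`items`, `id`, `title`) are hypothetical, not from the actual API; this is only a sketch of the shape such a transform might take:

```javascript
// Hypothetical transform: keep only selected fields from each search result.
// The "items" / "id" / "title" field names are assumptions for illustration.
function transformSearchBody(body) {
    const items = Array.isArray(body.items) ? body.items : [];
    return {
        total: items.length,
        items: items.map((it) => ({ id: it.id, title: it.title })),
    };
}
```

Keeping the transform as a pure function (plain input object in, plain object out) also makes it testable outside nginx.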

and here are the errors from curl -I http://localhost/search:

curl: (52) Empty reply from server
[warn] 415#415: *142 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/1/02/0000000021 while reading upstream, client: 127.0.0.1, server: localhost, request: "HEAD / HTTP/1.1", subrequest: "/search-subrequest", upstream: "https://REDACTED-IP:443/api/search", host: "localhost"

[error] 415#415: *142 too big subrequest response while sending to client, client: 127.0.0.1, server: localhost, request: "HEAD / HTTP/1.1", subrequest: "/search-subrequest", upstream: "https://REDACTED-IP:443/api/search", host: "localhost"

@yogeshgadge
Author

yogeshgadge commented Jun 12, 2023

FYI: I switched to the fetch API and I no longer have this issue. But is this the correct way, instead of subrequest?

async function searchUsingFetch(r) {
    let reply = await ngx.fetch('https://redacted-domain/api/search');
    let bodyRaw = await reply.text();
    let body = JSON.parse(bodyRaw);
    // transform body
    r.return(200, JSON.stringify(body));
}
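One caveat worth noting: ngx.fetch has its own response-size limits, controlled by njs directives rather than subrequest_output_buffer_size. A sketch, assuming a reasonably recent njs (the fetch buffer directives were added around njs 0.7.4) and with illustrative values, not recommendations:

```nginx
# Tuning knobs for ngx.fetch response buffering (sketch, values illustrative).
js_fetch_buffer_size 16k;               # chunk size used while reading the response
js_fetch_max_response_buffer_size 4m;   # cap on the whole buffered response body
```

If a fetched body exceeds the max response buffer size, the fetch fails, so the same "how big should this be" question applies here too, just with a different directive.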

@xAt0mZ

xAt0mZ commented Aug 25, 2023

@yogeshgadge if you don't need caching, using ngx.fetch is the way to go, IMO.

However, if you do need caching (because your upstream API is rate-limited, for example), you will need to use subrequest, like so:

# /etc/nginx/conf.d/default.conf 

proxy_cache_path /tmp/nginx-cache/ levels=1:2 keys_zone=cachezone:100m max_size=1g inactive=30d use_temp_path=off;

js_path "/etc/nginx/njs/";
js_import main from search.js;

server {
  server_name _;

  ### example cache configuration
  proxy_cache cachezone;
  proxy_cache_revalidate on;
  proxy_cache_min_uses 1;
  proxy_cache_background_update on;
  proxy_cache_lock on;
  proxy_cache_key $scheme$proxy_host$request_uri;
  proxy_cache_use_stale error updating timeout http_500 http_502 http_503 http_504;
  proxy_cache_valid 301 1h;
  proxy_cache_valid 404 1m;
  ### end of cache configuration

  # public route
  location = /search {
    js_content main.search;
  }

 # internal route
  location /_internal_search {
    internal; # important so nobody can use the route from outside

    subrequest_output_buffer_size 500m; # still have a random value here
    proxy_pass https://redacted-domain/api/search;
    proxy_cache_valid 200 302 5m;
  }
}

With

// /etc/nginx/njs/search.js

async function search(r) {
  const reply = await r.subrequest('/_internal_search');
  const json = JSON.parse(reply.responseText);
  // do things with your JSON

  r.return(200, JSON.stringify(json));
}

export default { search };
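A defensive variant would check the subrequest status before parsing, since a failed upstream can hand back an empty or non-JSON body. The guard below is a sketch of my own (the helper name is hypothetical, not part of the njs API); only the pure helper is shown runnable, the njs wiring is in comments:

```javascript
// Hypothetical guard: validate a subrequest reply before JSON-parsing it.
// Throws a descriptive error instead of letting JSON.parse fail opaquely.
function parseReplyBody(status, text) {
    if (status < 200 || status >= 300) {
        throw new Error("subrequest failed with status " + status);
    }
    return JSON.parse(text);
}

// Inside the njs handler (njs-specific, cannot run outside nginx):
// const reply = await r.subrequest('/_internal_search');
// const json = parseReplyBody(reply.status, reply.responseText);
// r.return(200, JSON.stringify(json));
```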

Note that if you use ngx.fetch() with an HTTPS upstream, you will need to set js_fetch_trusted_certificate, or disable SSL verification with js_fetch_verify off; or per request:

let reply = await ngx.fetch('https://redacted-domain/api/search', { verify: false });

@xAt0mZ

xAt0mZ commented Aug 26, 2023

As a follow-up: it seems using subrequest with caching doesn't work as expected. On cache stale, NJS explodes with

nginx-nginx-1  | 2023/08/26 11:11:24 [error] 28#28: *28 js exception: MemoryError while sending to client
client: 172.23.0.1
server: _
request: "GET /srach HTTP/1.1"
subrequest: "/_internal_search"
upstream: "https://redacted-domain/api/search"
host: "localhost"
referrer: "http://localhost:5173/"

nginx-nginx-1  | 2023/08/26 11:11:45 [error] 28#28: *30 js exception: MemoryError
client: 172.23.0.1
server: _
request: "GET /search HTTP/1.1"
subrequest: "/_internal_search"
host: "localhost"

So stick with the fetch API if you don't need caching.
