
Question How to increase HTTP body size limits?

Sergio Manzi

Regular Pleskian
In one of my virtual hosts I have a large file (6GB) that I need to make downloadable.

Apparently Plesk, "as is", limits the body size to 1GB and, again apparently, it is Apache that answers with a truncated body of about 1GB; nginx then gets into trouble, and in its log I find:
Code:
18040#0: *24071 upstream prematurely closed connection while reading upstream

So I added the "LimitRequestBody 0" in my domain "Apache & nginix settings": no way, still limited to 1 GB.

I tried increasing the nginx body size limit too, using the "client_max_body_size 10G;" directive, and thus stumbled upon the https://kb.plesk.com/en/122689 issue. I followed its instructions, but still nothing.

I also tried forcing "LimitRequestBody 0" in my vhost's httpd.conf, and still nothing...
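For reference, what I put in the "Additional directives" boxes of the domain's Apache & nginx Settings boils down to this (just a recap of the attempts above, in the syntax each server expects; the comments are mine):
Code:
# Additional Apache directives (HTTP and HTTPS): 0 means "no request body limit"
LimitRequestBody 0

# Additional nginx directives
client_max_body_size 10g;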

Can somebody please give me a hand before I start crying and/or throwing my keyboard out of my window?

TIA!!

Sergio
 
I am not sure the issue comes from Apache/nginx only. How do you try to download the file? Do you have a script for it? Try these two links, even if they are old:
http://stackoverflow.com/questions/432713/serving-large-files-with-php
http://teddy.fr/2007/11/28/how-serve-big-files-through-php/

As far as I remember from a client project, the programmers discussed in a meeting that they could serve large files as a stream, but I was not directly involved in that project. Chunking the files seems like a good solution to me.
 
Hi Ivalics, thanks for answering!

Yes, I think it is an Apache issue because of the following findings while I was trying to download a 1.4GB file (host names and IP addresses have been edited):

Header received by my Firefox client:

[Attached screenshot of the response headers seen by Firefox: how-to-increase-http-body-size-limits-firefox-header-2.png]


access_ssl_log:
79.40.xxx.yyy - - [02/Aug/2016:11:38:36 +0000] "GET /test/testfile HTTP/1.0" 200 1085781229 "https://host.example.com/test/" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0"

proxy_error_log:
2016/08/02 11:40:31 [error] 30972#0: *59526 upstream prematurely closed connection while reading upstream, client: 79.40.xxx.yyy, server: host.example.com, request: "GET /test/testfile HTTP/2.0", upstream: "https://AAA.BBB.CCC.DDD:7081/test/testfile", host: "host.example.com", referrer: "https://host.example.com/test/"


I interpret the above as nginx answering the client with the correct file size (146800640), while Apache delivers only about 1GB (1085781229 bytes) to nginx; but of course my interpretation may be way off!!

===

Please note: these are static files, so no PHP is involved.

Thanks!

Sergio
 
Hello, Ivalics,

... at about the same size. Here are the results of 4 trials with Firefox, Chrome and Internet Explorer:
Code:
1,083,616,548 FF47 (1)
1,083,883,492 FF47 (2)
1,083,777,776 CR51
1,082,992,256 IE11

The curious thing is that with Internet Explorer, when the download was interrupted, I was offered the option to "resume" it, and having done that the file downloaded completely. I never noticed IE had such a feature... I suppose it asks for a different range of bytes...
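(If anyone wants to reproduce that resume behaviour outside a browser, curl should be able to do the same thing via a Range request; a hypothetical command, reusing the edited host name from my logs above:)
Code:
# -C - makes curl look at the partially downloaded file and resume
# from that offset by sending a matching Range header
curl -C - -o testfile https://host.example.com/test/testfile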
 
To me it seems it is not the same size. Small differences, indeed, but there is a difference. For me this means that Apache has some time setting that hits a timeout. I am just trying to think logically: if the size setting were the issue, the size would be the same each time.
Why not try to pass it through a script and chunk it into pieces?
 
Agreed (and I hate when things are "about" the same...), but it could also come down to different behaviours in the flushing of the last buffer...

I don't think it is time dependent, because the same behaviour occurs both from my location, where I have 100 Mb/s download bandwidth and the download is therefore quite fast, and from another location where the download speed is much lower and it takes far longer to fetch the same "about 1GB" fragment.

Have you noticed the size reported in access_ssl_log, 1085781229 bytes?
 
I suggest trying to implement the small PHP script from the link: chunk the file and check whether it goes over 1GB; then you will be a step closer to understanding what is happening :)
 
OK, I will do that (later, as now I have to go out...) as a "debugging action" and temporary workaround, but to be honest I would like my server to serve static files as static files and not through a script...

Thanks again, Ivalics!
 
If it works, you know how to go forward :) Step by step is a big step toward the future.
 
Hi, Ivalics!

I made the test we agreed on and, of course, it works without problems. I had to adapt the "chunked read" function you pointed me to by adding a minimum of relevant HTTP headers, so the code I used is as follows:
Code:
<?php
$filename = 'testfile';
$urlencoded_filename = urlencode($filename);
$filesize = filesize($filename);

// Send the right headers
header("Content-Length: {$filesize}");
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename*=UTF-8''{$urlencoded_filename}");

$result = readfile_chunked($filename);

// Read a file and send its content to the client chunk by chunk
function readfile_chunked($filename, $retbytes = true) {
    define("CHUNK_SIZE", 1024 * 1024); // Size (in bytes) of each chunk
    $buffer = "";
    $cnt = 0;
    $handle = fopen($filename, "rb");
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return the number of bytes delivered, like readfile() does
    }
    return $status;
}


Besides that, I did some further testing, trying to serve my "huge files" directly with nginx (turning on the Smart static files processing and Serve static files directly by nginx options in my vhost).

That too works (and it would be optimal, as nginx is much better than Apache at serving static files), but unfortunately it is not feasible for me, as it has some other nasty side effects (which I'd like to solve sooner or later as well), particularly the fact that with "Smart static files processing" activated:
  • Password-protected directories do not work
  • I have no way to get a directory listing (the equivalent of an "Options +Indexes")
  • It is incompatible with Joomla sites (due to the handling of the "/" and null URLs)
... but that's another story, for another day! :)
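(Just to sketch the nginx idea for anyone reading along: conceptually, what it amounts to is an nginx location that serves those files from disk instead of proxying them to Apache. The location and document root below are only placeholders of mine, not something Plesk generated:)
Code:
# Hypothetical additional nginx directive: serve the directory with the
# huge files straight from disk, bypassing the Apache backend entirely
location ^~ /test/ {
    root /var/www/vhosts/example.com/httpdocs;
}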

So I think we can conclude that Apache (or its interaction with nginx) is at the root of the issue.

I also tried raising Apache's TimeOut up to 240 seconds, which is more than I need to download the whole 1.4GB file, but that didn't solve the issue.
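(To be clear, that was just the stock Apache core directive, added to the additional Apache directives for the domain:)
Code:
TimeOut 240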

So, for the time being, I'd really be happy if we could find a way to directly serve my static files through Apache... :rolleyes:

Thanks,

Sergio

P.S.: in the meantime I also realized that the LimitRequestBody parameter I cited in my OP is not relevant as it is used to limit request body sizes, not responses...
 
I'll try disabling nginx tonight (Central European Time) as this will have potential impact on all my domains and I want to do that at low traffic time. Then I'll let you know...

Anyway, I don't think this will be a "definitive" answer, as I suppose the Apache configuration used when Apache is the sole web server will differ from the one used when it is proxied by nginx (the issue could lie only in the way Apache is configured when it acts as the nginx backend), but I agree, that too will be a step forward!

I've read the Stack Exchange article and yes, the situation is very similar. I think this answer is interesting:
I think that error from Nginx is indicating that the connection was closed by your nodejs server (i.e., "upstream"). How is nodejs configured?
In our case "usptream" is Apache, and hence I think we have one more hint about Apache configuration being at the root of the issue.

Thanks again for your help!
 
OK, tested, it works with Apache only! 1.4GB file completely downloaded.


[Attached screenshot of the response headers for the Apache-only test: how-to-increase-http-body-size-limits-firefox-header-3.png]


So...:
  • Nginx alone: works!
  • Apache alone: works!
  • Apache + Nginx: does not work

... therefore I would say that there are very good odds that the problem lies in the Apache <--> nginx interface...


P.S.: If you notice, there are two differences from the headers I initially reported: I switched from https to http and renamed the file from testfile to testfile.zip; that doesn't change a thing, same behavior.
 
Decided to sign up just to reply to this with a fix as it's been bugging me for weeks!

Managed to sort it by first following the link above:

https://kb.plesk.com/en/122689

But as well as:

client_max_body_size 5120m;

I also had to add:

proxy_max_temp_file_size 5120m;

to the additional nginx directives for the site.
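So, putting the two together, the additional nginx directives for the site ended up looking roughly like this (same values as above, with the trailing semicolons nginx requires):
Code:
client_max_body_size     5120m;
proxy_max_temp_file_size 5120m;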

I also ended up using the readfile_chunked function above to get around general PHP memory limits, after I'd got the above working.
 
@NeilW, thank you for your feedback about this issue.

Unfortunately I'm quite busy at the moment and can't do further testing right now (I will probably get to it during the Christmas holidays...).

The usage of the proxy_max_temp_file_size directive (not mentioned in https://kb.plesk.com/en/122689) sounds interesting and is in line with my previous observation that it is the Apache + nginx combination that does not work.
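(For what it's worth, if I read the nginx documentation correctly the default for proxy_max_temp_file_size is 1024m, which would fit the "cut off at about 1GB" symptom rather nicely. A couple of untested alternatives I'd like to try, purely as a sketch of mine and not something from the KB article:)
Code:
# Untested sketch: instead of raising the limit, stop nginx from spooling
# the proxied response to a temporary file at all...
proxy_max_temp_file_size 0;
# ...or disable response buffering entirely, so nginx streams straight
# from Apache to the client
#proxy_buffering off;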

BTW, https://kb.plesk.com/en/122689 (dated Aug 20, 2014, last reviewed on May 14, 2016) says that "This was considered an internal software issue with internal ID #PPPM-1914" and that "The issue will be fixed in future updates.", but it doesn't seem that the issue has been addressed yet: that's a real pity...

Thanks again, Sergio
 