
Issue Nginx Caching - x-cache-status: BYPASS (working but not serving from cache)

Thanks for the reply. Lots of good info here. I guess what I'm trying to say is that, on a perfectly configured server with all the URI parameters covered, Nginx should still serve content from memory?

I guess we need a way to verify that Nginx is serving cached content from memory rather than from disk? Does such a thing exist?
 
@Laurence

With respect to this statement and question:

I guess we need a way to verify that Nginx is serving cached content from memory rather than from disk? Does such a thing exist?

I have to say "humbug" with respect to all forms of caching... and I will explain why.

Plain old (static) HTML does not require caching, with the exception of browser-based or some other form of client-side caching (read: why bother to serve identical requests over and over again? Just enforce browser or client-side caching as far as possible!).
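For instance, a location block along these lines (a minimal sketch; the file extensions and the lifetime are just illustrative, not a drop-in config) is all it takes to have Nginx enforce that client-side caching for static assets:

    location ~* \.(css|js|png|jpe?g|gif|svg|ico|woff2?)$ {
        expires 30d;        # sets both Expires and Cache-Control: max-age
        access_log off;     # optional: no point in logging every static hit
    }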

As a logical result, any form of disk-based caching, serving static pages that are the result of rendering dynamic pages, is just as good as memory-based caching (and the operating system's page cache will keep frequently read cache files in memory anyway).

Sure, there are a lot of factors to consider, but those factors are not really related to caching as an individual topic:

- CDN: helps to reduce the time to serve requests, simply by bringing the dynamic or static content (most CDNs are inherently caching) closer to the location of the request,
- Application structure: almost none of the common applications are written in plain old (static) HTML, which creates the need for caching,
- Browser behaviour: not every browser behaves decently when it comes to browser/client-side caching,

and so on and so on.

In short, in a perfect world with perfectly designed infrastructure, one does not need the patch offered by caching mechanisms.

However, the world is not perfect... and hence we need caching mechanisms: this "not so perfect" world is characterized by code rendering engines, such as PHP.

In essence, there is no clear advantage of memory-based caching over disk-based caching, provided that one has static files that can be served directly on request.

Simply stated, there is no need to put a static file into memory (read: it is not effective) AND thereby consume memory resources that could also be used to render dynamic pages.

In my humble opinion, it is a bad thing to hammer memory with requests... if one can also serve static files with a disk-based caching mechanism.

This is also the Nginx philosophy of caching: only use memory when required.
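To illustrate that philosophy: for plain static files the directives below (a sketch with illustrative values) lean on the kernel rather than on an application-level memory cache; sendfile lets the kernel push file data straight to the socket, and the contents of hot files stay in the OS page cache by themselves.

    sendfile   on;    # kernel copies file data directly to the socket
    tcp_nopush on;    # send response headers and the start of the file together

    # cache file descriptors and metadata only, not file contents;
    # hot file contents remain in the OS page cache anyway
    open_file_cache          max=2000 inactive=20s;
    open_file_cache_valid    60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;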

Actually, Nginx is rather sophisticated when it comes to caching: when configured properly, Nginx hinges on three effective starting points:

1 - serve static files AND HTML directly (read: no request is passed to Apache, hence no consumption of resources by the memory-hungry Apache),
2 - serve static content, rendered from dynamic pages, from a disk-based cache,
3 - serve dynamic content, if any is still left, from a memory-based cache,

and you can imagine by now that "proper configuration" means that point 3 is almost never reached.
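A rough sketch of what such a configuration can look like (all names, paths and lifetimes here are assumptions for the sake of the example, not a copy of anyone's actual config). Note that only the keys_zone, holding the cache keys and metadata, lives in shared memory, while the cached responses themselves are written to disk:

    # http context
    fastcgi_cache_path /var/cache/nginx/site levels=1:2
                       keys_zone=sitecache:10m max_size=512m inactive=60m;

    server {
        listen      80;
        server_name example.com;                            # assumed name
        root        /var/www/vhosts/example.com/httpdocs;   # assumed docroot

        # point 1: static files and plain HTML straight from disk, no backend
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # points 2 and 3: rendered pages served from the disk-based cache,
        # passed on to PHP-FPM only on a cache miss
        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass  unix:/run/php-fpm.sock;            # assumed socket path
            fastcgi_cache       sitecache;
            fastcgi_cache_key   "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 301 302 10m;
            # exposes HIT / MISS / BYPASS, the header from the thread title
            add_header X-Cache-Status $upstream_cache_status;
        }
    }

Requesting the same page twice and checking the X-Cache-Status response header is the usual way to see whether a page was served from this cache at all.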

At least, that is the theory... but daily practice can only coincide with the theory if there is a proper Nginx config AND a set of proper applications.

Again, it has to be stated that the world is "not so perfect", and I will give an example.

Consider, for example, a default WordPress instance, containing a whole bunch of (default) PHP files, most of which are involved whenever one page of the WP instance is requested in some browser somewhere.

The end result of that one-page request can be cached via Nginx, while the underlying (nested) PHP code is (mostly) cached via OPcache.

Ehm, two caching mechanisms, many (nested) PHP files, many potential variables in the PHP code, many potential outcomes of that one page request?

Yes, the design structure of WordPress makes any form of caching rather difficult, mostly ineffective and sometimes even impossible (consider the wp-admin section).
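That difficulty shows up directly in the cache configuration: a typical WordPress setup has to skip the page cache for the wp-admin area and for logged-in or commenting visitors, and requests matching those rules are exactly the ones a configuration like this reports as x-cache-status: BYPASS. A common pattern (again a sketch, not necessarily how your server is set up) looks like this:

    # server context: decide per request whether the cache must be skipped
    set $skip_cache 0;

    if ($request_uri ~* "/wp-admin/|/wp-login\.php|/xmlrpc\.php") {
        set $skip_cache 1;
    }
    if ($http_cookie ~* "wordpress_logged_in|wp-postpass|comment_author") {
        set $skip_cache 1;
    }

    location ~ \.php$ {
        fastcgi_cache_bypass $skip_cache;   # do not answer these from cache
        fastcgi_no_cache     $skip_cache;   # and do not store them either
        # ... fastcgi_pass / fastcgi_cache directives as in the earlier sketch
    }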

Now, consider what would happen if you put the entire WP site into (static) HTML: no need to cache any request, just serve HTML content directly (see point 1).

But we do not want that: we want WordPress, we want to change pages ourselves (as opposed to adding or editing HTML directly), and we want to use WordPress as a "proper CMS".

So, in the long run, we keep creating monsters of applications, at least from the perspective of caching.

The whole point is: if we were able to redesign applications as sets of static HTML pages, then caching mechanisms would barely add value.

But that is not possible, so we keep on patching the inefficiency of applications, web servers etc. by simply introducing some form of caching.

And that is essentially the same as a "dirty work-around": one does not deal with the root cause of the problem, one simply works around some (other) issue.


In conclusion, caching is not the holy grail: all forms of caching mechanisms are essentially a patch for some inefficiency in serving requests.

It is often better to tackle the root causes of the problem than to introduce yet another tool in the toolset to enhance server and site performance.

But if one does use caching, just use Nginx-based caching, given the excellent caching logic in Nginx.

Regards...
 