Originally Posted by RotHorseKid
That's what I thought.
a) Could I just let Squid write its httpd-emulated logs to the respective web_log files for the sites?
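To make (a) concrete, something like this squid.conf fragment is what I have in mind. The directive names come from the Squid 2.x documentation; the path is just an example, and I have not tested any of this:

```
# squid.conf sketch (untested; path is only an example)

# Squid 2.x: write access-log entries in Apache's common log format,
# so existing log analyzers can parse them
emulate_httpd_log on
cache_access_log /var/log/squid/access.log

# Caveat: Squid writes ONE log for all sites it fronts, so getting
# per-site web_log files would still need a post-processing step
# that splits entries by virtual host.
```

(In Squid 2.6 and later the directive is `access_log` rather than `cache_access_log`, as far as I can tell from the docs.)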
b) In tests, I found decreases in latency when using Squid, especially for pages with lots of graphics (as Tenaka already found). This is simply because, with 50+ different sites on one server, Squid does a better job of serving pages/graphics from memory than Linux and Apache alone. So by not going through Apache, I avoid the latency introduced by disk IO.
c) I also have some sites with high-latency DB connections. Much near-static data comes from these DBs. By tuning the web applications to send the correct Cache-Control headers for these pages, I can minimize the number of actual DB queries made.
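On the application side, what I mean by (c) is just setting the header so Squid is allowed to keep a copy. A minimal sketch (the helper name and the max-age value are my own illustration, not anything specific to my setup):

```python
# Hypothetical helper: build response headers that let a shared cache
# like Squid serve near-static, DB-backed pages without hitting the DB.

def cache_headers(max_age=3600):
    """Return HTTP response headers for a cacheable page.

    public  -> shared caches (e.g. Squid) may store the response
    max-age -> seconds the cached copy stays fresh; while it is
               fresh, repeated requests never reach the slow DB
    """
    return {
        "Cache-Control": "public, max-age=%d" % max_age,
    }

# A page whose DB data changes at most every 10 minutes:
print(cache_headers(600)["Cache-Control"])  # -> public, max-age=600
```

The idea is that each web application picks a max-age matching how stale its DB data may get, and Squid absorbs everything in between.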
d) At least these were my findings; I am open to discussion here.
Originally Posted by till
Only the traffic that goes through apache is counted.
I do not fully understand this. Isn't the traffic still going through Apache? The only difference I see is that Apache isn't delivering to the client but to Squid. And Squid, I suppose, requests from Apache the same way a client would, so the logfiles should still be fine as usual. Or is there a major point I am missing here?
See my lines above.
Can you explain this in a bit more detail? Are you talking about optimizing MySQL settings?
Please keep in mind that I am talking about theory here, so be patient: I have not tested this yet, I have just been reading about it and am considering implementing it right now.