Old 30th November 2005, 14:45
Ovidiu
Senior Member
Join Date: Sep 2005

Originally Posted by RotHorseKid
That's what I thought.
a) Could I just let Squid write its httpd-emulated logs to the respective web_log files for the sites?
b) In tests, I found decreases in latency by using Squid, especially for pages with lots of graphics (as Tenaka already found). This is simply due to the fact that, for 50+ different sites on one server, Squid does a better job serving pages/graphics from memory than Linux and Apache alone. So by not going through Apache, I decrease the latency introduced by disk I/O.
c) Secondly, I have some sites with high-latency DB connections. Much near-static data comes from these DBs. By tuning the web applications to write the correct Cache-Control headers for these pages, I can minimize the number of actual DB queries made.
d) At least these were my findings, I am open to discussion here.
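For the httpd-style logging in a) and the accelerator setup in b), a configuration sketch using Squid 2.5-era directive names might look as follows (ports and paths are illustrative, not taken from the thread). Note that stock Squid writes one combined access log, so splitting it into per-site web_log files would still need a separate post-processing step:

```
# squid.conf sketch: Squid as an HTTP accelerator in front of Apache
# (Squid 2.5-style directives; adjust ports/paths for your setup)

http_port 80                      # Squid answers the client requests
httpd_accel_host 127.0.0.1        # forward cache misses to Apache...
httpd_accel_port 81               # ...listening on another port
httpd_accel_uses_host_header on   # needed for name-based virtual hosts
httpd_accel_with_proxy off

# Log in Apache common log format instead of Squid's native format
emulate_httpd_log on
cache_access_log /var/log/squid/access.log
```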
Originally Posted by till
Only the traffic that goes through apache is counted.
I do not fully understand this. Isn't traffic still going through Apache? The only difference I see is that Apache isn't delivering to the client but to Squid. And Squid, I suppose, is requesting from Apache the same way a client would, so the log files should still be OK as usual? Or is there a major point I am missing here?

a) see my above lines
b) great ;-)
c) Can you explain this in a little more detail? Are you talking about optimizing MySQL settings?
d) Please keep in mind that I am talking about theory, so be patient. I have not tested this yet; I have just been reading about it and am considering implementing it right now.
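On point c), the idea of letting the web application mark near-static DB-backed pages as cacheable could be sketched like this (a hypothetical helper, not code from the thread; the max-age values are just examples):

```python
# Sketch: choosing Cache-Control response headers so a shared cache
# like Squid can serve near-static, DB-backed pages without hitting
# the slow database on every request. Names are illustrative.

def cache_headers(max_age_seconds):
    """Headers allowing a shared cache to store the response."""
    return {"Cache-Control": "public, max-age=%d" % max_age_seconds}

# Near-static catalogue page: let Squid serve it for 10 minutes.
static_page = cache_headers(600)

# Per-user page: must not be stored by a shared cache.
private_page = {"Cache-Control": "private, no-cache"}

print(static_page["Cache-Control"])   # public, max-age=600
```

With headers like these, only the first request in each max-age window reaches Apache and the database; Squid answers the rest from its cache.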