Tag Archives: apache

Day wrapup

About 5:20, I solved the mod_perl problem, but I don’t understand why what I did solved the problem. This is almost as frustrating as having it not work in the first place, and makes me feel utterly incompetent.

The main problem here is that I’m working with a large amount of code that I did not write, do not completely understand, and lack the time to either rewrite or investigate. Much of it was written by a former minion who was one of the most talented programmers I’ve had the pleasure of working with. Which is all well and good, but when stuff breaks, I frequently run across cryptic lines of code that I don’t really understand, and certainly don’t know the reasoning behind. Like today. And, of course, he’s been gone long enough that I’m sure he no longer would know what it was there for.

Anyways, I seem to have things working now. It appears to be something that works under Perl 5.6 but not under 5.8, and I don’t know why, or whether it’s a bug, or whether it should have been expected, or much of anything.

And now, it is almost 7:30, and I have not had anything to eat because I’ve been on the phone, and am rather … shall we say … irritated about something, and have been unable to think about food until this very moment.

Another week, half over before I have accomplished anything. Fortunately, I’m doing training next week, and won’t have to deal with crap like this. I hope.

mod_perl frustrations

I have spent almost the entire day thus far chasing a mod_perl problem, and I don’t appear to be any closer now than I was first thing this morning. It seems that almost nothing has worked, mod_perl-wise, since I upgraded to Perl 5.8 a week or so ago, but it had not really mattered until today, because I wasn’t working with any of that stuff. Now, today, I have to get a few trivial bugs fixed on a customer site, and I can’t get anything working on this test server. I think I might just uninstall everything and start all over from scratch. There’s only so much frustration I can take in one day.

:wq

TRACE works as designed! Panic! Run for the hills!

WhiteHat Security, perhaps in an attempt to make themselves appear important, or perhaps because they really thought it was true, released a security alert a few days ago. You can read it here (http://www.whitehatsec.com/press_releases/WH-PR-20030120.txt).

In summary, here it is.

HTTP provides a TRACE method, for debugging purposes. When you send a TRACE request, you get it back, including the message body, headers, etc.

WhiteHat’s security alert said that when you send a TRACE request, you get it back, including the message body, headers, etc.

Pretty scary, huh?

So, basically, they are saying what the rest of us have known since 1992. After all, it is in the HTTP specification, and you have read that, right?
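In case you have not, here is the whole show, with an invented hostname and an invented header standing in for your “sensitive” data:

TRACE / HTTP/1.1
Host: www.example.com
X-Super-Secret: hello

HTTP/1.1 200 OK
Content-Type: message/http

TRACE / HTTP/1.1
Host: www.example.com
X-Super-Secret: hello

The server echoes back exactly what you just sent it. That is all TRACE does, and all it has ever done.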

Apparently, they think that you should not be able to get at information that you just sent to the server. It is secret or something.

And they provide a variety of scary JavaScript examples that allow you to intercept your own request, and send that request to some third-party site. Now, this is actually where they say that the vulnerability lies. They seem to think that this is the fault of the TRACE command. The fact of the matter is that the client has always had access to this data. Perhaps – just perhaps – this could be construed as a flaw in JavaScript – that you could possibly gain access to cookies, or auth information, and send it to some other site. But that is possible via other means which are not quite so tortuous.

So, folks, if you are in a panic about TRACE, you might want to read http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=104333761011676&w=2 which talks about it a little more scientifically than I have, and explains why it is a bunch of hogwash. You can feel free to disable TRACE on your Apache server if you really want to, but it won’t gain you anything other than a false sense of security.
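For the record, if you do decide to turn it off, the usual recipe uses mod_rewrite (this assumes mod_rewrite is compiled in and enabled):

RewriteEngine on
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]

But, again, that forbidden response buys you nothing that matters.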

Migrating to 2.0, part two

I finished migrating to Apache 2.0 on Eris. I’m actually still running two daemons. I’m running DAV in its own process, on an alternate port. For that one I built a very stripped-down Apache, taking out all the modules that I did not think I would need. I’ll bet I could strip it down even further, but it seems pretty good as it is. I’m running it with the worker MPM, and just a few threads.
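For the curious, the build went something like the following. Treat it as a sketch rather than a recipe: the prefix is whatever you like, and the exact list of modules to disable is from memory.

./configure --prefix=/usr/local/apache2-dav \
    --with-mpm=worker \
    --enable-dav --enable-dav-fs \
    --disable-cgid --disable-userdir --disable-autoindex \
    --disable-include --disable-negotiation --disable-status
make && make install

The “just a few threads” part is then a matter of setting ThreadsPerChild to something small in httpd.conf.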

The other process is bigger (i.e., more modules) and is running SSL as well. I’m running worker on that one also, and it really seems to be running faster. I suppose I could be imagining this, but it feels snappier. It could also be because I’m running mod_deflate; I was using mod_gzip before, and that caused some problems, as mentioned in an earlier note.
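The mod_deflate configuration is nothing fancy. Roughly this (a minimal sketch, compressing just the text types):

AddOutputFilterByType DEFLATE text/html text/plain text/css

mod_deflate is a proper 2.0 output filter, which is a much cleaner arrangement than the hoops mod_gzip has to jump through under 1.3.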

My other main server is still running 1.3, because I feel better with 1.3 and mod_perl. Hopefully, I can move that to 2.0 real soon now also.

Migrating to Apache 2.0

Now that I have given a “Migrating to Apache 2.0” talk a few times, I thought it might be a good time to try it myself. Actually, my last PHP web site went away, and I’m not using mod_perl on the server in question, so it seemed like a reasonable thing to try. Also, after my latest frustrations with mod_gzip, a move to mod_deflate seemed like a good idea as well.

So, I’m moving one of my two main servers to Apache 2.0.

The hardest part of the entire process really seems to be the swap itself, because there are so many hard-coded path names lying around pointing to /usr/local/apache. So I’m building Apache 2.0 in /usr/local/apache2, I’ll do some symlinking for a bit while I rebuild it in /usr/local/apache, and then … well, it should just work. I think.
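In other words, something like this (the name for the parked 1.3 tree is arbitrary):

mv /usr/local/apache /usr/local/apache-1.3
ln -s /usr/local/apache2 /usr/local/apache

Then, once everything is known to work, rebuild with --prefix=/usr/local/apache and drop the symlink.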

mod_gzip

It appears that mod_gzip keeps work files FOREVER. Don’t tell it that you want to keep work files, because it will. I appear to have GIGABYTES of mod_gzip work files. And I’ve been backing them up. For months. This is an enormous pain.
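If you want to avoid this fate, the directives involved, as I remember them from the mod_gzip documentation, are the following. Check the docs rather than trusting my memory:

mod_gzip_temp_dir /tmp/mod_gzip
mod_gzip_keep_workfiles No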

Third time’s the charm

DrBacchus’ Journal: Surreal tech support situation

Third time’s the charm. The first two people that I spoke to were not able to help me, and it was not even clear that they understood the problem. This time, I spoke with Tim, who clearly knew what I was talking about. This made the whole conversation much more pleasant.

While he agreed with me as to the problem and the likely solution, it was not quite that simple, since they are, of course, using some mass vhosting module. He was not able to tell me what module, exactly, they were using. Not that it matters – I was just curious. But apparently the account name makes the hostname different from what we are using, and this goes into the mass-vhosting algorithm. He agreed to change the name on the account, which should resolve the problem, but could take several hours to actually happen.

So, we seem to have a happy conclusion to this all. And the next time you call Earthlink support, ask for Tim.

Foiling Nimda

Nimda and Code Red are IIS worms. As an Apache server administrator, you are not vulnerable to them, but they do fill up your log files. Here are a few techniques to prevent that.

One: Apache::CodeRed. Find it at http://cpan.org/modules/by-module/Apache/. Easy to install, easy to configure. But it needs mod_perl, so if you don’t have that, you’re out of luck.

Also, I have a hacked version of this, which adds the address to my firewall deny list. I think I should probably leave that as an exercise, but basically you have it call a suid script which takes an IP address as its argument and adds a host to your firewall. Presumably you could do this from a CGI program as well, and invoke it thus:

# mod_actions: requests for the worm URLs get handed to the CGI below
Action codered /cgi-bin/code_red.cgi
<LocationMatch "/(default.ida|msdac|root.exe|MSADC|system32)/">
    SetHandler codered
</LocationMatch>

The cgi would look something like:

#!/usr/bin/perl
# Grab the client's address and hand it to the firewall wrapper.
my $ip = $ENV{REMOTE_ADDR};
`/usr/bin/BLOCK $ip`;
# Return a trivial but valid response, so nothing hits the error log.
print "Content-type: text/html\n\n";
print "bye, now.";

Back to the CGI itself: this will get rid of error log entries, since the worm’s request now maps to a valid URL. This is probably my most recommended approach, unless you want to use Apache::CodeRed, which also sends email to the domain contacts and ISP contacts. That is perhaps the best thing to do, but it generates a lot of bounce messages.

Note that even if you don’t add them to your firewall, the above script can be used, minus the two firewall lines (the $ip assignment and the call to BLOCK), to eliminate the error messages. And, in conjunction with the “don’t log” recipe below, it can remove the problem entirely.

Two: Conditional logging. See the tutorial at http://httpd.apache.org/docs/logs.html#conditional or, for the recipe version, you need the following:

SetEnvIf REQUEST_URI "default.ida" dont-log
CustomLog logs/access_log combined env=!dont-log

As noted previously, this only covers the access log. The error log is trickier. One way to handle this is to actually redirect these requests to a virtual host with a /dev/null’ed error log. That is how I handled it before I started firewalling them.
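If you want to try the virtual host trick, here is roughly the shape of it. The port, the pattern, and the use of mod_rewrite’s proxy flag are my guesses at a mechanism, not a transcript of my old config, and the [P] flag requires mod_proxy:

# In the main server: hand worm requests to the throwaway vhost
RewriteEngine on
RewriteRule (default\.ida|root\.exe|cmd\.exe) http://127.0.0.1:8008/trap [P,L]

# The throwaway vhost, with its logs pointed at the bit bucket
Listen 8008
<VirtualHost 127.0.0.1:8008>
    ErrorLog /dev/null
    TransferLog /dev/null
</VirtualHost>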

However, the conditional logging, in conjunction with the recommended CGI program, will eliminate all log entries other than the initial access to the CGI program, and even that one can go away if you apply the same conditional logging trick to the CGI’s URL.

Two quick notes about the firewall approach. First, if you have a busy site, this is *NOT* recommended, as it will cause your firewall list to grow to an absurd size; I’m doing this on a home DSL account. Second, if you firewall them, you’ll get one entry in the error log, perhaps, but no more. There will probably be entries in your firewall log instead, and those are far more satisfying. Reset your firewall deny list periodically.


Follow-up: Ken Coar notes that you should also check out EarlyBird.

Surreal tech support situation

One of my customers hosts their web site on a Mindspring server, which was bought by Earthlink at some point. The site is running on Apache. The customer had me set up password authentication for a subdirectory, and it was necessary for me to do this with an .htaccess file, which was easy enough. However, since ServerName is apparently set incorrectly in the configuration, they were having the problem described in the FAQ, where you get asked for your password twice, and end up on a hostname that is not what you typed in.

I called EarthLink, and talked with two different support reps before I could get someone that even acknowledged that the problem was happening – the first guy simply would not admit that it was happening. I explained the problem to him (the second guy), told him how to solve the problem, and gave him the URL for the FAQ where it is described. (http://httpd.apache.org/docs/misc/FAQ.html#prompted-twice) After putting me on hold for a lengthy period of time, apparently talking to other experts, he came back and told me that the problem was beyond their expertise to deal with. He encouraged me to read the .htaccess file tutorial on the Apache web site at apache.org. (http://httpd.apache.org/docs/howto/htaccess.html)

Now, for those of you who don’t already know, the reason that this was so very surreal is that I wrote the .htaccess tutorial on the Apache web site. I’m pretty sure that the tech support guy did not believe me when I told him this, but, honest, I really did. And, of course, I’m no closer to having a solution to the problem, because it’s something that has to be done in the main configuration file. ServerName is set incorrectly, and I would need access to the main server config file to fix that, or to set UseCanonicalName off, which is the other recommended solution.
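For reference, the fix itself is one or two lines in the main server configuration, with the hostname below standing in for the customer’s real one:

ServerName www.customersite.com
UseCanonicalName off

Setting ServerName correctly, or turning UseCanonicalName off, should each be enough on its own.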

Hopefully, I’ll get someone on the phone next week that believes me, and is willing to implement the recommended solution.

Apache Web Server Administration, by Charles Aulds

Linux Apache Web Server Administration
Charles Aulds
Craig Hunt Linux Library
Sybex Press

Well, I tried to be very critical of this book, because, after all, I want you to buy my book. But it really is very good.

It has thorough coverage of all the important topics. I found a number of places where information was wrong, but most of these were probably attributable to typesetting errors, rather than author errors. Missing parentheses, for example.

The examples were, for the most part, excellent, with good supporting explanations. The diagrams were good too – not gratuitous, but actually useful in most cases.

If I’m going to complain about something, it would be that there is no clear distinction between when he’s talking about 1.3 and when he’s talking about 2.0. Or is it all 2.0? I’m really not sure. Some of it appears to be 1.3-specific, but in other places he’s very clearly talking about 2.0, although this is not mentioned in the text, and might not be clear to other folks.

Overall, recommended and thorough.

(The book was given to me by the publisher, but I did not receive any other incentive to say nice things about it.)