Linux Server Security (2nd Edition) [Electronic resources]

Michael D. Bauer
10.2. The Web Server


A secure
web service starts with a secure web server, which in turn starts
with good code: no buffer overflows or other problems that could
be exploited to gain root privileges. Apache has
had a handful of critical vulnerabilities over the past few years,
and has generally released fixed versions promptly. Apache powers
about two-thirds of the 55 million hosts in the monthly Netcraft
survey (http://news.netcraft.com/archives/web_server_survey.html).

Microsoft's Internet Information Server (IIS), with
less than a third of Apache's market share, has had
many critical and ongoing security problems. A Microsoft Security
Bulletin issued in April 2002 described 10 critical problems in IIS 4
and 5. These include vulnerabilities to buffer overruns, Denial of
Service, and cross-site scripting; a number of these provide
full-system privileges to the attacker. IIS 6 is reportedly better.

In practice, most Apache security problems are caused by
configuration errors, and I'll talk about how to
avoid these shortly. Still, there are always bug fixes, new features,
and performance enhancements, along with the occasional security fix,
so it's best to start from the most recent stable
release.

Although Apache 2.0 was released a few years ago, security and bug
fixes continue for the 1.3 branch. Apache 2.0 has some interesting
additions, such as filters (pipelined input
modules) and MPMs (multiprocessing modules). The
default MPM, prefork, works like 1.3 by starting
a bunch of processes and assigning requests among them. The
worker MPM handles requests in threads. But 2.0
uptake has been slow. One reason is that the threaded MPM requires
all linked Apache modules and all of their supporting
libraries to be threadsafe. Although Apache 2 and PHP
(Version 4 and up) are threadsafe, some of the libraries used by PHP
extensions may not be. This can cause errors that are extremely
difficult to track. For this reason, Rasmus Lerdorf and the other PHP
developers recommend using Apache 1.3 with PHP, or Apache 2 with the
prefork MPM.
Another method is
to use FastCGI (http://www.fastcgi.com/), which runs as a
separate process from Apache.

I still use Apache 1.3 with PHP. Since most users are still working
with 1.3, that's what will be used in the examples
in this chapter, with some 2.0 notes where needed. The book
Apache Security (O'Reilly) has
more details on security for 2.0.


10.2.1. Build Time: Installing Apache


Attacks are so frequent on today's Internet that you
don't want to leave a window for attack, even for
the few minutes it takes to set up a secure server. This section
covers setting up your environment and obtaining the right version of
Apache.

10.2.1.1 Setting up your firewall


A public web server is commonly located
with email and nameservers in a DMZ, between outer and inner
firewalls. You want to configure access for two classes of visitor:

- The public, visiting your site from the Internet
- Web administrators, who may be coming from the outside, inside, or
  another server in the DMZ

Web servers normally listen on TCP ports 80
(http:) and 443 (secure HTTP,
https:). While you're
installing Apache and the pieces are lying all around, block external
access to these ports at your firewall (with iptables or other open
source or commercial tools). If you're installing
remotely, open only port 22 and use ssh. After
you've configured Apache, tightened your CGI scripts
(as described in this chapter), and tested the server locally, you
can then reopen ports 80 and 443 to the world.
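The holding pattern described above can be sketched as an
iptables-restore fragment (a sketch only: these rules are assumptions
to merge into your real ruleset, not a complete firewall):

```
# Keep ssh open for remote administration while Apache is being set up;
# refuse web traffic until the server is configured and tested.
*filter
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j DROP
-A INPUT -p tcp --dport 443 -j DROP
COMMIT
```

When the server is ready, delete the two DROP rules (or replace them
with ACCEPT) and reload.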

How you handle administrators depends on where they are and how they
want to get to the web server. If administrators use command-line
tools such as those described in this chapter,
ssh is sufficient. If they use some web GUI,
permissions and passwords need to be set for the corresponding
scripts. Administrators might also tunnel to some port with
ssh or stunnel, or use
other tools over a VPN.

10.2.1.2 Checking your Apache version


If you have Linux, you almost
certainly already have Apache somewhere. Check your version with the
following command:

httpd -v

Check the Apache mirrors (http://www.apache.org/mirrors/)
or your favorite Linux distribution site for the most recent stable
release of Apache, and keep up with security updates as
they're released.

If you're
running an older version of Apache,
you can build a new version and test it with another port, then
install it when ready. If you plan to replace any older version,
first see if another copy of Apache (or another web server) is
running:

service httpd status

or:

ps -ef | grep httpd

If Apache is running, halt it by entering the following:

apachectl stop

or (in Red Hat and Fedora):

service httpd stop

or:

/etc/init.d/apache stop

Make sure there aren't any
other web servers running on port 80:

netstat -an | grep ':80'

If you see one, kill -9 its process ID and check
that it's really, most sincerely dead. You can also
prevent it from starting at the next reboot with this command:

chkconfig httpd off

10.2.1.3 Installation methods


Should you get a binary installation or source? A binary installation
is usually quicker, while a source installation is more flexible and
current. I'll look at both but emphasize source,
since security updates usually should not wait.

Of
the many Linux package managers, RPM may be the most familiar, so
I'll use it for this example. Grab the most current
stable version of Apache from http://httpd.apache.org, your favorite Linux
distribution, or an RPM or yum repository.

Depending on whose RPM package you use, Apache's
files and directories will be installed in different places. This
command prints where the package's files will be
installed:

rpm -qpil httpd-2.0.52-1.i386.rpm

We'll soon see how to make Apache's
file hierarchy more secure, no matter what it looks like.

For a source installation, start with the freshest stable tarball.
Here's an example for 1.3:

# wget http://mirrors.isc.org/pub/apache/httpd/apache_1.3.33.tar.gz
# tar xvzf apache_1.3.33.tar.gz
# cd apache_1.3.33

If the file has an MD5 or GPG signature, check it (with
md5sum or gpgv) to ensure
you don't have a bogus distribution or a corrupted
download file.
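The md5sum half of that check looks like this (a sketch: the tarball
and checksum file here are stand-ins created on the spot; with a real
download, the .md5 file comes from the Apache mirror):

```shell
# Fabricate a "download" and its checksum file, for illustration only:
echo "pretend tarball contents" > apache_1.3.33.tar.gz
md5sum apache_1.3.33.tar.gz > apache_1.3.33.tar.gz.md5

# Verification step -- this is the part you run on a real download;
# it prints "apache_1.3.33.tar.gz: OK" if the file is intact:
md5sum -c apache_1.3.33.tar.gz.md5
```

A GPG signature (gpgv against the Apache developers' public keys) is
stronger, since an attacker who replaces the tarball on a mirror can
replace the .md5 file next to it just as easily.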

Then, run the GNU configure script. A bare:

# ./configure

will install everything in directories under
/usr/local/apache (Apache 2 uses
/usr/local/apache2). To use another directory,
use --prefix:

# ./configure --prefix=/usr/other/apache

Apache includes some standard layouts (directory
hierarchies). To see these and other script options, enter the
following:

# ./configure --help

Next, run good old make:

# make

This will print pages of results, eventually creating a copy of
Apache called httpd in the
src subdirectory. We'll look at
what's actually there in the next section. When
you're ready to install Apache to the target
directory, enter the following:

# make install

10.2.1.4 Linking methods


Did
the preceding method produce a statically linked or dynamically
linked executable? What modules were included? By including fewer
modules, you use less memory and have fewer potential problems.
"Simplify, simplify," said Thoreau,
on behalf of the least-privilege principle.

Dynamic
linking provides more flexibility and a smaller memory
footprint. Dynamically linked versions of Apache are easy to extend
with some configuration options and an Apache restart. Recompilation
is not needed. I prefer this method, especially when using the Perl
or PHP modules. See http://httpd.apache.org/docs/dso.html for
details on these Dynamic Shared Objects (DSOs). Your copy of Apache
is dynamically linked if you see files with .so
in their names, and this:

# httpd -l
Compiled-in modules:
http_core.c
mod_so.c

A statically
linked Apache puts the modules into one binary file, and
it looks something like this:

# httpd -l
Compiled-in modules:
http_core.c
mod_env.c
mod_log_config.c
mod_mime.c
mod_negotiation.c
mod_status.c
mod_include.c
mod_autoindex.c
mod_dir.c
mod_cgi.c
mod_asis.c
mod_imap.c
mod_actions.c
mod_userdir.c
mod_alias.c
mod_access.c
mod_auth.c
mod_setenvif.c
suexec: disabled; invalid wrapper /usr/local/apache/bin/suexec

Specify --activate-module and
--add-module to modify the module list. Changing
any of the modules requires recompilation and relinking.

Besides its built-in modules (http://httpd.apache.org/docs/mod/), Apache
has hundreds of third-party modules (http://modules.apache.org/). Some modules
that you may want to build into Apache are listed in Table 10-2.

Table 10-2. Some Apache modules

Apache module              Description/URL

mod_perl                   Perl
                           http://perl.apache.org/

mod_php                    PHP
                           http://www.php.net/

mod_dav                    WebDAV
                           http://httpd.apache.org/docs-2.0/mod/mod_dav.html
                           http://www.webdav.org/mod_dav/

mod_security               Adds snort-style intrusion detection
                           http://www.modsecurity.org/ and Chapter 13

mod_bandwidth, mod_choke   Bandwidth management
                           http://www.cohprog.com/mod_bandwidth.html
                           http://os.cyberheatinc.com/modules.php?name=Content&pa=showpage&pid=7

mod_backhand               Load balancing
                           http://www.backhand.org/mod_backhand/

mod_pubcookie              Authentication for single sign-on
                           http://www.pubcookie.org/

10.2.1.5 Securing Apache's file hierarchy


Wherever your installation scattered
Apache's files, it's time to make
sure they're secure at runtime. Loose ownership and
permission settings are a common cause of security problems.

We want the following:

- A user ID and group ID for Apache to use
- User IDs for people who will provide content to the server

Least privilege suggests we create an Apache user ID with as little
power as possible. You often see use of user ID
nobody and group ID nobody.
However, these IDs are also used by NFS, so it's
better to use dedicated IDs. Red Hat uses user ID
apache and group ID apache.
The apache user has no shell and few
permissions: just the kind of guy we want, and the one
we'll use here.

There are different philosophies on how to assign permissions for web
user IDs. Here are some solutions for content files (HTML and such):

Add each person who will be modifying content on the web site to the
group apache. Make sure that others in the group
(including the user ID apache) can read but not
write one another's files (run umask 137;
chmod 640
for each content file and directory). These
settings allow developers to edit their own files and let others in
the group view them. The web server (running as user
apache) can read and serve them. Other users on
the web server can't access the files at all. This
is important because scripts may contain passwords and other
sensitive data. The apache user
can't overwrite files, which is also useful in case
of a lapse.
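The file half of that first scheme can be sketched in a scratch
directory (the directory names are stand-ins, and the chgrp to group
apache that a real site needs is omitted so the sketch runs without
that group existing):

```shell
# Demonstrate the 640-file / 750-directory scheme described above:
umask 137                         # new files default to rw-r----- (640)
mkdir -p htdocs
chmod 750 htdocs                  # owner and group may enter; others may not
touch htdocs/index.html           # inherits 640 from the umask
stat -c '%a %n' htdocs/index.html # prints "640 htdocs/index.html"
```

With group ownership set to apache, the server can read and serve the
file, the owning developer can edit it, and other accounts on the box
see nothing.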

The previous settings may be too extreme if you need to let web
developers overwrite each other's files. In this
case, consider mode 660. This is a little less secure, because now
the apache user can also overwrite content
files.

A common approach (especially for those who recommend user ID
nobody and group ID nobody)
is to use the other permissions for the
apache user (mode 644). I think this is less
safe, since it also gives read access to other accounts on the
server.

Let the apache user run the server, but
don't give it write access to any of its site files.
Have developers work on another development server and copy sites to
the production server under a single, separate user account.


Table 10-3 lists the main types of files in an
Apache distribution, where they end up in a default RPM installation
or a source installation, and ownership and permissions.

Table 10-3. Apache installation defaults

File types             Notable files             Red Hat RPM directories   Source directories          Owner    Dirmode   Filemode
Initialization script  httpd                     /etc/init.d               (No standard)               root     755       755
Configuration files    httpd.conf, access.conf,  /etc/httpd/conf           /usr/local/apache/conf      root     755       644
                       srm.conf
Logs                   access_log, error_log     /etc/httpd/logs           /usr/local/apache/logs      root     755       644
Apache programs        httpd, apachectl          /usr/sbin                 /usr/local/apache/bin       root     755       511
Apache utilities       htpasswd, apxs,           /usr/sbin                 /usr/local/apache/bin       root     755       755
                       rotatelogs
Modules                mod_perl.so               /usr/lib/apache           /usr/local/apache/libexec   root     755       755
CGI programs           (CGI scripts)             /var/www/cgi-bin          /usr/local/apache/cgi-bin   root     755       750 [1]
Static content         (HTML files)              /var/www/html             /usr/local/apache/htdocs    apache   470       640
Password/datafiles     (Varies)                  (No standard)             (No standard)               apache   470       640

[1] Files should be owned by group
apache.


10.2.1.6 Logging


The Apache log directories should be
owned by root and visible to no one else.
Looking at Table 10-3, the default owner is
root but the directory permissions are
755 and file permissions are
644. We can change the directory permissions to
700 and the file permissions to
600.
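The tightening described above is two chmod commands; the sketch below
uses a scratch "logs" directory standing in for your real one
(/etc/httpd/logs or /usr/local/apache/logs):

```shell
# Stand-in log directory and files, for illustration:
mkdir -p logs
touch logs/access_log logs/error_log

# Restrict the directory to root-equivalent access only:
chmod 700 logs
chmod 600 logs/access_log logs/error_log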

Logs can reveal sensitive information in the URLs (GET parameters)
and in the referrer. An attacker with write access can plant
cross-site scripting bugs that would be triggered by a log analyzer
as it processes the URLs.

Logs also grow like crazy and fill up the disk. One of the more
common ways to clobber a web server is to fill up the disk with
logfiles. Use logrotate to rotate them daily, or
less often if your server isn't that busy.
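A logrotate entry for the setup above might look like this sketch (the
paths assume a source install, and the retention numbers are
illustrative; adapt both to your site):

```
# Hypothetical /etc/logrotate.d/httpd entry:
/usr/local/apache/logs/*log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/local/apache/bin/apachectl graceful > /dev/null
    endscript
}
```

The graceful restart in postrotate makes Apache reopen its log files
without dropping in-flight requests.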


10.2.2. Setup Time: Configuring Apache


Configuring
a web server is like configuring an email or DNS server: small
changes can have unforeseen consequences. Most web security problems
are caused by configuration errors rather than exploits of the Apache
code.

10.2.2.1 Apache configuration files


I mentioned that
Apache's configuration files could be found under
/etc/httpd/conf,
/usr/local/apache/conf, or some less well-lit
place. The most prominent file is httpd.conf,
but in 1.3, you will also see access.conf and
srm.conf. These are historic remnants from the
original NCSA web server. Only httpd.conf is
used for Apache 2.0.

To keep local changes together, you can use a separate file like
mystuff.conf and process it with the
Include directive:

Include mystuff.conf

In Apache 2.0, you can specify a directory, and all files in it will
be processed in alphabetical order:

Include /usr/local/apache/conf/mysites/

Be careful, because this will grab everything in the directory,
including any backup files or saved editor sessions.
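One way to spot strays before Apache reads them is a find pass over
the include directory (a sketch using a scratch directory with one
deliberately planted editor backup):

```shell
# Hypothetical include directory with one stray editor backup file:
mkdir -p conf/mysites
touch conf/mysites/site1.conf conf/mysites/site1.conf~

# Anything this prints would be slurped by Include but probably shouldn't be:
find conf/mysites -type f ! -name '*.conf'
```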

Any time you change Apache's configuration, check it
before restarting the server:

# apachectl configtest

If this succeeds, start Apache:

# apachectl start

Before starting Apache, let's see how secure we can
make it.

10.2.2.2 Configuration options


To see what options your copy of
Apache understands, run the following:

# httpd -L

This reflects the modules that have been included, either dynamically
or statically. I'll discuss the core options later.

10.2.2.2.1 User and group


In Section 10.2.1.5, I covered which
user and group IDs to use for Apache and its files. Apache is started
by root, but the runtime ownership of all the
Apache child processes is specified by the User
and Group options. These directives should match
your choices:

User apache
Group apache


Do not use root for the
user ID! Choose an ID with the least privilege and no login shell.
Apache 2 cannot be run as root unless
it's compiled with the
-DBIG_SECURITY_HOLE option.

10.2.2.2.2 Files and directories


The top of the server directory hierarchy is
ServerRoot:

ServerRoot /usr/local/apache

The top of the web-content hierarchy (for static HTML files, not CGI
scripts) is DocumentRoot:

DocumentRoot /usr/local/apache/htdocs

10.2.2.2.3 Listen


By default, Apache listens on all IP addresses.
Listen specifies which IP addresses and/or ports
Apache should serve.

For initial testing, you can force Apache to serve only the local
address:

Listen 127.0.0.1

or a different port:

Listen 81

This is useful if you need to keep your current server live while
testing the new one.

Address and port may be combined:

Listen 202.203.204.205:82

Use multiple Listen directives to specify more
than one address or port. You may modify your firewall rules to
restrict access from certain external addresses while testing your
configuration. In Apache 2.0, Listen is mandatory.

10.2.2.2.4 Containers: directory, location, and files


Apache controls access to resources (files, scripts, and other
things) with the container directives:
Directory, Location, and
Files. Directory applies to an
actual directory in the web server's filesystems.
Location refers to a URL, so its actual location
is relative to DocumentRoot (Location / =
DocumentRoot). Files refers to
filenames, which may be in different directories.

Each of these has a counterpart that uses regular expressions:
DirectoryMatch, LocationMatch,
and FilesMatch.

Within these containers are directives that specify
access control
(what can be done) and authorization (by whom).

I'll trot out least privilege again and lock Apache
down by default (put this in access.conf if you
want to keep httpd.conf pristine):

<Directory />
Options none
AllowOverride none
Order deny,allow
Deny from all
</Directory>

By itself, this is a bit extreme. It won't serve
anything to anyone, even if you're testing from the
same machine. Try it, just to ensure you can lock yourself out. Then
open the door slightly:

<Directory /usr/local/apache/htdocs>
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Directory>

Now you can use a command-line web utility (such as
wget, lynx, or
curl) or a graphic browser on the same box to
test Apache. Does it return a page? Do you see it logged in
access_log? If not, what does
error_log say?
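That loopback test can be wrapped in a tiny helper (a sketch; the port
is whatever your Listen directive says, and 8080 below is just an
example):

```shell
# Print only the HTTP status code for a local server on the given port:
check_http() {
    curl -s -o /dev/null -w '%{http_code}' "http://127.0.0.1:$1/"
}
```

check_http 8080 should print 200 once Apache is answering on that
port; 000 means nothing is listening there at all.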

10.2.2.2.5 Options


Table 10-4 lists the possible values for
Options.

Table 10-4. Apache resource options

Value

Description

All

Allow all but
MultiViews. You don't want to be
this generous. This is the default!


ExecCGI

Allow CGI scripts. Use sparingly.


FollowSymLinks

Follow symbolic links. This is a slight efficiency gain, since Apache
avoids a stat call.


SymLinksIfOwnerMatch

Follow symbolic links only if the target and the link have the same
owner. This is safer than FollowSymLinks.


Includes

Allow SSI, including #exec cgi. Beware.


IncludesNoExec

Allow SSI, but no #exec or #exec
cgi
. Use this if you only want file inclusion.


Indexes

Show a formatted directory listing if no
DirectoryIndex file (such as
index.html) is found. This should be avoided,
since it may reveal more about your site than you intend.


MultiViews

This governs content negotiation (e.g., serving multiple languages);
disable it unless you need that.

Preceding an option value with a minus (-) removes
it from the current options, preceding it with plus
(+) adds it, and a bare value is absolute:

# Add Indexes to current options:
Options +Indexes
# Remove Indexes from current options:
Options -Indexes
# Make Indexes the only current option, disabling the others:
Options Indexes

10.2.2.2.6 Resource limits


Table 10-5 lists the directives that help avoid
resource exhaustion from
Denial of Service attacks or
runaway CGI programs.

Table 10-5. Apache resource limits

Directive

Default

Usage

MaxClients

256

Maximum number of simultaneous
requests. Make sure you have enough memory for this many simultaneous
copies of httpd, unless you like to watch your
disk lights blink furiously during swapping.


MaxRequestsPerChild

0

Maximum requests for a child process (0=infinite).
A positive value helps limit bloat from memory leaks.


KeepAlive

on

Allow HTTP 1.1 keepalives (reuse of TCP connection). This increases
throughput and is recommended.


MaxKeepAliveRequests

100

Maximum requests per connection if KeepAlive is on.


KeepAliveTimeout

15

Maximum seconds to wait for a subsequent request on the same
connection. Lower this if you get close to
MaxClients.


RLimitCPU

soft,[max]

Soft and maximum limits for seconds per process.


RLimitMEM

soft,[max]

Soft and maximum limits for bytes per process.


RLimitNPROC

soft,[max]

Soft and maximum limits for number of processes.


LimitRequestBody

0

Maximum bytes in a request body (0=infinite). You
can limit uploaded file sizes with this.


LimitRequestFields

100

Maximum request header fields. Make sure this value is greater than
the number of fields in any of your forms.


LimitRequestFieldSize

8190

Maximum bytes in an HTTP header request field.


LimitRequestLine

8190

Maximum bytes in an HTTP header request line. This limits abnormally
large GET or HEAD requests, which may be hostile.

10.2.2.2.7 User directories


If you don't need to provide
user directories on your web
server, disable them:

UserDir disabled

You can support only some users:

UserDir disabled
UserDir enabled good_user_1 careful_user_2

If you want to enable all your users, disable
root and other system accounts:

UserDir enabled
UserDir disabled root

To prevent users from installing their own
.htaccess files, specify:

UserDir public_html
<Directory ~/public_html>
AllowOverride None
</Directory>


10.2.3. Robots and Spiders


Some hits to your web site will come
from programs called robots. Some of these
gather data for search engines and are also called
spiders. A well-behaved robot is supposed to
read and obey the robots.txt file in your
site's home directory. This file tells it which
files and directories may be searched. You should have a
robots.txt file in the top directory of each web
site. Exclude all directories with CGI scripts (anything marked as
ScriptAlias, such as
/cgi-bin), images, access-controlled content, or
any other content that should not be exposed to the world.
Here's a simple example:

User-agent: *
Disallow: /image_dir
Disallow: /cgi-bin

Many robots are spiders, used by web search engines to help catalogue
the Web's vast expanses. Good ones obey the
robots.txt rules and have other indexing
heuristics. They try to examine only static content and ignore things
that look like CGI scripts (such as URLs containing ?
or /cgi-bin). Web scripts can use the
PATH_INFO environment variable and Apache
rewriting rules to make CGI scripts search-engine friendly.
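Such a rewrite might look like this fragment (a sketch assuming
mod_rewrite is loaded; the /articles path and show.pl script are
hypothetical names):

```
# Serve /articles/123 by invoking /cgi-bin/show.pl with PATH_INFO
# of /123, so the URL looks static to a spider:
RewriteEngine on
RewriteRule ^/articles/(.*)$ /cgi-bin/show.pl/$1 [PT]
```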

The robot exclusion standard is documented at http://www.robotstxt.org/wc/norobots.html and
http://www.robotstxt.org/wc/robots.html.

Rude robots can be excluded with environment variables and access
control:

BrowserMatch ^evil_robot_name begone
<Location />
order allow,deny
allow from all
deny from env=begone
</Location>

An evil robot may lie about its identity in the
User-Agent HTTP request header and then make a
beeline to the directories it's supposed to ignore.
You can craft your robots.txt file to lure it
into a tarpit, which is described in the next section.

