Linux Server Security (2nd Edition) [Electronic resources]


Michael D. Bauer










3.1. OS Hardening Principles





Operating-system hardening can be time consuming and even confusing.
Like many OSes designed for a wide range of roles and user levels,
Linux has historically tended to be "insecure by
default": most distributions'
default installations are designed to present the user with as many
preconfigured and active applications as possible. Therefore,
securing a Linux system not only requires you to understand the inner
workings of your system; you may also have to undo work others have
done in the interest of shielding you from those inner workings!



Having said that, the principles of Linux hardening and OS hardening
in general can be summed up by a single maxim: "That
which is not explicitly permitted is forbidden." As
I mentioned in the previous chapter, this phrase was coined by Marcus
Ranum in the
context of building firewall rules and access-control lists. However,
it scales very well to most other information security endeavors,
including system hardening.



Another concept originally forged in a somewhat different context is
the Principle of Least Privilege. This was originally used by the
National Institute of Standards and
Technology (NIST) to describe the desired behavior of the
"Role-Based Access Controls" it
developed for mainframe systems: "a user [should] be
given no more privilege than necessary to perform a
job" (http://hissa.nist.gov/rbac/paper/node5l).



Nowadays people often extend the Principle of Least Privilege to
include applications; no application or process should have more
privileges in the local operating environment than it needs to
function. The Principle of Least Privilege and
Ranum's maxim sound like common sense (they
are, in my opinion). As they apply to system
hardening, the real work stems from these corollaries:



Install only necessary software; delete or disable everything else.



Keep all system and application software painstakingly up to date, at
least with security patches, but preferably with
all package-by-package updates.



Delete or disable unnecessary user accounts.



Don't needlessly grant shell access:
/bin/false should be the default shell for
nobody, guest, and any
other account used by services, rather than by an individual local
user.



Allow each service (networked application) to be publicly accessible
only by design, never by default.



Run each publicly accessible service in a
chrooted filesystem (i.e., a subset of
/).



Don't leave any executable file needlessly set to
run with superuser privileges, i.e., with its
SUID bit set (unless owned by a sufficiently
nonprivileged user); a quick way to audit for such files is sketched just after this list.



In general, avoid using root privileges
unnecessarily, and if your system has multiple administrators,
delegate root's authority via
sudo.



Configure logging and check logs regularly.



Configure every host as its own firewall; i.e., bastion hosts should
have their own packet filters and access
controls in addition to (but not instead of) the
firewall's.



Check your work now and then with a security scanner, especially
after patches and upgrades.



Understand and use the security features supported by your operating
system and applications, especially when they
add redundancy to your security fabric.



After hardening a bastion host, document its configuration so it may
be used as a baseline for similar systems and so you can rebuild it
quickly after a system compromise or failure.
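
For instance, acting on the SUID corollary usually starts with an inventory of every SUID file on the host. The following is a minimal sketch: the find command is standard, but the chmod target shown is only a hypothetical placeholder, not a recommendation about any particular program.

# list all files with the SUID bit set, ignoring permission errors
find / -type f -perm -04000 -ls 2>/dev/null

# remove the SUID bit from a program that doesn't need it
# (/usr/bin/someprogram is a hypothetical example)
chmod u-s /usr/bin/someprogram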




All of these corollaries are ways of implementing and enforcing the
Principle of Least Privilege on a bastion host.
We'll spend most of the rest of this chapter
discussing each in depth with specific techniques and examples.
We'll end the chapter by discussing Bastille Linux,
a handy tool with which Red Hat and Mandrake Linux users can automate
much of the hardening process.




3.1.1. Installing/Running Only Necessary Software





This is the most obvious of our submaxims/corollaries. But what does
"necessary" really mean? What if
you don't know whether a given
software package is necessary, especially if it was automatically
installed when you set up the system?



You have three allies in determining each package's
appropriateness:



Common sense

Manpages

Your Linux distribution's package manager (rpm on Red Hat and its derivatives, dpkg and dselect on Debian, and both yast and rpm on SUSE systems)

Common sense, for example, dictates that a firewall shouldn't be running apache and that a public FTP server doesn't need a C compiler. Remember, since our guiding principle is "that which is not expressly permitted must be denied," it follows that "that which is not necessary should be considered needlessly risky."

If you don't know what a given command or package does, the simplest way to find out is via a man lookup. All manpages begin with a synopsis of the described command's function. I regularly use manpage lookups both to identify unfamiliar programs and to refresh my memory on things I don't use but have a vague recollection of being necessary.




Division of Labor Between Servers





Put different services on different hosts
whenever possible. The more roles a single host plays, the more
applications you will need to run on it, and therefore the greater
the odds that it will be compromised.



For example, if a DMZ network contains a web server running Apache,
an FTP server running wuftpd, and an SMTP
gateway running postfix, a new vulnerability in
wuftpd will directly threaten the FTP server but
only indirectly threaten the other two systems. (If compromised, the
FTP server may be used to attack them, but the attacker
won't be able to capitalize on the same
vulnerability she exploited on the FTP server).



If that DMZ contains a single host running all three services, the
wuftpd vulnerability will, if exploited,
directly impact not only FTP functionality, but also World Wide Web
services and Internet email relaying.



If you must combine roles on a single system, aim for consistency.
For example, have one host support public WWW services along with
public FTP services, since both are used for anonymous file sharing,
and have another host provide DNS and SMTP since both are
"infrastructure" services. A little
division of labor is better than none.



In any case, I strongly recommend against using
your firewall as anything but a firewall.




If there's no manpage for the command/package (or if
you don't know the name of any command associated
with the package), try apropos
string for a list of related
manpages. The apropos command relies on a
database in /var/cache/man/, which may or may
not contain anything, depending on how recently you installed your
system; you may need to issue the command
makewhatis (Fedora, Red Hat) or mandb
-c (Debian, SUSE) before apropos
queries will return meaningful results.



If man or apropos fails to
help you determine a given package's purpose, your
distribution's package manager should at least be
able to tell you what other packages, if any,
depend on it. Even if this doesn't tell you what the
package does, it may tell you whether it's
necessary.



For example, in reviewing the packages on my Red Hat system, suppose
I see libglade installed but am not sure I need
it. As it happens, there's no manpage for
libglade, but I can ask rpm whether
any other packages depend on it (Example 3-1).



Example 3-1. Using man, apropos, and rpm to identify a package





[mick@woofgang]$ man libglade
No manual entry for libglade
[mick@woofgang]$ apropos libglade
libglade: nothing appropriate
[mick@woofgang]$ rpm -q --whatrequires libglade
memprof-0.3.0-8
rep-gtk-gnome-0.13-3

Aha...libglade is part of GNOME. If the system in question is a server, it probably doesn't need the X Window System at all, let alone a fancy frontend like GNOME, so I can safely uninstall libglade (along with the rest of GNOME).



SUSE also has the rpm command, so Example 3-1 is equally applicable to it. Alternatively,
you can invoke yast, navigate to Package Management → Change/Create Configuration, flag libglade for deletion, and
press F5 to see a list of any dependencies that will be affected if
you delete libglade.



Under Debian, dpkg has no simple means of
tracing dependencies, but dselect handles them
with aplomb. When you select a package for deletion (by marking it
with a minus sign), dselect automatically lists
the packages that depend on it, conveniently marking them for
deletion, too. To undo your original deletion flag, type
"X"; to continue (accepting
dselect's suggested additional
package deletions), press Return.
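
Incidentally, on Debian systems that also have apt installed, you can get a quick, noninteractive view of reverse dependencies without entering dselect; this is an aside rather than a dselect feature, and packagename below stands for whatever package you're investigating.

# the "Reverse Depends:" section of the output lists packages that depend on packagename
apt-cache showpkg packagename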



3.1.1.1 Commonly unnecessary packages





I recommend you not install
the X Window System on publicly accessible servers. Server
applications (Apache, ProFTPD, and Sendmail, to name a few) almost
never require X; it's extremely doubtful that your
bastion hosts really need X for their core functions. If a server is
to run "headless" (without a
monitor and thus administered remotely), it certainly
doesn't need a full X installation with GNOME, KDE,
etc., and probably doesn't need even a minimal one.



During Linux installation, deselecting X Window packages, especially
the base packages, will return errors concerning
"failed dependencies." You may be
surprised at just how many applications make up a typical X
installation. In all likelihood, you can safely deselect
all of these applications, in addition to X
itself.



When in doubt, identify and install the package as described
previously (and as much of the X Window System as it needs; skip
the fancy window managers) only if you're
positive you need it. If things
don't work properly as a result of omitting a
questionable package, you can always install the omitted packages
later.



Besides the X Window System and its associated window managers and
applications, another entire category of applications inappropriate
for Internet-connected systems is the
software development environment.
To many Linux users, it feels strange to install Linux without also
installing GCC, GNU Make, and at least enough other development tools
with which to compile a kernel. But if you can
build things on an Internet-connected server, so can a successful
attacker.



One of the first things any accomplished system cracker does upon
compromising a system is to build a
"rootkit,"
a set of standard Unix utilities such as ls,
ps, netstat, and
top, which appear to behave just like the
system's native utilities. Rootkit utilities,
however, are designed not to show directories,
files, and connections related to the attacker's
activities, making it much easier for said activities to go
unnoticed. A working development environment on the target system
makes it much easier for the attacker to build a rootkit
that's optimized for your system.



Of course, the attacker can still upload his own compiler, or
precompiled binaries of his rootkit tools. Hopefully,
you're running Tripwire or some other
system-integrity checker,
which will alert you to changes in important system files (see Chapter 11). Still, trusted internal systems, not
exposed public systems, should be used for developing and building
applications; the danger of making your bastion host
"soft and chewy on the inside"
(easy to abuse if compromised) is far greater than any convenience
you'll gain from doing your builds on it.



Similarly, there's one more type of application I
recommend keeping off of your bastion hosts:
network monitoring and scanning tools.
This should be obvious: tcpdump,
nmap, nessus, and other
tools we commonly use to validate system/network security have
tremendous potential for misuse.



As with development tools, security-scanning tools are infinitely
more useful to illegitimate users in this context than they are to
you. If you want to scan the hosts in your DMZ network periodically
(which is a useful way to
"check your work"), invest a few
hundred dollars in a used laptop system, which you can connect to and
disconnect from the DMZ as needed.



While any unneeded service should be either
deleted or disabled, the following deserve particular attention:



RPC services





Sun's Remote Procedure Call protocol (which is included on virtually all flavors of Unix) lets you centralize user accounts across multiple systems, mount remote volumes, and execute remote commands. But RPC isn't a very secure protocol, and you shouldn't be running these types of services on DMZ hosts anyhow.






Local processes sometimes require the RPC
"portmapper," a.k.a.
rpcbind. Disable this with care, and try
re-enabling it if other things stop working, unless those things are
all X-related. (You shouldn't be running X on any
publicly available server.)
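
Before disabling the portmapper, it can be reassuring to see what, if anything, is actually registered with it. A minimal sketch follows; the init-script name portmap is typical of Red Hat-style systems and may differ on yours.

# show RPC services currently registered with the local portmapper
rpcinfo -p

# if nothing you need is listed, prevent it from starting at boot
chkconfig portmap off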



r-services





rsh, rlogin, and
rcp allow remote shell sessions and file
transfers using some combination of username/password and
source-IP-address authentication. But authentication data is passed
in the clear and IP addresses can be spoofed, so these applications
are not suitable for DMZ use. If you need their functionality, use
Secure Shell (SSH), which was specifically designed as a replacement
for the r-services. SSH is covered in detail in Chapter 4.



Comment out the lines corresponding to any
"r-commands" in
/etc/inetd.conf.
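
In practice, that means finding lines like the following in /etc/inetd.conf, prepending a "#" to each, and then telling inetd to reread its configuration. The exact fields vary by distribution; this is just a representative sketch.

#shell   stream  tcp  nowait  root  /usr/sbin/tcpd  in.rshd
#login   stream  tcp  nowait  root  /usr/sbin/tcpd  in.rlogind
#exec    stream  tcp  nowait  root  /usr/sbin/tcpd  in.rexecd

killall -HUP inetd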




inetd





The Internet
Daemon is a handy way to use a single process (i.e.,
inetd) to listen on multiple ports and invoke
the services on whose behalf it's listening as
needed. On a bastion host, however, most of your important services
should be invoked as
persistent daemons: an FTP server, for
example, really has no reason not to run FTPD processes all the time.



Furthermore, most of the services enabled by default in
inetd.conf are unnecessary, insecure, or both.
If you must use inetd, edit
/etc/inetd.conf to disable all services you
don't need (or never heard of!). Many of the RPC
services I warned against earlier are started in
inetd.conf.




sendmail





Many people think that Sendmail, which is enabled by default on most
versions of Unix, should run continuously as a
daemon, even on hosts
that send email only to themselves (e.g., administrative messages
such as crontab output sent to root by the cron daemon). This is not so: sendmail (or postfix, qmail, etc.)
should be run as a daemon only on servers that must receive mail from
other hosts. (On other servers, run sendmail to send mail only as
needed; you can also execute sendmail -q as a cron
job to attempt delivery of queued messages periodically.) Sendmail is
usually started in /etc/rc.d/rc2.d or
/etc/rc.d/rc3.d.
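
If you take the queue-only approach, a root crontab entry along these lines flushes queued mail periodically without leaving a daemon listening on port 25 (the path and interval shown are typical; adjust them for your system):

# attempt delivery of any queued messages every 20 minutes
0,20,40 * * * * /usr/sbin/sendmail -q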




Telnet, FTP, and POP





These three protocols have one unfortunate characteristic in common:
they require users to enter a username and password, which are sent
in clear text over the network. Telnet and FTP are easily replaced
with ssh and its file-transfer utilities
scp and sftp; email can be
forwarded to a different host automatically, left on the DMZ host and
read through an ssh session, or downloaded via
POP using a "local forward" to
ssh (i.e., piped through an encrypted Secure
Shell session). All three of these services are usually invoked by
inetd; to disable them, edit
/etc/inetd.conf.
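
As a sketch of the "local forward" approach, the following tunnels a local port to the POP3 port on the DMZ host (the hostname is a placeholder); you then point your mail client at localhost port 1100 rather than at the DMZ host itself:

ssh -L 1100:localhost:110 mick@dmz-mailhost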





Remember, one of our operating assumptions in the DMZ is that hosts
therein are much more likely to be compromised than internal hosts.
When installing software, you should maintain a strict policy of
"that which isn't necessary may be
used against me." Furthermore, consider not only
whether you need a given application but also whether the host on
which you're about to install it is truly the best
place to run it (see "Division of Labor Between
Servers," earlier in this chapter).

3.1.1.2 Disabling services in Red Hat and related distributions



Perhaps there are certain software packages you want installed but
don't need right away. Or perhaps other things
you're running depend on a given package that has a
nonessential daemon you wish to disable.



If you run Red Hat, one of its derivatives (Mandrake,
Yellow Dog, etc.), or a recent version of SUSE, you should use
chkconfig
to manage startup services.
chkconfig is a simple tool whose options are
listed in Example 3-2.



Example 3-2. chkconfig usage message



[mick@woofgang mick]# chkconfig --help
chkconfig version 1.2.16 - Copyright (C) 1997-2000 Red Hat, Inc.
This may be freely redistributed under the terms of the GNU Public License.
usage: chkconfig --list [name]
chkconfig --add <name>
chkconfig --del <name>
chkconfig [--level <levels>] <name> <on|off|reset>

To list all the startup services on my Red Hat system, I simply enter
chkconfig --list. For each script in
/etc/rc.d, chkconfig lists
that script's startup status
(on or off) at each
runlevel. The output of Example 3-3 has been
truncated for readability.



Example 3-3. Listing all startup scripts' configuration





[root@woofgang root]# chkconfig --list
nfs 0:off 1:off 2:off 3:off 4:off 5:off 6:off
microcode_ctl 0:off 1:off 2:on 3:on 4:on 5:on 6:off
smartd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
isdn 0:off 1:off 2:on 3:on 4:on 5:on 6:off
(etc.)

To disable isdn in runlevel 2,
I'd execute the commands shown in Example 3-4.



Example 3-4. Disabling a service with chkconfig





[root@woofgang root]# chkconfig --level 2 isdn off
[root@woofgang root]# chkconfig --list isdn
isdn 0:off 1:off 2:off 3:off 4:off 5:off 6:off

(The second command, chkconfig --list isdn, is
optional but useful in showing the results of the first.) To remove
isdn's startup script from all runlevels,
I'd use the command:



chkconfig --del isdn
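
Keep in mind that chkconfig only changes what happens at boot (or when runlevels change); if the daemon is already running, stop it explicitly as well, e.g.:

/etc/init.d/isdn stop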

3.1.1.3 Disabling services in SUSE





SUSE Linux introduced a syntax-compatible
version of chkconfig in SUSE 8.1
(it's actually a frontend to its own
insserv command) but still uses its own format
for init scripts (Example 3-5).



Example 3-5. A SUSE INIT INFO header





# /etc/init.d/apache
#
### BEGIN INIT INFO
# Provides: apache httpd
# Required-Start: $local_fs $remote_fs $network
# X-UnitedLinux-Should-Start: $named $time postgresql sendmail mysql ypclient dhcp radiusd
# Required-Stop: $local_fs $remote_fs $network
# X-UnitedLinux-Should-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Short-Description: Apache httpd
# Description: Start the httpd daemon Apache
### END INIT INFO

For our purposes, the relevant settings are
Default-Start, which lists the runlevels in which
the script should be started, and Default-Stop,
which lists the runlevels in which the script should be stopped.
Actually, since any script started in runlevel 2, 3, or 5 is
automatically stopped when that runlevel is exited,
Default-Stop is often left empty.



To disable a service in SUSE 8.1 or later, you can use
chkconfig --del as described earlier in this
section. On earlier versions of SUSE, you must use insserv --remove. For example:



insserv --remove isdn

For more information about SUSE's particular version of the System V init script system, see SUSE's init.d(7) manpage.



3.1.1.4 Disabling services in Debian 3.0





Debian GNU/Linux has its own
command for manipulating startup scripts:
update-rc.d. While this command was designed
mainly to be invoked from installation scripts (i.e., within
deb packages), it's fairly
simple to use to remove an init script's runlevel
links. For example, to disable the startup script for
lpd, we'd use:



update-rc.d -f lpd remove

The -f tells update-rc.d to
ignore the fact that the script itself,
/etc/init.d/lpd, has not been deleted, which
update-rc.d would otherwise complain about.
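
As with chkconfig, removing the runlevel links only affects future boots; to stop a daemon that's already running, invoke its init script directly:

/etc/init.d/lpd stop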



3.1.1.5 Disabling services in other Linux distributions





On all other Linux distributions, you can disable a service simply by
deleting or renaming its links in the appropriate runlevel
directories under /etc/rc.d/. For example, if
you're configuring a web server that
doesn't need to be its own DNS server, you probably
want to disable BIND. The easiest way to do this without deleting
anything is by renaming all links made to the corresponding script in
/etc/init.d/ (Example 3-6).



Example 3-6. Disabling a startup script by renaming its symbolic links



[root@woofgang root]# mv /etc/rc.d/rc2.d/S30named /etc/rc.d/rc2.d/disabled_S30named
[root@woofgang root]# mv /etc/rc.d/rc3.d/S30named /etc/rc.d/rc3.d/disabled_S30named
[root@woofgang root]# mv /etc/rc.d/rc5.d/S30named /etc/rc.d/rc5.d/disabled_S30named

(Note that your named startup script may have a
different name and exist in different or additional subdirectories of
/etc/rc.d.)

3.1.2. Keeping Software Up to Date





It isn't enough to weed out unnecessary software:
all software that remains, including both the operating system itself
and "user-space" applications, must
be kept up to date. This is a more subtle
problem than you might think, since many Linux distributions offer
updates on both a package-by-package basis (e.g., the Red Hat Errata
web site) and in the form of new distribution revisions (e.g., new
CD-ROM sets).



What, then, constitutes "up to
date"? Does it mean you must immediately upgrade
your entire system every time your distribution of choice releases a
new set of CD-ROMs? Or is it okay simply to check the
distribution's web page every six months or so? In
my opinion, neither extreme is a good approach.



3.1.2.1 Distribution (global) updates versus per-package updates





The good news is that
it's seldom necessary to upgrade a system completely
just because the distribution on which it's based
has undergone an incremental revision (e.g., 7.2 to 7.3). The bad news is
that updates to individual packages should probably be applied
much more frequently than
that; if you have one or more Internet-connected systems, I
strongly recommend you subscribe to your
distribution's
security announcement mailing list and
apply each relevant security patch as soon as it's
announced.




Remember, the people who announce
"new" security vulnerabilities as a
public service are not always the first to discover them. The prudent
assumption for any such vulnerability is that the
"bad guys" already know about it
and are ready to exploit it if they find it on your systems.



Therefore, I repeat, the only way to minimize your exposure to
well-known vulnerabilities is to do the following:



Subscribe to your distribution's
security-announcement mailing list.



Apply each security patch immediately after receiving notice of it.



If no patch is available for an application with widely exploited
vulnerabilities, disable that application until
a patch is released.






A "global" revision to an entire
Linux distribution is not a security event in itself. Linux
distributions are revised to add new software packages, reflect new
functionality, and provide bug fixes. Security is hopefully enhanced,
too, but not necessarily. Thus, while there are various reasons to
upgrade to a higher numbered revision of your Linux distribution
(stability, new features, etc.), doing so won't
magically make your system more secure.



In general, it's good practice to stick with a given
distribution version for as long as its vendor continues to provide
package updates for it, and otherwise to upgrade to a newer (global)
version only if it has really compelling new features. In any Linux
distribution, an older but still supported version with all current
patches applied is usually at least as secure as the newest version
with patches and probably more secure than the
new version without patches.



In fact, don't assume that the CD-ROM set you just
received in the mail directly from SUSE, for example, has no known
bugs or security issues just because it's new. You
should upgrade even a brand-new operating system (or at least check
its distributor's web site for available updates)
immediately after installing it.



I do not advocate the practice of checking for
vulnerabilities only periodically and not worrying about them in the
interim; while better than never checking, this
strategy is simply not proactive enough. Prospective attackers
won't do you the courtesy of waiting until after
your quarterly upgrade session before striking. (If they do, then
they know an awful lot about your system and
will probably get in anyhow!) Therefore, I strongly recommend you get into the habit of applying
security-related patches and upgrades in an ad hoc manner, i.e.,
apply each new patch as soon as it's announced.



3.1.2.2 Whither X-based updates?





In subsequent sections of this chapter, I'll
describe methods of updating packages in Fedora, Red Hat, SUSE, and
Debian systems. Each of these distributions supports both automated
and manual means of updating packages, ranging from simple commands
such as rpm -Uvh ./mynewrpm-2.0.3.rpm (which works
in all rpm-based distributions: Red Hat, SUSE, etc.) to sophisticated
graphical tools such as yast2 (SUSE only).



Given that earlier in this chapter I recommended against installing
the X Window System on your bastion hosts, it may seem contradictory
for me to cover X-based update utilities. There are two good reasons
to do so, however:



For whatever reason, you may decide that you can't
live without X on one or more of your bastion hosts.



Just because you don't run X on a bastion host doesn't mean you can't run an X-based update tool on a host on the internal network, from which you can relay the updated packages to your bastion hosts via a less glamorous tool such as scp (see Chapter 4); a sketch of this relay step follows.
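
Here the hostname, account, and package name are placeholders: verify the package on the internal host, copy it over SSH, and install it on the bastion host.

rpm --checksig ./mynewrpm-2.0.3.rpm
scp ./mynewrpm-2.0.3.rpm admin@bastion:/var/tmp/
ssh admin@bastion "rpm -Uvh /var/tmp/mynewrpm-2.0.3.rpm"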





Should I Always Update?





Good system administrators make clear
distinctions between stable
"production" systems and volatile
"research and development" (R &
D) systems. One big difference is that on production systems, you
don't add or remove software arbitrarily. Therefore,
you may not feel comfortable applying every update for every software
package on your production system as soon as they're
announced.



That's probably prudent in many cases, but let me
offer a few guidelines:



Apply any update addressing a
"buffer-overflow" vulnerability
that could lead to remote users running arbitrary commands or gaining
unauthorized shell access to the system.



Apply any update addressing an "escalation of local
privileges" vulnerability, even if your
system has no shell users (e.g., it's
strictly a web server). The ugly fact is that a
buffer-overflow vulnerability on a normally shell-less server could
easily lead to an attacker gaining shell access. If that happens, you
won't want any known privilege-escalation
opportunities to be present.



A non-security-related update may be safely skipped, unless, of
course, that update is intended to fix some source of system
instability. (Attackers often intentionally induce instability in the
execution of more complex attacks.)
In my experience, it's relatively rare for a Linux
package update to affect system stability negatively. The only
exception to this is kernel updates: new major versions are nearly
always unstable until the fourth or fifth minor revision (e.g., avoid
kernel Version X.Y.0: wait
for Version X.Y.4 or
X.Y.5).





3.1.2.3 How to be notified of and obtain security updates: Red Hat





If you run Red Hat 6.2 or later, the officially recommended method
for obtaining and installing updates and bug/security fixes
(errata, in Red Hat's parlance)
is to register with the Red Hat Network and then either schedule
automatic updates on the Red Hat Network web site or perform them
manually using the command up2date. While all
official Red Hat packages may also be downloaded anonymously via FTP
and HTTP, Red Hat Network registration is necessary to use
up2date
to schedule automatic notifications and downloads from Red Hat.



At first glance, the security of this arrangement is problematic: Red
Hat encourages you to remotely store a list with Red Hat of the names
and versions of all your system's packages and
hardware. This list is transferred via HTTPS and can only be perused
by you and the fine professionals at Red Hat. In my opinion, however,
the truly security conscious should avoid providing essential system
details to strangers.



There is a way around this. If you can live
without automatically scheduled updates and customized update lists
from Red Hat, you can still use up2date to
generate system-specific update lists locally (rather than have them
pushed to you by Red Hat). You can then download and install the
relevant updates automatically, having registered no more than your
email address and system version/architecture with Red Hat Network.



First, to register with the Red Hat Network, execute the command
rhn_register. (If you aren't
running X, then use the --nox
flag: for example rhn_register --nox.)
In rhn_register's Step 2 screen
(Step 1 is simply a license click-through dialog),
you'll be prompted for a username, password, and
email address: all three are required. You will then be prompted to
provide as little or as much contact information as you care to
disclose, but all of it is optional.



In Step 3 (system profile: hardware), you should enter a profile
name, but I recommend you uncheck the box next
to "Include information about hardware and
network." Similarly, in the screen after that, I
recommend you uncheck the box next to
"Include RPM packages installed on this system in
my System Profile." By deselecting these two
options, you will prevent your system's hardware,
network, and software-package information from being sent to and
stored at Red Hat.



Now, when you click the "Next"
button to send your profile, nothing but your Red Hat Network
username/password and your email address will be registered. You can
now use up2date without worrying quite so much
about who possesses intimate details about your system.



Note there's one more useful Red Hat Network feature
you'll subsequently miss: automatic, customized
security emails. Therefore, be sure to subscribe to the
Redhat-Watch-list mailing list using the online form at
https://listman.redhat.com. This way,
you'll receive emails concerning all Red Hat bug and
security notices (i.e., for all software packages in all supported
versions of Red Hat), but since only official Red Hat notices may be
posted to the list, you needn't worry about Red Hat
swamping you with email. If you're worried anyhow, a
"daily digest" format is available
(in which all the day's postings are sent to you in
a single message).



Once you've registered with the Red Hat Network via
rhn_register (regardless of whether you opt to
send hardware/package info), you can run
up2date.
First, you need to configure up2date; this task
has its own command, up2date-config
(Figure 3-1). By default, both
up2date and up2date-config
use X, but like rhn_register, both support the
--nox flag if you prefer to
run them from a text console.




Figure 3-1. up2date-config





up2date-config is fairly self-explanatory, and
you should need to run it only once (though you may run it at any
time). A couple of settings, though, are worth noting. First is
whether up2date should verify each
package's cryptographic signature with
gpg. I highly recommend you use this feature
(it's selected by default), as it reduces the odds
that up2date will install any package that has
been corrupted or "Trojaned" by a
clever web site hacker.



Also, if you're downloading updates to a central
host from which you plan to "push"
(upload) them to other systems, you'll definitely
want to select the option "After installation, keep
binary packages on disk" and define a
"Package storage directory." You
may or may not want to select "Do not install
packages after retrieval." The equivalents of these
settings in up2date's
ncurses mode (up2date-config --nox) are
keepAfterInstall,
storageDir, and
retrieveOnly, respectively.




Truth be told, I'm leery of relying on automated
update tools very much, even up2date (convenient
though it is). Web and FTP sites are hacked all the time, including
Linux distributors' sites. Not long ago, the Debian
FTP site was hacked, and although no Debian software was altered that
time, it certainly could have been.



Therefore, if you use up2date,
it's essential you use its
gpg functionality as described earlier. One of
the great strengths of the rpm package format is
its support of embedded digital signatures, but these do you no good
unless you verify them (or allow up2date to
verify them for you).



The command to check an rpm package's signature manually is rpm --checksig /path/packagename.rpm. Note that both this command and up2date require you to have the package gnupg installed.





Now you can run up2date. As with
rhn_register and
up2date-config, you can use the
--nox flag to run it from a text console.
up2date uses information stored locally by
rhn_register to authenticate your machine to the
Red Hat Network, after which it downloads a list of (the
names/versions of) updates released since the last time you ran
up2date. If you specified any packages to skip
in up2date-config, up2date
doesn't bother checking for updates to those
packages. Figure 3-2 shows a screen from a file
server of mine on which I run custom kernels and therefore
don't care to download kernel
rpms.




Figure 3-2. Red Hat's up2date: skipping unwanted updates





After installing Red Hat, registering with the Red Hat Network,
configuring up2date and running it for the first
time to make your system completely current, you can take a brief
break from updating. That break should last, however, no longer than
it takes to receive a new security advisory email from
Redhat-Watch that's relevant to
your system.




Why Not Trust Red Hat?





I don't really have any reason
not to trust the Red
Hat Network; it's just that I don't
think it should be necessary to trust them.
(I'm a big fan of avoiding unnecessary trust
relationships!) Perhaps you feel differently. Maybe the Red Hat
Network's customized autoupdate and autonotification
features will mean the difference for you between keeping your
systems up to date and not. If so, then perhaps whatever risk is
involved in maintaining a detailed list of your system information
with the Red Hat Network is an acceptable one.



In my opinion, however,
up2date
is convenient and intelligent enough by itself to make even that
small risk unnecessary. Perhaps I'd think
differently if I had 200 Red Hat systems to administer rather than
two.



But I suspect I'd be even
more worried about remotely caching an entire
network's worth of system details. (Plus
I'd have to pay Red Hat for the privilege, since
each RHN account is allowed only one complimentary system
"entitlement"/subscription.) Far
better to register one system in the manner described earlier
(without sending details) and then use that system to push updates to
the other 199, using plain old rsync,
ssh, and rpm.



In my experience, the less information you needlessly share, the less
that will show up in unwanted or unexpected hands.




3.1.2.4 RPM updates for the extremely cautious





up2date's speed, convenience,
and automated signature checking are appealing. On the other hand,
there's something to be said for fully
manual application of security
updates. Updating a small number of packages really
isn't much more trouble with plain old
rpm than with up2date, and
it has the additional benefit of not requiring Red Hat Network
registration. Best of all from a security standpoint, what you see is
what you get: you don't have to rely on
up2date to relay faithfully any and all errors
returned in the downloading, signature-checking, and
package-installation steps.



Here, then, is a simple procedure for
applying manual updates to systems
running Red Hat, Mandrake, SUSE, and other
rpm-based distributions:



Download the new package.



The security advisory that notified you of the new packages also
contains full paths to the update on your
distribution's primary FTP site. Change directories
to where you want to download updates, and start your FTP client of
choice. For single-command downloading, you can use wget
(which of
course requires the wget package), e.g.:



wget -nd --passive-ftp ftp://updates.redhat.com/7.0/en/os/i386/rhs-printfilters-1.81-4.rh7.0.i386.rpm

Verify the package's gpg signature.



You'll need to have the
gnupg package installed on your system, and
you'll also need your
distribution's public package-signing key on your
gpg key ring. You can then use
rpm to invoke gpg via
rpm's
--checksig command, e.g.:



rpm --checksig ./rhs-printfilters-1.81-4.rh7.0.i386.rpm

Install the package using rpm's update command
(-U).



Personally, I like to see a progress bar, and I also like verbose
output (errors, etc.), so I include the -h and
-v flags, respectively. Continuing the example of
updating rhs-printfilters, the update command
would be:



rpm -Uhv ./rhs-printfilters-1.81-4.rh7.0.i386.rpm

Note that in both rpm usages, you may use
wildcards or multiple filenames to act on more than one package,
e.g.:



rpm --checksig ./perl-*

and then, assuming the signature checks were successful:



rpm -Uhv ./perl-*

3.1.2.5 Yum: a free alternative to up2date





If you
can't
afford Red Hat Network subscriptions, or if you've
got customized collections of RPMs to maintain at your site,
there's a new, free update utility in the RPM world,
called "Yum" (Yellow Dog
Updater, Modified). As its name implies, Yum evolved from the Yellow
Dog Updater (a.k.a. "yup"), which
was part of the Yellow Dog Linux distribution for Macintosh computers
(http://www.yellowdoglinux.com).
Whereas yup ran only on Yellow Dog (Macintosh) systems, Yum presently
works on Red Hat, Fedora, Mandrake, and Yellow Dog Linux (where
it's replaced yup).



In a nutshell, Yum does for RPM-based systems what
apt-get
does for Debian (see "How to be notified of and
obtain security updates: Debian," later in this
chapter): it provides a simple command that can be used to
automatically install or update a software package, after first
automatically installing and updating any other
packages necessary to satisfy the desired package's
dependencies.



Yum actually consists of two commands: yum is
the client command, and
yum-arch is a server-side command for creating
the header files necessary to turn a web or FTP server into a Yum
"repository."
yum-arch is out of scope for our purposes here
(I want to focus on using Yum for updating your base distribution),
but you need to use it if you want to set up a public Yum repository
(hooray for you!), a private Yum repository for packages you maintain
for local systems, or even for a non-networked Yum repository on your
hard drive. (yum-arch is very simple to use; the
yum-arch(8) manpage tells you everything you need to know.)

Unlike apt-rpm, Yum was designed specifically for the RPM package format. And, says Michael Stenner, "Yum is designed to be simple and reliable, with more emphasis on keeping your machine safe and stable than on client-side customization."

The official Yum download site is http://linux.duke.edu/projects/yum/download.ptml.
That site explains which version of Yum to download, depending on
which version of Red Hat or Fedora Linux you use. Note, however, that
if you're a Fedora user, Yum is part of
Fedora Core 2: the package
yum-2.0.7-1.1.noarch.rpm is on Disc 1 of your
Fedora installation CD-ROMs. If you use Mandrake 9.2, the package
yum-2.0.1-1mdk.noarch.rpm is included in the
distribution's contrib/i586
directory.



Note that Yum is written entirely in Python. Therefore, to
successfully install any Yum RPM, your system needs the Fedora/Red
Hat packages
python,
gettext,
rpm-python,
and
libxml2-python
(or their Mandrake equivalents). On one hand, installing a script
interpreter like Python or Perl on a bastion server runs contrary to
advice I gave earlier in this chapter. However, security always
involves tradeoffs: if Yum will make it easier for you to keep your
system's patchlevels current, then
it's justifiable to accept the risk associated with
installing Python.[1]

[1] After all, patching your system as soon as possible when security updates are released goes a long way in thwarting attacks by external users; the main risk of having compilers and interpreters on your system is that they could be used by an attacker after a successful attack.




So, from where can Yum pull its RPMs? Usually from a remote site via
the Internet; this being a security book, my emphasis here is on using Yum to grab security patches, so the rest of this section focuses on
network updates. In the interest of completeness, however, Yum
can read RPMs from local filesystems (or
"virtually local" filesystems such
as NFS mounts).



Whether on a remote server or a local one, the RPM collection must be
a "Yum repository": it must include
a directory called headers containing the RPM
header information with which Yum identifies and satisfies RPM
dependencies. Therefore, you can't arbitrarily point
Yum at just any old Red Hat mirror or Mandrake CD-ROM.
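
If you do maintain such a local (or NFS-mounted) repository, the corresponding block in /etc/yum.conf simply uses a file:// URL; the path below is a placeholder, and the directory must contain the headers subdirectory generated by yum-arch:

[local]
name=Locally maintained packages
baseurl=file:///var/local/yumrepo/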



If you use Fedora Core 1 or 2, you can use Yum with any Fedora
mirror. Since Yum is an officially supported update mechanism for
Fedora, Fedora mirrors are set up as Yum repositories. And did you
know about the Fedora Legacy Project? This branch of the Fedora
effort provides new security patches for legacy Red Hat distributions
(currently Red Hat 7.3, 8.0, and 9.0). Thus, many Fedora mirrors also
contain Red Hat updates, in the form of Yum repositories! See
http://fedoralegacy.org for more
information.



If in doubt, a limited but handy list of Yum repositories
for a variety of distributions is available at http://linux.duke.edu/projects/yum/repos/.
Each link in this list yields a block of text you can copy and paste
directly into your /etc/yum.conf file (which
we'll explore in depth shortly). If all else fails,
Googling for "mydistroname yum
repository" is another way to find repositories.



Configuring Yum is fairly simple; all you need to do is edit one
file, which is named, predictably,
/etc/yum.conf. Example 3-7
shows the default /etc/yum.conf file that comes
with Fedora Core 2's Yum RPM (links specified in
baseurl are subject to change).



Example 3-7. Fedora Core 2's /etc/yum.conf file





[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=fedora-release
tolerant=1
exactarch=1
[base]
name=Fedora Core $releasever - $basearch - Base
baseurl=http://download./pub/fedora/linux/core/$releasever/i386/os
[updates-released]
name=Fedora Core $releasever - $basearch - Released Updates
baseurl=http://download./pub/fedora/linux/core/updates/$releasever

As you can see, this file consists of a list of global variable
settings, followed by one or more [server] blocks
([base] and [updates-released]
in Example 3-7), each of which specifies settings
for a different type of RPM group. I'm not going to
cover every possible global or server-block setting;
that's what the yum.conf(5)
manpage is for. But let's discuss a few key
settings.



In the global section,
debuglevel
determines how verbose yum's
output is: this value may range from 0, for no
output, to 10, for maximum debugging output. The
default value of 2 is shown in Example 3-7. This debuglevel affects
only standard output, not Yum's logfile (whose
location is specified by logfile). Still, I like
to change this value to 4.



Also in the global section,
pkgpolicy
specifies how Yum should decide which version to use if a given
package turns up across multiple [server] blocks.
distroverpkg
specifies the name of your local release-file
package. Your release file (e.g.,
/etc/fedora-release or
/etc/redhat-release) contains the name and
version of your Linux distribution.



Each [server] block defines a set of RPMs.
Personally, I wish these were instead called
[package-type] blocks, since they
don't distinguish by server (a single block may
contain the URLs of many servers) but rather by RPM group. In Example 3-7, the [base] block contains
a single URL pointing to the main Fedora repository.

The /etc/yum.conf
file installed by your Yum RPM will
probably work fine, but you should augment each default URL (i.e.,
http://download.... in Example 3-7) with at least one mirror-site URL to minimize
the chance that your updates fail due to any one server being
unavailable. Just be sure to use your favorite web browser to
"test-drive" any URL you add to
yum.conf to make sure that it successfully
resolves to a directory containing a directory named
headers. Also, make sure your URL ends with a
trailing slash.



The other thing worth noting in Example 3-7 is that
one important [server] option is missing:
gpgcheck. Example 3-8 shows a corrected [base]
block that uses this option (links specified in
baseurl are subject to change):



Example 3-8. Customized [base] section





[base]
name=Fedora Core $releasever - $basearch - Base
baseurl=http://mirror.eas.muohio.edu/fedora/linux/core/$releasever/$basearch/os/
baseurl=http://download./pub/fedora/linux/core/$releasever/i386/os
gpgcheck=1
failovermethod=priority

Setting gpgcheck=1 causes Yum to check the GnuPG
signature in each RPM it downloads. For this to work,
you'll need the appropriate GnuPG keys incorporated
into your RPM database. On Fedora Core 2 systems, these keys were
installed on your system as part of the
fedora-release package. To copy them into your
RPM database, execute this command:



rpm --import /usr/share/doc/fedora-release-1/RPM-GPG*

The rpm --import command can also use a URL as its
argument, so if the GPG key of your Yum source is online, you can
also use the form:



rpm --import http://your.distro.homepage/GPGsignature

(where http://your.distro.homepage/GPGsignature should be replaced with a real URL.) This may seem like a hassle, but it's worth it.
There have been several intrusions at Linux
distributors' sites over the years that have
resulted in Trojaned or otherwise compromised software packages being
downloaded by unsuspecting users. As I mentioned earlier, taking
advantage of RPM's support for GnuPG signatures is
the best defense against such skulduggery.



The other notable revision made in Example 3-8 is
that I've specified
failovermethod=priority:
this tells Yum to try the URLs in this list in order, starting with
the one at the top. The default behavior
(failovermethod=roundrobin) is for Yum to choose
one of the listed URLs at random. Personally, I prefer the
priority method since it lets me prioritize
faster, closer repositories over my distribution's
primary site.



And now we come to the easy part: using the yum
command. There are two ways to run yum: manually
from a command prompt, or automatically via the
/etc/init.d/yum startup script.



If enabled (which you must do manually by issuing a
chkconfig --add yum command), this script simply
touches a runfile, /var/lock/subsys/yum, which
the cron.daily job yum.cron
checks for. If the script is enabled (i.e., if the runfile exists),
this cronjob runs the yum command to first check
for and install an updated Yum package, and then to check for and
install updates for all other system packages. In doing so,
yum will automatically and transparently resolve
any relevant dependencies: if an updated package depends on another
package, even if it didn't previously,
yum will retrieve and install the other package.



For many users, particularly hobbyists and home users, this is
powerful and useful stuff. However, automatically installing any
software, even if it only updates things you've
already installed, is risky. You really can't be
sure a given patch won't introduce different bugs or
otherwise impair system performance and reliability, unless you test
it before installing it in a production situation. Therefore, if your
server is part of any type of corporate or mission-critical scenario,
I recommend you run yum manually.



To see a list of available updates without installing anything, use
yum check-update
(Example 3-9).



Example 3-9. Checking for updates





[root@iwazaru-fedora etc]# yum check-update
Gathering header information file(s) from server(s)
Server: Fedora Core 1 - i386 - Base
Server: Fedora Core 1 - i386 - Released Updates
Finding updated packages
Downloading needed headers
getting /var/cache/yum/updates-released/headers/coreutils-0-5.0-34.1.i386.hdr
coreutils-0-5.0-34.1.i386 100% |=========================| 13 kB 00:01
Name Arch Version Repo
----------------------------------------------------------------------------------------
XFree86 i386 4.3.0-55 updates-released
XFree86-100dpi-fonts i386 4.3.0-55 updates-released
XFree86-75dpi-fonts i386 4.3.0-55 updates-released
XFree86-Mesa-libGL i386 4.3.0-55 updates-released
etc. -- output truncated for readability

To install a single update (plus any other updates necessary to
resolve dependencies), use yum update
packagename, e.g.:



yum update yum

That example actually updates Yum itself. If indeed there is an
updated version of the package yum available,
you'll be prompted whether to go ahead and install
it. If you're invoking yum from
a script and you want all such prompts to be automatically answered
"y", use the -y
flag, e.g.:



yum -y update yum

The yum check-update
command isn't mandatory before installing updates;
if you prefer, you can use the form yum update
directly. It performs the same checks as yum
check-update
prior to downloading and installing those
updates.



In the last sample command, we specified a single package to update:
yum itself. To initiate a complete update
session for all installed packages on your system, you can simply
omit the last argument (the package specification):



yum update

After Yum checks for all available updates and calculates
dependencies, it presents you with a list of all updates it intends
to download, and unless you used the -y flag, asks
you whether to download and install them.



And that's all you need to know to get started using
Yum to keep your system up to date! As you can see, all the real work
is in the setup; ordinary use of the yum command
is about as simple as it gets.



For the sake of completeness, here's a bonus tip:
you can install new packages with Yum, too (you
probably figured that out already). For any package contained in the
sources you've defined in
/etc/yum.conf, you can use the command
yum install packagename
to install the very latest version of that package plus anything it
depends on. For example, to install the FTP server package
vsftpd, you'd issue this
command:



yum install vsftpd

If you have any problems using Yum, ample help is available online.
An excellent FAQ can be
found at http://www.phy.duke.edu/~rgb/General/yum_HOWTO/yum_HOWTO/yum_HOWTO.html#toc1.
The unofficial Fedora FAQ at http://fedora.artoo.net/faq/ contains Yum
instructions; so does the
Fedora
HOWTO at http://www.fedora.us/wiki/FedoraHOWTO.



If none of those sites helps, there's also a Yum mailing list you can turn to.

3.1.2.6 How to be notified of and obtain security updates: SUSE





As with
so much else, automatic updates on SUSE systems can be handled
through yast. With every version of SUSE,
yast continues to improve, and in SUSE Versions
8.2 and later, yast provides a simple and quick
means of updating packages. In addition, SUSE has carefully mirrored
all the functionality of the X version of yast
in the text version; all of what I'm about to
describe applies equally to the X and text versions of
yast.



To use yast to automatically update all packages
for which new RPM files are available, start
yast and select Software → Online Update. You'll probably want to change "Installation source" from its default to a download mirror near you. One of the nicer innovations in yast v2 is "Automatic Update...". This will cause
yast to periodically check your preferred
download site for new updates, automatically download them, and,
optionally, install them. Personally I love this feature, but prefer
to use it with the option "Only Download
Patches" set. This causes patches to be downloaded
automatically but not installed until I manually run
yast Online Update. Unless you enjoy
"living on the edge," you
shouldn't patch a working system without making sure
the system will still work properly after patching (i.e., be sure to
monitor your system during and immediately after patching).



Unless you do opt for both automated patch downloading and
installation, you'll need to keep abreast of SUSE
security issues (so you'll know when to run
yast and install the patches it automatically
downloads). And the best way to achieve this is to subscribe to the
official SUSE security-announcement mailing list,
suse-security-announce. To subscribe, use the
online form at
http://www.suse.com/us/private/support/online_help/mailinglists/index.html.



Even if you don't use yast at
all (e.g., maybe you prefer to run rpm at the
command line), you can follow the instructions in the notice to
download the new package, verify its GNUpg signature (as of SUSE Linux Version 7.1, all SUSE RPMs are signed with SUSE's package-signing key), and install it manually as described earlier in "RPM updates for the extremely cautious."

3.1.2.7 SUSE's online-update feature



In addition to yast and
rpm, you can use
yast2
to update SUSE packages.[2] This method is
particularly useful for performing a batch update of your entire
system after installing SUSE. yast2 uses X by
default but will automatically run in ncurses
mode (i.e., with an ASCII interface structured identically to the X
interface) if the environment variable DISPLAY
isn't set.



[2] Now
that yast2 is SUSE's default
setup tool (rather than yast), recent versions
of SUSE have a symbolic link from /sbin/yast to
/sbin/yast2. On such systems, the two commands
(yast and yast2) are
therefore interchangeable.




In yast2, start the Software applet and select
Online Update. You have the choice of either an automatic update in
which all new patches are identified, downloaded, and installed or a
manual update in which you're given the choice of
which new patches should be downloaded and installed (Figure 3-3). With either option, you can click the Expert
button to specify an FTP server other than the default download site.


Figure 3-3. Selecting patches in yast2



Checking Package Versions





To see a list of all currently
installed packages and their version numbers on your RPM-based
system, use this command:



rpm -qa

To see if a specific package is installed, pipe this command to
grep, specifying part or all of the
package's name. For example:



rpm -qa |grep squid

on my SUSE 7.1 system returns this output:



squid23-2.3.STABLE4-75

The equivalent commands for deb-package-based
distributions such as Debian would be dpkg -l and
dpkg -l |grep squid, respectively. Of course,
either command can be redirected to a file for later reference (or
off-system archival, e.g., for crash or compromise recovery)
like this:



rpm -qa > packages_07092002.txt


Overall, yast2's Online Update
functionality is simple and fast. The only error
I've encountered running it on my two SUSE servers
was the result of invoking yast2 from an xterm
as an unprivileged user: yast2 claimed that it
couldn't find the update list on the update server, which wasn't
exactly true. The real problem was that yast2
couldn't write that file
locally where it needed to because it was running with my
non-root privileges.



Invoking yast2 from a window-manager menu (in
any window manager that susewm configures)
obviates this problem: you will be prompted for the
root password if you aren't
running X as root. Running X as
root, of course, is another workaround, but not
one I recommend due to the overall insecurity of X. A better approach
is to open a terminal window, su to root by
using the command su -, and then run the command
yast2. By su-ing with the
"-" (hyphen),
you'll set all your environment variables to
root's default values,
including DISPLAY.



3.1.2.8 How to be notified of and obtain security updates: Debian



As
is typical of Debian GNU/Linux, updating Debian packages is less
flashy yet simpler than with most other distributions. The process
consists mainly of two commands (actually, one command,
apt-get, invoked twice but with different
options):



apt-get update
apt-get -u upgrade

The first command, apt-get update, updates your
locally cached lists of available packages (which are stored, if
you're curious, in
/var/state/apt/lists). This is necessary for
apt-get to determine which of your currently
installed packages have been updated.



The second command, apt-get -u upgrade, causes apt-get to actually fetch and install the
new versions of your local outdated packages. (The
-u flag tells apt-get to
display a list of upgraded packages.) Note that as with most other
Linux package formats, the deb format includes
pre- and post-installation scripts; therefore, it
isn't necessarily a good idea to run an
apt-get upgrade unattended, since one or more
scripts may prompt you for configuration information.



That's really all there is to it! Naturally, errors
are possible: a common cause is outdated FTP/HTTP links in
/etc/apt/sources.list. If
apt-get seems to take too long to fetch package
lists, or reports that it can't find files,
try deleting or replacing the sources.list entry
corresponding to the server that apt-get was
querying before it returned the error. For a current list of
Debian download sites worldwide, see
http://www.debian.org/distrib/ftplist.
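Entries in sources.list are one line each; the following sketch shows the general format (these particular mirrors are illustrative only, so substitute servers near you from the list just mentioned):

# /etc/apt/sources.list (illustrative entries only)
deb http://ftp.us.debian.org/debian/ stable main contrib non-free
deb http://security.debian.org/ stable/updates main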



Another common error is new dependencies (ones that
didn't apply when you originally installed a given
package), which will cause apt-get to skip the
affected package. This is fixed by simply invoking
apt-get again, this time telling it to install
the package plus any others on which it depends.



For example, suppose that in the course of an upgrade session,
apt-get reports that it's
skipping the package blozzo. After
apt-get finishes the rest of the upgrade
session, you can get a detailed view of what you're
getting into (in resolving
blozzo's dependencies) by
typing the command:



apt-cache show blozzo
If you next type:



apt-get install blozzo
apt-get
will attempt to install the latest version of
blozzo and will additionally do a more thorough
job of trying to resolve its dependencies. If your old version of
blozzo is hopelessly obsolete, however, it may
be necessary to upgrade your entire distribution; this is done with
the command apt-get -u dist-upgrade.



Detailed instructions on using apt-get can be
found in the apt-get(8) manpage and in the APT
HOWTO (available at http://www.debian.org/doc/manuals/apt-howto).



To receive prompt, official notification of Debian security fixes,
subscribe to the debian-security-announce email
list. An online subscription form is available at http://www.debian.org/MailingLists/subscribe.




Unfortunately, the deb package format
doesn't currently support GnuPG signatures, or even
MD5 hashes; nor are external hashes or GnuPG signatures maintained or
checked. Therefore, be careful to stick to official Debian FTP mirror
sites when using apt-get.



Reportedly, a future version of the deb package
format will support GnuPG signatures.






3.1.3. Deleting Unnecessary User Accounts and Restricting Shell Access





One of the popular
distributions'
more annoying quirks is the inclusion of a long list of entries in
/etc/passwd for application-specific user
accounts, regardless of whether those applications are even
installed. (For example, my SUSE 7.1 system created 48 entries during
installation!) While few of these are privileged accounts, many can
be used for interactive login (i.e., they specify a real shell rather
than /bin/false). This is not unique to SUSE: my
Red Hat 7.0 system created 33 accounts during installation, and my
Debian 2.2 system installed 26.



While it's by no means certain that a given unused
account can and will be targeted by attackers, I personally prefer to
err on the side of caution, even if that makes me look superstitious
in some people's eyes. Therefore, I recommend that
you check /etc/passwd and comment out any
unnecessary entries.



If you aren't sure what a given account is used for
but see that account has an actual shell specified, one way to
determine whether an account is active is to see whether it owns any
files and, if so, when they were last modified. This is easily
achieved using the find command.



Suppose I have a recently installed web server whose
/etc/passwd file contains, among many others,
the following entry:



yard:x:29:29:YARD Database Admin:/usr/lib/YARD:/bin/bash
I have no idea what the YARD database might be used for. Manpage
lookups and rpm queries suggest that it
isn't even installed. Still, before I comment out
yard's entry in
/etc/passwd, I want to make sure the account
isn't active. It's time to try
find / -user and ls
-lu
(Example 3-10).



Example 3-10. Using find with the -user flag



root@woofgang:~ # find / -user yard -print
/usr/lib/YARD
root@woofgang:~ # ls -lu /usr/lib/YARD/
total 20
drwxr-xr-x 2 yard yard 35 Jan 17 2001 .
drwxr-xr-x 59 root root 13878 Dec 13 18:31 ..



As we see in Example 3-10, yard
owns only one directory, /usr/lib/YARD, and
it's empty. Furthermore, according to ls
-lu
(which displays files' last-access times rather than modification times), the
directory hasn't been accessed since January 17.
Since the system was installed in October, this date must refer to
the directory's creation on my installation media by
SUSE! Clearly, I can safely assume that this account
isn't in use.



Some accounts that are usually
necessary if present are as follows:



root bin daemon halt shutdown man at
Some accounts that are often unnecessary, at
least on bastion hosts, are as follows:



uucp games gdm xfs rpcuser rpc
If nothing else, you should change the final field (default shell),
in unknown or process-specific accounts' entries in
/etc/passwd, from a real shell to
/bin/false; only accounts used by human beings
should need shells.
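For example, to point a service account at /bin/false (the news account is used here purely as an illustration; first verify that nothing on your system legitimately needs that account to have a shell):

usermod -s /bin/false news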




3.1.4. Restricting Access to Known Users





Some FTP daemons allow
anonymous
login by default. If your FTP server is intended to provide public
FTP services, that's fine, but if it
isn't, there's no good reason to
leave anonymous FTP enabled.



The same goes for any other service running on a publicly accessible
system: if that service supports but doesn't
actually require anonymous connections, the service should be
configured to accept connections only from authenticated, valid
users. Restricting access to FTP, HTTP, and other services is
described in subsequent chapters.




3.1.5. Running Services in chrooted Filesystems





One of our most important threat models is that of the
hijacked daemon: if a malicious user
manages to take over and effectively
"become" a process on our system,
he will assume the privileges on our system that that process has.
Naturally, developers are always on the alert for vulnerabilities,
such as buffer overflows, that compromise their applications, which
is why you must keep on top of your distribution's
security advisories and package updates.



However, it's equally important to mitigate the risk
of potential
daemon
vulnerabilities, i.e., vulnerabilities that might be unknown to
anyone but the "bad guys." There
are two primary means of doing so: running the process with as low a
set of privileges as possible (see the next section) and running the
process in a chroot jail.



Normally, a process can see and interact with as much of a
system's filesystem as the user account under which
the process runs. Since most of the typical Linux
host's filesystem is world-readable, that amounts to
a lot of real estate. The
chroot system call functionally
transposes a process into a subset of the filesystem, effectively
redefining the / directory for that process to a
small subdirectory under the real root.



For example, suppose a system has the following filesystem hierarchy
(see Figure 3-4).




Figure 3-4. Example filesystem hierarchy





For most processes and users, configuration files are found in
/etc, commands are found in
/usr/bin, and various
"volatile" files such as logs are
found in /var. However, we
don't want our DNS daemon,
named, to
"see" the entire filesystem, so we
run it chrooted to /var/named. Thus, from
named's perspective,
/var/named/etc is /etc,
/var/named/usr/bin is
/usr/bin, and
/var/named/var appears as
/var. This isn't a foolproof
method of containment, but it helps.



Many important network
daemons now support command-line flags
and other built-in means of being run chrooted. Subsequent chapters
on these daemons describe in detail how to use this functionality.



(Actually, almost any process can be run chrooted if invoked via the
chroot command, but this usually requires a much
more involved chroot jail than do commands with built-in chroot
functionality. Most applications are compiled to use shared libraries
and won't work unless they can find those libraries
in the expected locations. Therefore, copies of those libraries must
be placed in particular subdirectories of the chroot jail.)
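Here's a minimal sketch of what that entails, using /bin/ls and a hypothetical jail directory /var/jail; ldd shows which shared libraries the binary needs, and each of them (plus the dynamic linker) must be copied into the corresponding path inside the jail:

mkdir -p /var/jail/bin /var/jail/lib
cp /bin/ls /var/jail/bin/
ldd /bin/ls                                           # lists the libraries ls needs, e.g. libc.so.6
cp /lib/libc.so.6 /lib/ld-linux.so.2 /var/jail/lib/   # copy *every* library ldd listed
chroot /var/jail /bin/ls /                            # ls now sees /var/jail as its root directory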


chroot is not an absolute control: a
chroot
jail can be subverted via techniques such as using a hard link that
points outside of the chroot jail or by using
mknod to access the hard disk directly. However,
since none of these techniques is very easy to execute without
root privileges, chroot is a useful tool for
hindering an attacker who has not yet achieved
root privileges.






3.1.6. Minimizing Use of SUID root





Normally, when you execute a command or application, it runs with
your user and group privileges. This is how file and directory
permissions are enforced: when I, as user mick,
issue the command ls /root, the system
doesn't really know that mick
is trying to see what's in
root's home directory. It knows
only that the command ls, running with
mick's privileges, is trying to
exercise read privileges on the directory /root.
/root probably has permissions
drwx------; so unless
mick's UID is zero, the command
will fail.



Sometimes, however, a command's permissions include
a set user-ID (SUID) bit or a
set group-ID (SGID) bit, indicated by an
s where normally there would be an
x (see Example 3-11).



Example 3-11. A program with its SUID bit set



-rwsr-xr-x 1 root root 22560 Jan 19 2001 crontab
This causes that command to run not with the privilege level of the
user who executed it but of the user or group
who owns that command. If the
owner's user or group ID is 0
(root), the command will run with superuser
privileges no matter who actually executes it.
Needless to say, this is extremely dangerous!



The SUID and SGID bits are most often used for commands and daemons
that normal users might need to execute but that also need access to
parts of the filesystem not normally accessible to those users. For
some utilities like su and
passwd, this is inevitable: you
can't change your password unless the command
passwd can alter
/etc/shadow (or
/etc/passwd), but obviously, these files
can't be directly writable by ordinary users. Such
utilities are very carefully coded to make them nearly impossible to
abuse.



Some applications that run SUID or SGID have only limited need of
root privileges, while others needn't really be run
by unprivileged users. For example, mount is
commonly run SUID root, but on a server-class
system, there's no good reason for anybody but
root to be mounting and unmounting volumes, so
mount can therefore have its SUID bit unset.



3.1.6.1 Identifying and dealing with SUID root files



The simplest way to identify files with their
SUID and SGID bits set is with the
find command. To find all
root-owned regular files with SUID and SGID set,
we use the following two commands:



find / -perm +4000 -user root -type f -print
find / -perm +2000 -group root -type f -print
If you determine that a file thus identified doesn't
need to run SUID/SGID, you can use this command to unset SUID:



chmod u-s /full/path/to/filename
and this command to unset SGID:



chmod g-s /full/path/to/filename
Note that doing so will replace the SUID or SGID permission with a
normal x: the file will still be executable, just
not with its owner's/group's
permissions.
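As with the package list you archived earlier in this chapter, it's also worth keeping a baseline of your SUID-root files for later comparison; here's a simple sketch (the filename is arbitrary):

find / -perm +4000 -user root -type f -print > suid_baseline.txt
# ...later, after patches or other changes, check for newly appeared SUID-root files:
find / -perm +4000 -user root -type f -print | diff suid_baseline.txt -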




Delegating root's Authority



If your bastion host is going to be administered by more than one
person, do everything you can to limit use of the
root password. In other words, give
administrators only as much privilege as they need to perform their
jobs.



Too often, systems are configured with only two basic privilege
levels: root and everyone else. Use groups and
group permissions wherever possible to delineate different roles on
your system with more granularity. If a user or group needs
root privileges to execute only a few commands,
use sudo to grant them this access without
giving them full root privileges.




Bastille Linux, the hardening utility
covered later in this chapter, has an entire module devoted to
unsetting SUID and SGID bits. However, Bastille deals only with some
SUID files common to many systems; it doesn't
actually identify all SUID/SGID files specific to your system.
Therefore, by all means use Bastille to streamline this process, but
don't rely solely on it.




3.1.7. Using su and sudo





Many new Linux users,
possibly because they often run single-user systems, fall into the
habit of frequently logging in as root. But
it's bad practice to log in as
root in any context other than direct console
access (and even then it's a bad habit to get into,
since it will be harder to resist in other contexts). There are
several reasons why this is so:




Eavesdroppers



Although the whole point of SSH is to make
eavesdropping
unfeasible, if not impossible, there have been a couple of nearly
feasible man-in-the-middle attacks over the years. Never assume
you're invincible: if someday someone finds some
subtle flaw in the SSH protocol or software you're
using and successfully reconstructs one of your sessions,
you'll feel pretty stupid if in that session you
logged in as root and unknowingly exposed your
superuser password, simply to do something trivial like browse Apache
logs.




Operator error



In the hyperabbreviated world of Unix, typing errors can be deadly.
The less time you spend logged in as root, the
less likely you'll accidentally erase an entire
volume by typing one too many forward slashes in an
rm command.





Local attackers



This book is about bastion hosts, which tend to not have very many
local user accounts. Still, if a system cracker compromises an
unprivileged account, they will probably use it as a foothold to try
to compromise root, too, which may be harder for
them to do inconspicuously if you seldom log in as
root.





su and
sudo
can help minimize the time you spend logged on as or operating with
root privileges.



3.1.7.1 Using su





You're probably familiar with
su,
which lets you escalate your privileges to root
when needed and demote yourself back down to a normal user when
you're done with administrative tasks. This is a
simple and excellent way to avoid logging in as
root, and you probably do it already.



Many people, however, aren't aware that
it's possible to use su to
execute single commands rather than entire shell sessions. This is
achieved with the -c flag. For example, suppose
I'm logged in as mick but want
to check the status of the local Ethernet interface (which normally
only root can do). See Example 3-12 for this scenario.



Example 3-12. Using su -c for a single command



[mick@kolach mick]$ su -c "ifconfig eth0" -
Password: (superuser password entered here)
eth0 Link encap:Ethernet HWaddr 00:10:C3:FE:99:08
inet addr:192.168.201.201 Bcast:192.168.201.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:989074 errors:0 dropped:0 overruns:0 frame:129
TX packets:574922 errors:0 dropped:0 overruns:0 carrier:0
[mick@kolach mick]$
If logging in as an unprivileged user via SSH and only occasionally
su-ing to root is admirable
paranoia, then doing that but using su for
single commands is doubly so.



3.1.7.2 Using sudo





su is part of every flavor of
Linux (indeed, every flavor of Unix, period). But
it's a little limited: to run a shell or command as
another user, su requires you to enter that
user's password and essentially become that user
(albeit temporarily). But there's an even better
command you can use, one that probably isn't part of
your distribution's core installation but probably
is somewhere on its CD-ROM:
sudo,
the "superuser do." (If for some
reason your Linux of choice doesn't have its own
sudo package,
sudo's latest source-code
package is available at http://www.courtesan.com/sudo/.)
sudo lets you run a specific privileged command
without actually becoming root, even
temporarily. Unlike with su -c, authority can thus
be delegated without having to share the root
password. Example 3-13 demonstrates a typical
sudo scenario.



Example 3-13. Using sudo to borrow authority



[mick@kolach mick]$ sudo ifconfig eth0
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these two things:
#1) Respect the privacy of others.
#2) Think before you type.
Password: (mick's password entered here)
eth0 Link encap:Ethernet HWaddr 00:10:C3:FE:99:08
inet addr:192.168.201.201 Bcast:192.168.201.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:989074 errors:0 dropped:0 overruns:0 frame:129
TX packets:574922 errors:0 dropped:0 overruns:0 carrier:0
collisions:34 txqueuelen:100
Interrupt:3 Base address:0x290 Memory:d0000-d4000
[mick@kolach mick]$
Just like with su -c, we started out as
mick and ended up as mick
again. Unlike with su -c, we
didn't have to be root while
running ifconfig. This is very cool, and
it's the way true paranoiacs prefer to operate.



Less cool, however, is the fact that sudo
requires some manpage look-ups to configure properly (in most
people's cases, many manpage look-ups). This is due
to sudo's flexibility.
(Remember what I said about flexibility bringing complexity?) I'll save you the first couple of manpage look-ups
by showing and dissecting the two-line configuration file needed to
achieve Example 3-13 (i.e., setting up a single
user to run a single command as root). The file
in question is /etc/sudoers, but you
don't really need to remember this, since you
aren't supposed to edit it directly anyhow: you need
to run the command visudo.
visudo looks and behaves (and basically is)
vi, but before allowing you to save your work,
it checks the new sudoers file for syntax errors
(see Example 3-14).



Example 3-14. Simple visudo session





# sudoers file.
#
# This file MUST be edited with the 'visudo' command as root.
# See the sudoers manpage for the details on how to write a sudoers file.
#
# Host, User, and Cmnd alias specifications not used in this example,
# but if you use sudo for more than one command for one user you'll want
# some aliases defined [mdb]
# User privilege specification
root ALL=(root) ALL
mick ALL=(root) /sbin/ifconfig
The last two lines in Example 3-14 are the ones that
matter. The first translates to
"root may, on all systems, run
as root any command." The
second line is the one we'll dissect.



Each sudoers line begins with the user to whom
you wish to grant temporary privileges, in this case,
mick. Next comes the name of the system(s) on
which the user will have these privileges, in this example,
ALL (you can use a single
sudoers file across multiple systems). Following
an = sign is the name, in parentheses, of the
account under whose authority the user may act,
root. Finally comes the command the user may
execute, /sbin/ifconfig.



It's extremely important that the
command's full path be given; in fact,
visudo won't let you specify a
command without its full path. Otherwise, it would be possible for a
mischievous user to copy a forbidden command to their home directory,
change its name to that of a command sudo lets
them execute, and thus run rampant on your system.



Note also that in Example 3-14, no flags follow the
command, so mick may execute
/sbin/ifconfig with whichever flags
mick desires, which is, of course, fine with me,
since mick and root are one
and the same person. If/when you use sudo to
delegate authority in addition to minimizing your own use of
root privileges, you'll
probably want to specify command flags.



For example, if I were root but not
jeeves (e.g., root=me,
jeeves=one of my minions), I might want this
much less trustworthy jeeves to view but not
change network-interface settings. In that case, the last line of
Example 3-14 would look like this:



jeeves ALL=(root) /sbin/ifconfig -a
This sort of granular delegation is highly recommended if you use
sudo for privilege delegation: the more
unnecessary privilege you grant non-root
accounts, the less sudo is actually doing for
you.
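As the comment block in Example 3-14 hints, command aliases make this easier to manage once you delegate more than one command to a user. A hypothetical sudoers fragment (edited, as always, via visudo) might look like this:

# Hypothetical example: jeeves may view interface and routing info, nothing more
Cmnd_Alias NETVIEW = /sbin/ifconfig -a, /sbin/route -n
jeeves ALL=(root) NETVIEW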

3.1.8. Configuring, Managing, and Monitoring Logs





This is something we should do but often fail to follow through on.
You can't check logs that don't
exist, and you can't learn anything from logs you
don't read. Make sure your important services are
logging at an appropriate level, know where those logs are stored and
whether/how they're rotated when they get large, and
get in the habit of checking the current logs for anomalies.



Chapter 12 is all about setting up, maintaining,
and monitoring system logs. If you're setting up a
system right now as you read this, I highly
recommend you skip ahead to Chapter 12 before
you go much further.




3.1.9. Every System Can Be Its Own Firewall: Using iptables for Local Security





In my
opinion, the best Linux tool for logging and controlling access to
local daemons is the same one we
use to log and control access to the network: iptables
(or
ipchains,
if you're still using a 2.2 kernel).
I've said that it's beyond the
scope of this book to cover Linux firewalls in depth, but
let's examine some examples of using iptables to
enhance local security.[3]

[3] For an in-depth guide to
building Linux firewalls using both ipchains and
iptables/netfilter, I highly recommend Robert
Ziegler's book,
Linux Firewalls (New Riders).




We're about to dive pretty deeply into TCP/IP
networking. If you're uncomfortable with the
concepts of ports, TCP flags, etc., you need to do some remedial
reading before proceeding. Do not simply shrug and say,
"Oh well, so much for packet
filtering." The whole point of this book is to help you protect your
Internet-connected servers: if you're serious about
that, then you need to understand how the Internet Protocol and its
supporting subprotocols work.




Craig Hunt's book
TCP/IP Network Administration
(O'Reilly) is one of the very best ground-up
introductions to this subject. Chapter 1 and Chapter 2 of
Hunt's book tell you most of what you need to know
to comprehend packet filtering, all in the space of 50 pages of
well-illustrated and lucid prose.





3.1.9.1 Using iptables: Preparatory steps





First, you need a kernel compiled with netfilter, Linux
2.4's packet filtering code. Most
distributions' stock 2.4 kernels should include
support for netfilter and its most important
supporting modules. If you compile your own kernel, though, this
option is listed in the
"networking" section of the
make menuconfig GUI and is called
"Network Packet Filtering."

netfilter
refers to the packet-filtering code in the Linux 2.4 kernel. The
various components of netfilter are usually compiled as kernel
modules.




iptables is a command for configuring and
managing your kernel's netfilter modules. These
modules may be altered via system calls made by any
root-privileged application, but in practice
nearly everyone uses iptables for this purpose;
therefore, iptables is
often used as a synonym for netfilter.





In addition, under the subsection IP: Netfilter Configuration, you
should select Connection Tracking, IP tables support, and, if
applicable, FTP protocol support and IRC protocol support. Any of the
options in the Netfilter Configuration subsection can be compiled
either statically or as modules.



(For our purposes, i.e., for a server rather than a
gateway, you should not need any of the NAT
or Packet Mangling modules.)

Second, you need the
iptables command. Your distribution of choice,
if recent enough, almost certainly has a binary package for this;
otherwise, you can download its source code from http://netfilter.samba.org. Needless to say,
this code compiles extremely easily on Linux systems (good thing,
since iptables and netfilter are supported only on Linux).



Third, you need to formulate a high-level access policy for your
system. Suppose you have a combination FTP and WWW server that you
need to bastionize. It has only one (physical) network interface, as
well as a routable IP address in our DMZ network (Figure 3-5).




Figure 3-5. Example network architecture





Table 3-1 shows a simple but complete example
policy for this bastion host (not for the
firewall, with which you should not confuse it).



Table 3-1. High-level access policy for a bastion host

Routing/forwarding:         none
Inbound services, public:   FTP, HTTP
Inbound services, private:  SSH
Outbound services:          ping, DNS queries


Even such a brief sketch will help you create a much more effective
iptables configuration than if you skip this step;
it's analogous to sketching a flowchart before
writing a C program.



Having a plan before writing packet filters is important for a couple
of reasons. First, a packet-filter configuration needs to be the
technical manifestation of a larger security policy. If
there's no larger policy, then you run the risk of
writing an answer that may or may not correspond to an actual
question.



Second, this stuff is complicated and very difficult to improvise.
Enduring several failed attempts and possibly losing productivity as
a result may cause you to give up altogether. Packet filtering at the
host level, though, is too important a tool to abandon unnecessarily.



Returning to Table 3-1, we've
decided that all inbound FTP and HTTP traffic will be permitted, as
will administrative traffic via inbound SSH (see Chapter 4 if you don't know why this
should be your only means of remote administration). The server
itself will be permitted to initiate outbound
pings (for diagnostic purposes) and DNS queries
so our logs can contain hostnames and not just IP addresses.




You might be tempted to allow all outbound
services, which (unfortunately) is a common practice: you can trust
your own system, right? Well, not
necessarily: in a buffer-overflow attack, the attacker may
attempt to initiate a connection from your system back to hers. (This
can happen when, in security-bulletin parlance, a vulnerability
"may permit arbitrary commands to be
executed.") It's true that if you're subject to
a "remote root" vulnerability, the
attacker could simply reconfigure your firewall rules to allow the
outbound connection. However, not all buffer-overflow vulnerabilities
involve root access. In
non-remote-root attack scenarios, a restrictive
firewall policy will significantly hamper the
attacker. Besides, on a bastion host, it just isn't
that big a deal to figure out precisely what you need to allow out
(so that you can block the rest).





Our next task is to write iptables commands that
will implement this policy. First, a little background.



3.1.9.2 How netfilter works





Linux 2.4's
netfilter code provides the Linux kernel
with "stateful"
(connection-tracking) packet filtering, even for the complex FTP and
IRC application protocols. This is an important step forward for
Linux: the 2.2 kernel's ipchains firewall code was
not nearly as sophisticated.



In addition, netfilter has powerful Network Address Translation (NAT)
features, the ability to "mangle"
(rewrite the headers of) forwarded packets, and support for filters
based on MAC addresses (Ethernet addresses) and on specific network
interfaces. It also supports the creation of custom
"chains" of filters, which can be
matched against, in addition to the default chains.



The bad news is that this means it takes a lot of reading, a strong
grasp of TCP/IP networking, and some experimentation to build a
firewall that takes full advantage of netfilter. The good news is
that that's not what we're trying
to do here. To use netfilter/iptables to protect
a single host is much, much less involved than using it to protect an
entire network.



Not only are the three default filter chains (INPUT, FORWARD,
and OUTPUT) sufficient; since our bastion
host has only one network interface and is not a gateway, we
don't even need FORWARD. (Unless, that is,
we're using stunnel or some
other local tunneling/redirecting technology.)

Each packet that the kernel handles is first evaluated for routing:
if destined for the local machine, it's checked
against the INPUT chain. If originating from the local machine,
it's checked against the OUTPUT chain. If entering a
local interface but not destined for this host, it's
checked against the FORWARD chain. This is illustrated in Figure 3-6.




Figure 3-6. How each packet traverses netfilter's built-in packet-filter chains






Figure 3-6 doesn't show the
PREROUTING or POSTROUTING chains or how custom chains are handled; see
http://www.netfilter.org for more
information on these topics.





When a rule matches a packet, the rule may ACCEPT or DROP it, in
which case the packet is done being filtered; the rule may LOG it,
which is a special case wherein the packet is copied to the local
syslog facility but also continues its way down
the chain of filters; or the rule may transfer the packet to a
different chain of filters (i.e., a NAT chain or a custom chain).



If a packet is checked against all rules in a chain without being
matched, the chain's default policy is applied. For
INPUT, FORWARD, and OUTPUT, the default policy is ACCEPT, unless you
specify otherwise. I highly recommend that the default policies of
all chains in any production system be set to DROP.
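Setting those default policies takes just three commands, which reappear at the top of the script later in this chapter:

iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP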

3.1.9.3 Using iptables





There are basically two ways to use iptables: to
add, delete, and replace individual netfilter
rules and to list or manipulate one or more chains of
rules. Since netfilter has no built-in means of recording or
retaining rules between system boots, rules are typically added via
startup script. Like route,
iptables is a command you
shouldn't have to invoke interactively too often
outside of testing or troubleshooting scenarios.



To view all rules presently loaded into netfilter, we use this
command:



iptables --list
We can also specify a single chain to view, rather than viewing all
chains at once:



iptables --list INPUT
To see numbered rules (by default, they're listed
without numbers), use the --line-numbers option:



iptables --line-numbers --list INPUT
To remove all rules from all chains, we use:



iptables --flush
iptables --list is probably the most useful
command-line invocation of iptables. Actually
adding rules requires considerably more flags and options (another
reason we usually do so from scripts).



The basic syntax for writing iptables rules is:



iptables -I[nsert] chain_name rule_# rule_specification
-D[elete]
-R[eplace]
-A[ppend]
where chain_name is
INPUT, OUTPUT,
FORWARD, or the name of a custom chain;
rule_# is the number of the rule you wish
to delete, insert a new rule before, or replace; and
rule_specification is the rest of the
command line, which specifies the new rule.
rule_# isn't used with
-A, which appends the rule to the end of the
specified chain. With -I, -D,
and -R, the default
rule_# is 1.
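To illustrate the positional syntax (the two rules shown here are arbitrary placeholders, not part of the policy we build later in this chapter):

iptables -I INPUT 1 -p tcp --dport 22 -m state --state NEW -j ACCEPT   # insert at top of INPUT
iptables -R INPUT 2 -p tcp --dport 80 -m state --state NEW -j ACCEPT   # replace INPUT rule #2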



For example, to delete the third rule in the
OUTPUT chain, we'd use the
command:



iptables -D OUTPUT 3
To append a rule to the bottom of the INPUT chain,
we'd use a command like the one in Example 3-15.



Example 3-15. Appending a rule to the INPUT chain





iptables -A INPUT -p tcp --dport 80 -j ACCEPT -m state --state NEW
In Example 3-15, everything following the word
INPUT makes up the command's Rule
Specification. Table 3-2 is a simplified list of
some of the most useful options that can be included in packet-filter
(as opposed to NAT) Rule Specifications.



Table 3-2. Common options used in Rule Specifications

-s sourceIP
    Match if the packet originated from sourceIP. sourceIP may be an IP
    address (e.g., 192.168.200.201), a network address (e.g.,
    192.168.200.0/24), or a hostname (e.g., woofgang.dogpeople.org). If not
    specified, defaults to 0/0 (which denotes "any").

-d destinationIP
    Match if the packet is destined for destinationIP. destinationIP may
    take the same forms as sourceIP, listed earlier in this table. If not
    specified, defaults to 0/0.

-i ingressInterface
    Match if the packet entered the system on ingressInterface, e.g., eth0.
    Applicable only to the INPUT, FORWARD, and PREROUTING chains.

-o egressInterface
    Match if the packet is to exit the system on egressInterface.
    Applicable only to the FORWARD, OUTPUT, and POSTROUTING chains.

-p tcp | udp | icmp | all
    Match if the packet is of the specified protocol. If not specified,
    defaults to all.

--dport destinationPort
    Match if the packet is being sent to TCP/UDP port destinationPort. Can
    be either a number or a service name referenced in /etc/services. If
    numeric, a range may be delimited by a colon, e.g., 137:139 to denote
    ports 137-139. Must be preceded by a -p (protocol) specification.

--sport sourcePort
    Match if the packet was sent from TCP/UDP port sourcePort. The format
    of sourcePort is the same as that of destinationPort, listed earlier in
    this table. Must be preceded by a -p [udp | tcp] specification.

--tcp-flags mask match
    Look for the flags listed in mask; if those in match are set, match the
    packet. Both mask and match are comma-delimited lists containing some
    combination of SYN, ACK, PSH, URG, RST, FIN, ALL, or NONE. Must be
    preceded by -p tcp.

--icmp-type type
    Match if the packet is of ICMP type type. type can be a numeric ICMP
    type or a name; use the command iptables -p icmp -h to see a list of
    allowed names. Must be preceded by -p icmp.

-m state --state statespec
    Load the state module, and match the packet if its connection state
    matches statespec. statespec is a comma-delimited list containing some
    combination of NEW, ESTABLISHED, INVALID, or RELATED.

-j ACCEPT | DROP | LOG | REJECT | [chain_name]
    Jump to the specified target (ACCEPT, DROP, LOG, or REJECT) or to a
    custom chain named chain_name.




Table 3-2 is only a partial list, and
I've omitted some flag options within that list in
the interests of simplicity and focus. For example, the option
-f can be used to match TCP packet fragments, but
this isn't worth explaining here since
it's rendered unnecessary by
--state, which I recommend using on bastion hosts.



At this point, we're ready to dissect a sample
iptables script. We'll expand our commands
controlling FTP and HTTP to handle some related security problems.
Since even this limited script is a lot to digest if
you're new to iptables, I've split
it up into sections in Examples 3-16 through
3-21, with the full script in Example 3-22. Let's walk through these
examples. The script has been condensed from an actual, working
script on one of my SUSE servers. (I've omitted
SUSE-isms here, but the complete SUSE script is listed in the
Appendix.) Let's start with the commands at the beginning,
which load some kernel modules and ensure that netfilter is starting
empty (Example 3-16).



Example 3-16. Initializing netfilter





modprobe ip_tables
modprobe ip_conntrack_ftp
# Flush old rules, old custom tables
$IPTABLES --flush
$IPTABLES --delete-chain
# Set default-deny policies for all three default chains
$IPTABLES -P INPUT DROP
$IPTABLES -P FORWARD DROP
$IPTABLES -P OUTPUT DROP
We use
modprobe
rather than
insmod,
because modprobe probes for and loads any
additional modules on which the requested module depends.
modprobe ip_conntrack_ftp, for example, loads not
only the FTP connection-tracking module
ip_conntrack_ftp, but also the generic
connection-tracking module ip_conntrack, on
which ip_conntrack_ftp depends.



There's no reason for any rules or custom chains to
be active yet, but to be sure we're starting out
fresh, we use the
--flush
and
--delete-chain
commands. We then use the -P flag to set all three
default chains' default policies to
DROP; remember, the default is ACCEPT, which I strongly
discourage (as it is contrary to the Principle of Least Privilege).



Moving on, we have loopback policies (Example 3-17).



Example 3-17. Loopback policies





# Give free rein to loopback interfaces
$IPTABLES -A INPUT -i lo -j ACCEPT
$IPTABLES -A OUTPUT -o lo -j ACCEPT
Aha, our first Rule Specifications! They're very
simple, too; they say "anything arriving or exiting
on a loopback interface should be allowed." This is
necessary because local applications such as the X Window System
sometimes "bounce" data to each
other over the TCP/IP stack via loopback.



Next come some rules that match packets whose source IP addresses are
non-Internet-routable and therefore presumed to be
spoofed (Example 3-18).



Example 3-18. Anti-IP-spoofing rules





# Do some rudimentary anti-IP-spoofing drops
$IPTABLES -A INPUT -s 255.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 255.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 0.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 0.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 127.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 127.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 192.168.0.0/16 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 192.168.0.0/16 -j DROP
$IPTABLES -A INPUT -s 172.16.0.0/12 -j LOG --log-prefix " Spoofed source IP!"
$IPTABLES -A INPUT -s 172.16.0.0/12 -j DROP
$IPTABLES -A INPUT -s 10.0.0.0/8 -j LOG --log-prefix " Spoofed source IP!"
$IPTABLES -A INPUT -s 10.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 208.13.201.2 -j LOG --log-prefix "Spoofed Woofgang!"
$IPTABLES -A INPUT -s 208.13.201.2 -j DROP
Prospective attackers use
IP spoofing to
mimic trusted hosts that might be allowed by firewall rules or other
access controls. One class of IP addresses we can easily identify as
likely spoof candidates are those specified in RFC 1918 as
"reserved for internal use":
10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. Addresses in these
ranges are not deliverable over the Internet, so you can safely
assume that any packet arriving at our Internet-connected host
bearing such a source IP is either a freak or an imposter.



This assumption doesn't work if, for example, the
internal network on the other side of your firewall is numbered with
RFC 1918 addresses that are not translated or
masqueraded by the firewall prior to arriving at your bastion host.
This would be both unusual and unadvisable: you should treat your
internal IP addresses as confidential data. But if not one word of
this paragraph makes sense, don't worry:
we're not going to consider such a scenario.




Obviously, if you use RFC 1918 address space on your own DMZ or
internal network, you'll need your bastion
host's anti-spoofing rules to reflect that. For
example, if your bastion host's IP address is
10.0.3.1, you won't want to drop all packets coming
from 10.0.0.0/8, since other legitimate hosts on the same LAN will
have IP addresses in that range.





If our bastion host's own IP
address is used as a source IP of inbound packets, we can assume that
that IP is bogus. One might use this particular brand of spoofed
packet to try to trick the bastion host into showering itself with
packets. If our example host's IP is 208.13.201.2,
the rule to block these is as follows:



$IPTABLES -A INPUT -s 208.13.201.2 -j DROP
which of course is what we've got in Example 3-18.



Note that each of these antispoofing rules consists of a pair: one
rule to log the packet, followed by the actual DROP rule. This is
important: once a packet matches a DROP rule, it
isn't checked against any further rules, but after a
LOG action, the packet is. Anything you want
logged, therefore, must be logged before being
dropped.



There's one other type of tomfoolery we want to
squash early in our rule base, and that's the
possibility of strange TCP packets (Example 3-19).



Example 3-19. Anti-stealth-scanning rule





# Tell netfilter that all TCP sessions do indeed begin with SYN
$IPTABLES -A INPUT -p tcp ! --syn -m state --state NEW -j LOG --log-prefix "Stealth scan attempt?"
$IPTABLES -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
This pair of rules addresses a situation in which the first packet to
arrive from a given host is not a simple SYN
packet but is instead a SYN-ACK, a FIN, or some weird hybrid. Without
these rules, such a packet would be allowed if netfilter interprets
it as the first packet in a new permitted connection. Due to an
idiosyncrasy (no pun intended) of netfilter's
connection-tracking engine, this is possible. The odds are slim,
however, that a SYN-less "new
connection" packet is anything but a
"Stealth scan" or some other form
of skulduggery.



Finally, we arrive at the heart of our packet-filtering
policythe parts that are specific to our sample bastion host.
Let's start this section with the INPUT rules (Example 3-20).



Example 3-20. The INPUT chain





# Accept inbound packets that are part of previously-OK'ed sessions
$IPTABLES -A INPUT -j ACCEPT -m state --state ESTABLISHED,RELATED
# Accept inbound packets which initiate SSH sessions
$IPTABLES -A INPUT -p tcp -j ACCEPT --dport 22 -m state --state NEW
# Accept inbound packets which initiate FTP sessions
$IPTABLES -A INPUT -p tcp -j ACCEPT --dport 21 -m state --state NEW
# Accept inbound packets which initiate HTTP sessions
$IPTABLES -A INPUT -p tcp -j ACCEPT --dport 80 -m state --state NEW
# Log anything not accepted above
$IPTABLES -A INPUT -j LOG --log-prefix "Dropped by default:"
The first rule in this part of the
INPUT chain tells netfilter to pass any
inbound packets that are part of previously accepted and tracked
connections. We'll return to the subject of
connection tracking momentarily.



The next rule allows new inbound SSH sessions to be started. SSH, of
course, has its own access controls (passwords, DSA/RSA keys, etc.),
but this rule would be even better if it limited SSH connections by
source IP. Suppose for example's sake that we want
users from our organization's internal network (and
only those users) to access our bastion host through SSH;
furthermore, our internal network is behind a firewall that performs
IP
masquerading:
all packets originating from the internal network are rewritten to
contain the firewall's external or DMZ IP address as
their source IPs.



Since our bastion host is on the other side of the
firewall, we can match packets coming from the entire internal
network by checking for a source-IP address of the
firewall's DMZ interface. Here's
what our SSH rule would look like, restricted to internal users
(assume the firewall's DMZ IP address is
208.13.201.1):



$IPTABLES -A INPUT -p tcp -j ACCEPT -s 208.13.201.1 --dport 22 -m state --state NEW
Since SSH is used only by our internal administrators to manage the
FTP/HTTP bastion host and not by any external users (we hope), this
restriction is a good idea.



The next two rules in Example 3-20 allow new inbound
FTP and HTTP connections, respectively. Since this is a public
FTP/WWW server, we don't need to restrict these
services by IP or network.



But wait...isn't FTP a fairly complicated protocol?
Do we need separate rules for FTP data streams in addition to this
rule allowing FTP control channels?



No! Thanks to
netfilter's
ip_conntrack_ftp module, our kernel has the
intelligence to associate FTP PORT commands (used for directory
listings and file transfers) with established FTP connections, in
spite of the fact that PORT commands occur on random high ports. Our
single FTP rule, along with our blanket "allow
ESTABLISHED/RELATED" rule, is all we need.



The last rule in our INPUT chain is sort of a
"clean-up" rule. Since each packet
traverses the chain sequentially from top to bottom, we can assume
any packet that hasn't matched so far is destined
for our chain's default policy, which of course is
DROP.



We don't need to go so far as to add an explicit
DROP rule to the end of the chain, but if we want to log packets that
make it that far, we do need a logging rule. This is the purpose of
the last rule in Example 3-20, which has no match
criteria other than the implied "this packet matches
none of the above." The top four rules in Example 3-20 are the core of
our INPUT policy: "allow new
inbound SSH, FTP, and HTTP sessions, and all subsequent packets
pertinent to them." Example 3-21 is an even shorter list of rules,
forming the core of our
OUTPUT chain.



Example 3-21. OUTPUT chain of rules





# If it's part of an approved connection, let it out
$IPTABLES -I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow outbound ping (comment-out when not needed!)
$IPTABLES -A OUTPUT -p icmp -j ACCEPT --icmp-type echo-request
# Allow outbound DNS queries, e.g. to resolve IPs in logs
$IPTABLES -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
# Log anything not accepted above - if nothing else, for t-shooting
$IPTABLES -A OUTPUT -j LOG --log-prefix "Dropped by default:"
Again we begin with a rule permitting packets associated with already
established (allowed) connections. The next two rules are not
strictly necessary, as they allow outbound ping
and DNS query transactions.
ping
is a useful tool for testing basic IP connectivity, but there have
been various Denial of Service exploits over the years involving
ping. Therefore, that particular rule should
perhaps be considered temporary, pending our bastion host entering
full production status.



The outbound DNS is a convenience for whoever winds
up monitoring this host's logs: without DNS, the
system's system-logging facility
won't be able to resolve IP addresses to names,
making for more arduous log parsing. On the other hand, DNS can also
slow down logging, so it may be undesirable anyhow. Regardless,
it's a minimal security risk (far less than
that posed by ping), so this rule is safely
left in place if desired.




Some people experience anomalies with netfilter's
ftp-conntrack module, especially with
passive-mode FTP (explained in Chapter 11).
It's supposed to be sufficient
to (1) load the ftp-conntrack module, (2) put
"allow related/established" rules
at the heads of your INPUT and OUTPUT chains, and (3) put
"allow new connections to TCP 21"
rules in your INPUT chain (as shown in Examples Example 3-20 through Example 3-22).



But if
you experience problems with passive-mode FTP, you may also need to
add the following rule to your INPUT chain:



iptables -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT
and this one to your OUTPUT chain:



iptables -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT
This may look insecure, as it allows connections from all
non-privileged ports to all privileged ports, in both directions
(yikes!). But if you look closely at these two rules,
you'll see that in fact they allow this only for
related and
established connections, that is, connections
related to explicitly allowed FTP transactions.





Finally, we end with another rule to
log "default
DROPs." That's our complete policy!
The full script is listed in Example 3-22 (and in
even more complete form in the Appendix, Example A-1).



Example 3-22. iptables script for a bastion host running FTP and HTTP services





#! /bin/sh
# init.d/localfw
#
# System startup script for Woofgang's local packet filters
#
# last modified 12 Oct 2004 mdb
#
IPTABLES=/usr/sbin/iptables
test -x $IPTABLES || exit 5
case "$1" in
start)
echo -n "Loading Woofgang's Packet Filters"
# SETUP -- stuff necessary for any host
# Load kernel modules first
modprobe ip_tables
modprobe ip_conntrack_ftp
# Flush old rules, old custom tables
$IPTABLES --flush
$IPTABLES --delete-chain
# Set default-deny policies for all three default chains
$IPTABLES -P INPUT DROP
$IPTABLES -P FORWARD DROP
$IPTABLES -P OUTPUT DROP
# Give free rein to loopback interfaces
$IPTABLES -A INPUT -i lo -j ACCEPT
$IPTABLES -A OUTPUT -o lo -j ACCEPT
# Do some rudimentary anti-IP-spoofing drops
$IPTABLES -A INPUT -s 255.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 255.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 0.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 0.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 127.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 127.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 192.168.0.0/16 -j LOG --log-prefix "Spoofed source IP!"
$IPTABLES -A INPUT -s 192.168.0.0/16 -j DROP
$IPTABLES -A INPUT -s 172.16.0.0/12 -j LOG --log-prefix " Spoofed source IP!"
$IPTABLES -A INPUT -s 172.16.0.0/12 -j DROP
$IPTABLES -A INPUT -s 10.0.0.0/8 -j LOG --log-prefix " Spoofed source IP!"
$IPTABLES -A INPUT -s 10.0.0.0/8 -j DROP
$IPTABLES -A INPUT -s 208.13.201.2 -j LOG --log-prefix "Spoofed Woofgang!"
$IPTABLES -A INPUT -s 208.13.201.2 -j DROP
# Tell netfilter that all TCP sessions do indeed begin with SYN
$IPTABLES -A INPUT -p tcp ! --syn -m state --state NEW -j LOG --log-prefix "Stealth scan attempt?"
$IPTABLES -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
# Finally, the meat of our packet-filtering policy:
# INBOUND POLICY
# Accept inbound packets that are part of previously-OK'ed sessions
$IPTABLES -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Accept inbound packets which initiate SSH sessions
$IPTABLES -A INPUT -p tcp -j ACCEPT --dport 22 -m state --state NEW
# Accept inbound packets which initiate FTP sessions
$IPTABLES -A INPUT -p tcp -j ACCEPT --dport 21 -m state --state NEW
# Accept inbound packets which initiate HTTP sessions
$IPTABLES -A INPUT -p tcp -j ACCEPT --dport 80 -m state --state NEW
# Log anything not accepted above
$IPTABLES -A INPUT -j LOG --log-prefix "Dropped by default (INPUT):"
# OUTBOUND POLICY
# If it's part of an approved connection, let it out
$IPTABLES -I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow outbound ping (comment-out when not needed!)
$IPTABLES -A OUTPUT -p icmp -j ACCEPT --icmp-type echo-request
# Allow outbound DNS queries, e.g. to resolve IPs in logs
$IPTABLES -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
# Log anything not accepted above - if nothing else, for t-shooting
$IPTABLES -A OUTPUT -j LOG --log-prefix "Dropped by default (OUTPUT):"
;;
wide_open)
echo -n "DANGER!! Unloading Woofgang's Packet Filters!!"
# Unload filters and reset default policies to ACCEPT.
# FOR EMERGENCY USE ONLY -- else use `stop'!!
$IPTABLES --flush
$IPTABLES -P INPUT ACCEPT
$IPTABLES -P FORWARD ACCEPT
$IPTABLES -P OUTPUT ACCEPT
;;
stop)
echo -n "Portcullis rope CUT..."
# Unload all fw rules, leaving default-drop policies
$IPTABLES --flush
;;
status)
echo "Querying iptables status (via iptables --list)..."
$IPTABLES --line-numbers -v --list
;;
*)
echo "Usage: $0 {start|stop|wide_open|status}"
exit 1
;;
esac
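To actually use a script like this, it must be installed and activated like any other init script. A sketch of the steps follows (the exact boot-time tools vary by distribution; insserv is SUSE's, update-rc.d is Debian's):

cp localfw /etc/init.d/localfw
chmod 700 /etc/init.d/localfw
/etc/init.d/localfw start          # test it interactively first
/etc/init.d/localfw status
insserv /etc/init.d/localfw        # SUSE: enable at boot
# update-rc.d localfw defaults     # Debian equivalent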


iptables for the Lazy





SUSE has a utility for creating iptables policies, called
SUSEfirewall2.
If you install this package, all you need to do is edit the file
/etc/sysconfig/SUSEfirewall2 (in earlier
versions of SUSE,
/etc/rc.config.d/firewall2.rc.config), run
SUSEconfig, and reboot. If you know anything at
all about TCP/IP, however, it's probably not that
much more trouble to write your own iptables script.



Similarly, Red Hat and Mandrake users can avail themselves of
Bastille Linux's Firewall
module. Bastille's Q & A is actually a simple,
quick way to generate a good iptables configuration.



There are also a number of GUI-based tools that can write iptables
rules. As with SUSEfirewall2 and Bastille,
it's up to you to decide whether a given tool is
convenient and therefore worth adding complexity to your bastion host
in the form of extra software.




We've covered only a subset of
netfilter's features, but it's an
extremely useful subset. While local packet filters
aren't a cure-all for system security,
they're one of the thicker layers of our security
onion and well worth the time and effort it takes to learn iptables
and fine-tune your filtering policies.

3.1.10. Checking Your Work with Scanners





You may have heard scare stories about how easy it is for evil system
crackers to probe potential victims' systems for
vulnerabilities using software tools readily available on the
Internet. The bad news is that these stories are generally true. The
good news is that many of these tools are extremely useful (and even
designed) for the legitimate purpose of scanning
your own systems for weaknesses.



In my
opinion, scanning is a useful step in the system-hardening process,
one that should be carried out after most other hardening tasks are
completed and that should be repeated periodically as a sanity check.
Let's discuss, then, some uses of
nmap and nessus, arguably
the best port scanner and security scanner (respectively) available
for Linux.



3.1.10.1 Types of scans and their uses



There are basically two types of system scans. Port scans
look for open TCP and UDP ports, i.e., for
"listening services."
Security scans go a step further and probe
identified services for known weaknesses. In terms of sophistication,
doing a port scan is like counting how many doors and windows a house
has; running a security scan is more like rattling all the doorknobs
and checking the windows for alarm sensors.



3.1.10.2 Why we (good guys) scan





Why scan? If you're a system cracker, you scan to
determine what services a system is running and which well-known
vulnerabilities apply to them. If you're a system
administrator, you scan for essentially the same reasons, but in the
interest of fixing (or at least understanding) your systems, not
breaking into them.



It may sound odd for good guys to use the same kinds of tools as the
bad guys they're trying to thwart. After all, we
don't test dead-bolt locks by trying to kick down
our own doors. But system security is exponentially more complicated
than physical security. It's nowhere near as easy to
gauge the relative security of a networked computer system as it is
the door to your house.



Therefore, we security-conscious geeks are obliged to take seriously
any tool that can provide some sort of sanity check, even an
incomplete and imperfect one (as is anything that tries to measure a
moving target such as system security). This is despite or even
because of that tool's usefulness to the bad guys.
Security and port scanners give us the closest thing to a
"security benchmark" that we can
reasonably hope for.



3.1.10.3 nmap, world champion port scanner





The basic premise of port scanning is simple: if you try to connect
to a given port, you can determine whether that port is
closed/inactive or whether an application (web server, FTP daemon,
etc.) is accepting connections there. As it happens, it is easy to
write a simple port scanner that uses the local connect() system call to attempt TCP connections on various ports;
with the right modules, you can even do this with Perl. However, this
method is also the most obtrusive and obvious way to scan, and it
tends to result in numerous log entries on one's
target systems.



Enter nmap, by Fyodor. nmap can do simple connect() scans if you like, but its real forte is
stealth scanning.
Stealth scanning uses packets that have unusual flags or
don't comply with a normal TCP state to trigger a
response from each target system without actually completing a TCP
connection.



nmap supports not one, but four different kinds of stealth scans,
plus TCP Connect scanning, UDP scanning, RPC scanning,
ping
sweeps, and even operating-system fingerprinting. It also boasts a
number of features more useful to black-hat than white-hat hackers,
such as FTP-bounce
scanning, ACK
scanning, and Window firewall scanning (many of which
can pass through firewalls undetected but are of little interest to
this book's highly ethical readers). In short, nmap
is by far the most feature-rich and versatile port scanner available
today.



Here, then, is a summary of the most important types of scans nmap
can do:



TCP Connect scan





This uses the OS's native connect() system call to attempt a full three-way TCP handshake
(SYN, ACK-SYN, ACK) on each probed port. A failed connection (i.e.,
if the server replies to your SYN packet with an ACK-RST packet)
indicates a closed port. It doesn't require
root privileges and is one of the faster
scanning methods. Not surprisingly, however, many server applications
log connections that are closed immediately after
they're opened, so this is a fairly
"noisy" scan.




TCP SYN scan





This is two-thirds of a TCP Connect scan; if the target returns an
ACK-SYN packet, nmap immediately sends an RST packet rather than
completing the handshake with an ACK packet.
"Half-open"
connections such as these are far less likely to be logged, so SYN
scanning is harder to detect than TCP Connect scanning. The trade-off
is that since nmap, rather than the kernel, builds these packets, you
must be root to run nmap in this mode. This is
the fastest and most reliable TCP scan.




TCP FIN scan





Rather than even pretending to initiate a standard TCP connection,
nmap sends a single FIN (final) packet. If the
target's TCP/IP stack is RFC-793-compliant (MS-anything,
HP-UX, IRIX, MVS, and Cisco IOS are
not), open ports will drop the packet and closed
ports will send an RST.




TCP NULL scan





Similar to a FIN scan, a TCP NULL scan uses a TCP-flagless packet
(i.e., a null packet). It also relies on the RFC-793-compliant
behavior described earlier.




TCP Xmas Tree scan





Similar to a FIN scan, a TCP Xmas Tree scan sends a packet with
its FIN, PSH, and URG flags set (final,
push data, and
urgent, respectively). It also relies on the
RFC-793-compliant behavior described earlier.




UDP scan





Because UDP is a connectionless protocol (i.e.,
there's no protocol-defined relationship between
packets in either direction), UDP has no handshake to play with, as
in the TCP scans described earlier. However, most operating
systems' TCP/IP stacks will return an ICMP
"Port
Unreachable'' packet if a UDP
packet is sent to a closed UDP port. Thus, a port that
doesn't return an ICMP packet can be assumed open.
Since neither the probe packet nor its potential ICMP packet are
guaranteed to arrive (remember, UDP is connectionless and so is
ICMP), nmap will typically send several UDP packets per probed UDP
port to reduce false positives. More significantly, the Linux kernel
will send no more than 80 ICMP error messages every four seconds;
keep this in mind when scanning Linux hosts. In my experience, the
accuracy of nmap's UDP scanning varies among target
OSes, but it's better than nothing.




RPC scan





Used in conjunction with other scan types, this feature causes nmap
to determine which of the ports identified as open are hosting RPC
(remote procedure call) services and what those services and version
numbers are.





Whew! Quite a list of scanning methods, and
I've left out ACK scans and Window scans (see the
nmap(1) manpage if you're
interested). nmap has another very useful feature: OS fingerprinting.
Based on characteristics of a target's responses to
various arcane packets that nmap sends, nmap can make an educated
guess as to which operating system each target host is running.

3.1.10.4 Getting and installing nmap





So useful and popular is
nmap that it is now included in
most Linux distributions. Fedora Core 2, SUSE 9.0, and Debian 3.0,
for example, all come with nmap. Therefore, the easiest way for most
Linux users to install nmap is via their system's
package manager (e.g., RPM, dselect, or yast)
and preferred OS installation medium (CD-ROM, FTP, etc.).
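For example, assuming the package names used by those releases, installation might be as simple as one of the following (the RPM filename shown is only a placeholder; substitute whatever your installation medium provides):

# Debian 3.0
apt-get update && apt-get install nmap

# RPM-based distributions (Fedora Core, SUSE, etc.); actual filename will vary
rpm -ivh nmap-3.50-1.i386.rpm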




Where Should I Install Port Scanners and Security Scanners?






Not on any bastion host or firewall! As useful as these tools are,
they are doubly so for prospective attackers.



My best recommendation for monitoring your DMZ's
security with scanners is to use a system dedicated to this purpose,
such as a laptop system, which can be easily connected to the DMZ
network when needed and promptly disconnected
when not in use.




If, however, you want the very latest version of nmap or its source
code, both are available from http://www.insecure.org/
(Fyodor's web site) in RPM and TGZ formats. Should
you wish to compile nmap from source, simply download and expand the
tarball, and then enter the commands listed in Example 3-23 (allowing for any difference in the expanded
source code's directory name; nmap v3.50 may be
obsolete by the time you read this).



Example 3-23. Compiling nmap





root@woofgang: # cd nmap-3.50
root@woofgang: # ./configure
root@woofgang: # make
root@woofgang: # make install

3.1.10.5 Using nmap





There are two different ways to
run
nmap. The most powerful and flexible way is via the command prompt.
There is also a GUI called nmapfe, which
constructs and executes an nmap scan for you (Figure 3-7).




Figure 3-7. Sample nmapfe session





nmapfe is useful for quick-and-dirty scans or as
an aid to learning nmap's command-line syntax.
(Note that in Fedora Core 2 and Red Hat 9.0, the RPM for
nmapfe is called
nmap-frontend.) But I strongly recommend
learning nmap proper: it is quick and easy to use even without a GUI.



The syntax for simple scans is as follows:



nmap [-s scan-type] [-p port-range]|-F options target

The -s flag must be immediately followed by one of the following:



T




TCP Connect scan
S




TCP SYN scan
U




UDP scan (can be combined with the previous flags)
R




RPC scan (can be combined with previous flags)
F, N, X, L, W, O, V, P




FIN, Null, Xmas Tree, List, Window, IP Protocol, Version, and Ping
scans, respectively; these options are far more useful in
penetration-testing scenarios than in the basic sanity-checking cases
we're discussing now, so see the
nmap(1) manpage for more information.

For example, -sSUR tells nmap to perform a SYN
scan, a UDP scan, and finally an RPC scan/identification on the
specified target(s). -sTSR would fail, however,
because TCP Connect and TCP SYN are both TCP scan types, and only one TCP scan type may be specified per scan.



If you state a port range using the
-p flag, you can combine commas and
dashes to create a very specific group of ports to be scanned. For
example, typing -p
20-23,80,53,600-1024 tells nmap to scan ports 20
through 23, 80, 53, and 600 through 1024. Don't use
any spaces in your port range, however. Alternatively, you can use
the -F flag (short for "fast
scan"), which tells nmap to scan only those ports
listed in the file
/usr/share/nmap/nmap-services; these are ports
Fyodor has found to frequently yield interesting results.



The "target''
expression can be a hostname, a host IP address, a network IP
address, or a range of IP addresses. Wildcards may be used. For
example, 192.168.17.* expands to all 256 IP
addresses in the network 192.168.17.0/24 (in fact, you could use
192.168.17.0/24 instead);
10.13.[1,2,4].* expands to 10.13.1.0/24,
10.13.2.0/24, and 10.13.4.0/24. As you can see, nmap is very flexible
in the types of target expressions it understands.
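Putting the flags and target expressions together, a few representative invocations might look like the following (the hostname and network are the illustrative ones used throughout this chapter; the wildcard target is quoted to keep the shell from expanding it):

# SYN-scan a specific list of ports on a single host (requires root)
nmap -sS -p 20-23,53,80,600-1024 woofgang.dogpeople.org

# Fast TCP Connect scan of an entire /24, given as a CIDR network address
nmap -sT -F 192.168.17.0/24

# Combined SYN + UDP + RPC scan of the same network, written as a wildcard
nmap -sSUR -F '192.168.17.*'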



3.1.10.6 Some simple port scans





Let's examine a
basic scan (Example 3-24). This is my favorite
"sanity check" for hardened
systems: it's nothing fancy, but thorough enough to
help validate the target's iptables configuration
and other hardening measures. For this purpose, I like to use a
plain-vanilla TCP Connect scan, because it's fast
and because the target is my own system, i.e.,
there's no reason to be stealthy.



I also like the -F option, which probes nearly all
"privileged ports" (0-1023) plus
the most commonly used "registered
ports" (1024-49,151). This can take considerably
less time than probing all 65,535 TCP and/or UDP ports. Another
option I usually use is -P0, which tells nmap not
to ping the target. This is important for the
following reasons:



Most of my bastion hosts do not respond to
pings, so I have no expectation that anybody
else's will either.



The scan will fail and exit if an attempted ping
fails.



It can take a while for pings to time out.




The other option I like to include in my basic scans is -O,
which attempts
"OS
fingerprinting." It's good to know
how obvious certain characteristics of my systems are, such as
operating system, kernel version, uptime, etc. An accurate nmap OS
fingerprint of one of my painstakingly hardened bastion hosts never
fails to provide me with an appropriately humble appreciation of how
exposed any host on the Internet is:
there's always some measure of
intelligence that can be gained in this way.



And so we come to our sample scan (Example 3-24). The
output was obtained using nmap Version 3.30 running on SUSE 9.0. The
target system is none other than woofgang, the
example FTP/WWW server we've been bastionizing
throughout this chapter.



Example 3-24. Simple scan against a bastion host





[root@mcgruff]# nmap -sT -F -P0 -O woofgang.dogpeople.org
Starting nmap 3.30 ( http://www.insecure.org/nmap/ ) at 2004-03-21 16:57 CST
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Interesting ports on 208.13.201.2:
(The 1194 ports scanned but not shown below are in state: filtered)
Port State Service
21/tcp open ftp
22/tcp open ssh
80/tcp closed http
Too many fingerprints match this host to give specific OS details
Nmap run completed -- 1 IP address (1 host up) scanned in 270.629 seconds

(Notice anything familiar about the scan in Example 3-24? It's consistent with the
output in Figure 3-7.) Good, our bastion host
responded exactly the way we expected: it's
listening on TCP ports 21, 22, and 80 and not responding on any
others. So far, our iptables configuration appears to be doing the
job.



Let's add just a couple of options to this scan to
make it more comprehensive. First, let's include
UDP. (We're not expecting to see any listening UDP
ports.) This is achieved by adding a U to our
-s specification, i.e., -sTU. While we're at it,
let's throw in RPC too; our bastion host
shouldn't be accepting any Remote Procedure Call
connections. Like the UDP option, this can be added to our TCP scan
directive, i.e., -sTUR.



The UDP and
RPC scans
go particularly well together: RPC is a UDP-intensive protocol. When
nmap finds an RPC service on an open port, it appends the RPC
application's name in parentheses, including the
version number, if nmap can make a credible guess at one.



Our new, beefier scan is shown in Example 3-25.



Example 3-25. A more comprehensive scan



[root@mcgruff]# nmap -sTUR -F -P0 -O woofgang.dogpeople.org
Starting nmap 3.30 ( http://www.insecure.org/nmap/ ) at 2004-03-21 19:01 CST
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Interesting ports on 208.13.201.2:
(The 2195 ports scanned but not shown below are in state: filtered)
Port State Service (RPC)
21/tcp open ftp
22/tcp open ssh
80/tcp closed http
Too many fingerprints match this host to give specific OS details
Nmap run completed -- 1 IP address (1 host up) scanned in 354.540 seconds

Whew, no surprises: nmap found no UDP or RPC listening ports.
Interestingly, the scan took a while: 354 seconds, just shy of 6
minutes, even though we specified the -F
("fast") option! This is because
woofgang is running netfilter and is configured
to drop nonallowed packets rather than reject them.
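This behavior is a property of the packet-filtering rules themselves. The fragment below is not woofgang's actual ruleset, only a sketch contrasting the two approaches:

# Silently DROP unwanted packets: scanners see nothing and must wait for
# timeouts, so nmap reports the ports as "filtered" (and the scan is slow)
iptables -A INPUT -j DROP

# REJECT instead: scanners get an immediate RST or ICMP error, so nmap
# reports the ports as "closed" (and the scan finishes quickly)
iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable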



Without netfilter, the kernel would reply to attempted connections on
inactive ports with "icmp
port-unreachable" and/or TCP RST packets, depending
on the type of scan. In the absence of these courteous replies, nmap
is compelled to wait for each connection attempt to time out before
concluding the port isn't open, making for a lengthy
scan. nmap isn't stupid, however: it reported that
"The 2195 ports scanned but not shown below are in
state: filtered."

So, is our bastion host secure? Clearly it's on the
right track, but let's perform one more sanity check: a security scan.

3.1.10.7 Nessus, a full-featured security scanner





Seeing what "points of entry" a
host offers is a good start in evaluating that
host's security. But how do we interpret the
information nmap gives us? For example, in Examples 3-24 and 3-25, we verified
that the host woofgang is accepting SSH, FTP,
and HTTP connections; that tells us that this host is running a web
server on TCP port 80, an FTP server on TCP port 21, and an SSH daemon on
TCP port 22. But which of these services are actually
exploitable and, if so, how?



This is where
security scanners come into play. At
the risk of getting ahead of ourselves, let's look
at the output from a Nessus scan of woofgang
(Figure 3-8).




Figure 3-8. Nessus scan of woofgang





Space doesn't permit me to show the entire
(expanded) report, but suffice it to say that Nessus generated two
warnings for our target system and provided two supplemental security
notes.



3.1.10.8 Security scanners explained



Whereas a port scanner such as nmap (which, again, is the gold
standard in port scanners) tells you what's
listening, a security scanner like Nessus tells you
what's vulnerable. Since you need to know
what's listening before even
trying to probe for actual weaknesses,
security
scanners usually either contain or are linked to port scanners.



As it happens, Nessus invokes nmap as the initial step in each scan.
Once a security scanner has determined which services are present, it
performs various checks to determine which software packages are
running, which version each package seems to have, and whether
they're subject to any known vulnerabilities.
Predictably, this level of intelligence requires a good vulnerability
database that must be updated periodically as new vulnerabilities
come to light.



Ideally, the database should be user
editable; that is, it should be possible
for you to create custom vulnerability tests particular to your
environment and needs. This also ensures that should the
scanner's developer not immediately release an
update for a new vulnerability, you can create the update yourself.
Not all security scanners have this level of customizability, but
Nessus does.



After a security scanner locates, identifies, and analyzes the
listening services on each host it's been configured
to scan, it creates a report of its findings. The better scanners
don't stop at pointing out vulnerabilities; they
explain them in detail and suggest how to fix them.



So meaty are the reports generated by good security scanners that
highly paid consultants have been known to present them as the
primary deliverables of supposedly comprehensive security audits.
This is a questionable practice, but it emphasizes the fact that a
good security scan produces a
lot of data.



There are a number of
free
security scanners available:
VLAD,
SAINT, and Nessus are
just a few. Nessus, however, stands out as a viable alternative to
powerful commercial products such as ISS's Internet Scanner.
Developed primarily by Renaud Deraison and Jordan
Hrycaj,
Nessus surely ranks with
GIMP and Apache as free
software tools that equal and often exceed the usability and
flexibility of their commercial counterparts.



3.1.10.9 Nessus's architecture





Nessus
has two major parts: a server, which runs all scans, and a client,
with which you control scans and view reports. This distributed
architecture makes Nessus flexible and also allows you to avoid
monopolizing your workstation's CPU cycles with
scanning activities. It also allows you to mix and match platforms:
you can use the Unix variant of your choice as the server, with your
choice of X, MS-Windows, or web-based clients. (The standard X Window
System client is part of the Nessus distribution; for other clients,
see http://www.nessus.org/related/indexl.) nessusd listens for
client connections on TCP 1241 (1241 was recently assigned to Nessus
by the Internet Assigned Numbers Authority; previously
nessusd used TCP 3001). Client sessions are
authenticated and encrypted via OpenSSL.
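Once you have nessusd running (installation and startup are covered below), you can verify which port it's actually listening on; for example (the remote hostname is a placeholder):

# Look for the nessusd listener locally (1241 on current versions, 3001 on old ones)
netstat -ltn | egrep ':(1241|3001)'

# Or probe it from another host with nmap
nmap -sT -p 1241,3001 scanner.example.com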



Nessus's client component,
nessus, can
connect to and authenticate against the nessusd
server either with a standard username and password scheme (which is
the method I'll describe momentarily) or via a
challenge-response scheme using X.509 certificates.
Don't be afraid that the username/password method is
weak; if you've compiled OpenSSL into Nessus (on
both your client and server systems), your logon session will be
encrypted.



Furthermore, you can use the same system as both
nessus client and nessusd
server, in which case each session's authentication
and subsequent scanning data will never leave your local system (with
the exception of the scan itself, which of course will connect to
various "target" hosts).



Once you've connected to a Nessus server,
you're presented with a list of
"plug-ins" (vulnerability tests)
supported by the server and a number of other options. You may also
choose to run a "detached" scan
that can continue running even if you close your client session; the
scan's output will be saved on the server for you to
retrieve later. Nessus also supports a Knowledge Base, which allows
you to store scan data and use it to track your
hosts' security from scan to scan (e.g., to run
"differential" scans).



Once you've configured and begun a scan, Nessus
invokes each appropriate module and plug-in as specified and/or
applicable, beginning with an nmap scan. The results of one
plug-in's test may affect how or even whether
subsequent tests are run; Nessus is pretty intelligent that way. When
the scan is finished, the results are sent back to the client. (If
the session-saving feature is enabled, the results may also be stored
on the server.)

3.1.10.10 Getting and installing Nessus





Nessus, like most open source
packages, is available in both source-code and binary distributions.
RPM binary packages of Nessus Version 2.0.10a (the latest stable
version at this writing) are available for Red Hat and Fedora Linux
from http://atrpms.physik.fu-berlin.de/, courtesy
of Axel Thimm.



Debian 3.0 and SUSE 9.0 both include Nessus as part of their
respective distributions. However, if you run Debian 3.0, I recommend
you install Nessus from source: the version of Nessus included in
Debian is 1.0, which is obsolete. The remainder of this discussion
assumes you're running Nessus 2.0 or later.



Compiling and installing Nessus from source is easy:
it's a simple matter of installing a few
prerequisites, downloading the Nessus installer script (which
contains all Nessus's source code), and following
Nessus's installation instructions. The Nessus FAQ
(http://www.nessus.org/doc/faql) and
Nessus Mailing List (http://list.nessus.org) provide ample hints
for compiling and installing Nessus.



Nessus has only a few prerequisites:



nmap (Nessus will compile without nmap but won't be able to trigger
nmap scans without it.)

OpenSSL (again, Nessus will compile without this, but without OpenSSL
all communications between the Nessus daemon and its clients will be
cleartext rather than encrypted. Note that you
also need your distro's openssl-devel
package, a.k.a. libssl-dev in Debian 3.0.)

gtk, the GIMP Tool Kit v1.2. Besides GTK 1.2's core libraries, Nessus
won't compile without the utility
gtk-config, so be sure to install
gtk-devel. Note that many distributions now ship
with GTK v2.0, so be sure you install v1.2 for Nessus. In Debian 3.0,
the GTK packages are named libgtk1.2,
libgtk1.2-devel, etc.; in Fedora Core 2
they're gtk+-devel, etc.




After all prerequisites are in place, you're ready
to compile or install your Nessus packages. The compiling process has
been fully automated: simply download the file
nessus-installer.sh from one of the sites listed
at http://www.nessus.org/nessus_2_0l and
invoke it with the command:



sh ./nessus-installer.sh

to automatically configure, compile, and install Nessus from source.



nessus-installer.sh prompts you for
Nessus's base path (/usr/local
by default) and proceeds to extract and compile Nessus. Keep an eye
out for the message "SSL support is
disabled." If you receive this error,
you'll need to uninstall Nessus, install your
distribution's OpenSSL-development package (probably
named either openssl-devel or
libssl-dev), and rerun
nessus-installer.sh.
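Installing that package is a one-liner; per the package names just mentioned, something like the following (the RPM filename shown is only a placeholder):

# Debian 3.0
apt-get install libssl-dev

# RPM-based distributions; actual filename will vary
rpm -ivh openssl-devel-0.9.7a-2.i386.rpm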



The installation script may take a while to prepare source code and
even longer to compile it. Make sure you've got
plenty of space on the volume where /tmp
resides: this is where the installer unzips and builds the
Nessus source-code tree. If you have trouble building, you can rename
/tmp to /tmp.bak and create
a symbolic link named /tmp that points to a
directory on a volume with more space.
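In shell terms, that workaround amounts to something like the following (the "roomy" directory is hypothetical; use whatever volume you have space on):

# Move the cramped /tmp aside and point it at a volume with more free space
mv /tmp /tmp.bak
mkdir -p /usr/local/build-tmp        # hypothetical location with plenty of room
chmod 1777 /usr/local/build-tmp      # preserve /tmp's world-writable, sticky mode
ln -s /usr/local/build-tmp /tmp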



After everything's been built and installed, you
will then have several new binaries in
/usr/local/bin and
/usr/local/sbin, a large collection of Nessus
plug-ins in /usr/local/lib/nessus/plugins, and
new manpages for the Nessus programs nessus,
nessus-mkcert,
nessus-adduser, getpass,
and nessus-update-plugins.
You'll be presented with this message (Example 3-26).



Example 3-26. "Success" message from nessus-installer.sh





---------------------------------------------------------------------------
Nessus installation : Finished
---------------------------------------------------------------------------
Congratulations ! Nessus is now installed on this host
. Create a nessusd certificate using /usr/local/sbin/nessus-mkcert
. Add a nessusd user use /usr/local/sbin/nessus-adduser
. Start the Nessus daemon (nessusd) use /usr/local/sbin/nessusd -D
. Start the Nessus client (nessus) use /usr/local/bin/nessus
. To uninstall Nessus, use /usr/local/sbin/uninstall-nessus
. Remember to invoke 'nessus-update-plugins' periodically to update your
list of plugins
. A step by step demo of Nessus is available at :
http://www.nessus.org/demo/
Press ENTER to quit

nessus-mkcert is a wrapper for openssl, and it walks you
through the process of creating a server certificate for
nessusd to use.
nessus-mkcert requires no arguments.



nessus-adduser
is a wizard for creating new Nessus client accounts. When you run
this script, it will prompt you for a username, authentication
method, and password for the new account. This account will be
specific to Nessus; it won't be a system account.
Example 3-27 shows a sample nessus-adduser
session.



Example 3-27. Running the nessus-adduser script





woofgang:/usr/local/etc/nessus # nessus-adduser
Using /var/tmp as a temporary file holder
Add a new nessusd user
----------------------
Login : Bobo
Authentication (pass/cert) [pass] :
Login password : 3croc)IGATOR
User rules
----------
nessusd has a rules system which allows you to restrict the hosts
that Bobo has the right to test. For instance, you may want
him to be able to scan his own host only.
Please see the nessus-adduser(8) man page for the rules syntax
Enter the rules for this user, and hit ctrl-D once you are done :
(the user can have an empty rules set)
Login : Bobo
Password : 3croc)IGATOR
DN :
Rules :
Is that ok ? (y/n) [y] y
user added.



Allowable authentication methods are
pass (a standard username-password scheme)
and cert (a challenge-response scheme using X.509
digital certificates). The pass method is much
simpler, and if you compiled OpenSSL support into
nessusd when you built Nessus (either manually
or via nessus-installer.sh), your
users' usernames and passwords will be encrypted in
transit. This is a reasonably secure authentication mechanism.



The cert scheme is arguably more secure, since
it's more sophisticated and doesn't
involve the transmission of any private information, encrypted or
not. However, setting up X.509 authentication in Nessus can be a
little involved and is beyond the scope of our simple task of
performing quick sanity checks on our bastion hosts.



See Chapter 5 for more information on creating
and using X.509 certificates, and the Nessus source-code
distribution's README_SSL file
for more on how they're used in Nessus (this file
may be viewed online at http://cgi.nessus.org/cgi-bin/cvsweb.cgi/nessus-core/README_SSL?rev=1.27&content-type=text/vnd.viewcvs-markup).
Or, you can stick to simple password-based authentication; just
make sure you're using it over OpenSSL!




Using Nessus's client-server architecture is not
mandatory! If, for example, you're using a laptop
system as your security scanner and wisely prefer not to have any
scanning systems whatsoever permanently installed in your DMZ
network, it makes perfect sense to run both
nessusd and nessus on the
same system. If you do so, you'll simply set your
nessusd host to
"localhost" in
nessus. In that case, it won't
matter whether you compiled Nessus with OpenSSL support, since none
of the scan-setup or report data will traverse any network.





nessus-adduser
also allows you to specify rules that restrict which hosts the user
may scan. I leave it to you to read the
nessus-adduser(8) manpage if
you're interested in that level of user-account
management; Nessus's access-control syntax is
both simple and well documented.



After you've created your server certificate and
created one or more Nessus user accounts, it's time
to start nessusd. To start it manually, simply
run the command nessusd -D &. Note, however,
that for nessusd to start automatically at boot
time, you'll need a startup script in
/etc/init.d and links in the appropriate
rcX.d directories. If you installed Nessus from
RPMs, these should already be in place; otherwise
you'll need to create your own startup script. (In
the latter case, don't forget to run
chkconfig or update-rc.d to
create the runlevel links.)

Our last setup task is to update
Nessus's scan scripts
(plug-ins). Because one of
Nessus's particular strengths is the regularity with
which Messrs. Deraison et al add new plug-ins, you should be sure to
run the script nessus-update-plugins immediately
after installing Nessus and get in the habit of running it
periodically afterward, too. This script will automatically download
and install all plug-ins created since the last time you ran it, or
since the current version of Nessus was released.



I recommend using the command form nessus-update-plugins -v, because without the -v flag, the
script runs "silently," i.e.,
without printing the names of the plug-ins it's
installing. After downloading, uncompressing, and saving new scripts,
nessus-update-plugins resets
nessusd so that it
"sees" the new plug-ins (assuming a
nessusd daemon is active at that moment).
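If you'd rather not rely on memory for the "periodically afterward" part, a root crontab entry along these lines does the job (the schedule, log file, and the assumption that the script landed in /usr/local/sbin are all illustrative):

# Fetch new Nessus plug-ins every Monday at 04:30, keeping a log of what arrived
30 4 * * 1  /usr/local/sbin/nessus-update-plugins -v >> /var/log/nessus-update-plugins.log 2>&1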




But take care: at present, nessus-update-plugins
does not check new plug-ins against MD5 or other hashes. This
mechanism can therefore be subverted in various ways. If that bothers
you, you can always download the plug-ins manually from http://www.nessus.org/scripts.php one at a
time and then review each script (they reside in
/usr/local/lib/nessus/plugins) before the next
time you run a scan.
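Since the plug-ins are plain-text NASL scripts, reviewing them is simply a matter of reading them before your next scan; for example (the plug-in filename is hypothetical):

# See which plug-ins arrived most recently, then inspect one before trusting it
ls -lt /usr/local/lib/nessus/plugins | head
less /usr/local/lib/nessus/plugins/some_new_check.nasl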





3.1.10.11 Nessus clients





Unless you're only going to use the Nessus server as
its own client (i.e., run both nessusd and
nessus on the same host),
you'll need to perform additional installations of
Nessus on each host you wish to use as a client. While the Nessus
server (the host running nessusd) must be a Unix
host,[4] clients can run on either Unix or MS Windows. Compiling
and installing Nessus on Unix client machines isn't
much different from installing on servers (as described earlier),
except that on client-only systems, you may skip the steps of
creating a server certificate, adding users, and starting the daemon.



[4] A commercial Windows version of
nessusd may be purchased from Tenable Security
(http://www.tenablesecurity.com).




3.1.10.12 Performing security scans with Nessus



And now the real fun begins! After you've installed
Nessus, created your
server certificate and at least one user account, and started
nessusd, you're ready to scan.
First, start a client session. In the Nessusd host screen, enter the
name or IP address of the server you wish to connect to (use
"localhost" or 127.0.0.1 if
you're running nessus and
nessusd on the same system), the port on which
your server is listening (most users will use the default setting,
1241), and your Nessus login/username (Figure 3-9).




Figure 3-9. User Bobo logs on to a Nessus server





When you're ready to connect, click the Log in
button. If this is the first time you've run
nessus on a given system,
you'll be asked what level of paranoia to exercise
in accepting Nessus server certificates and whether to accept the
certificate of the server you're connecting. If
authentication succeeds, you'll also next be
reminded that by default,
"dangerous" plug-ins (those with
the potential to crash or disrupt target systems) are disabled. And
with that, you should be connected and ready to build a scan!



nessus will automatically switch to its Plugins
tab, where you're presented with a list of all
vulnerability tests available on the Nessus server, grouped by
"family" (Figure 3-10). Click on a family's name
(these are listed in the upper half of the window) to see a list of
that family's plug-ins below. Click on a
family's checkbox to enable or disable all its
plug-ins.




Figure 3-10. Plugins screen





If you don't know what a given plug-in does, click
its name: an information window will pop up. If you
"hover" the mouse pointer over a
plug-in's name, a summary caption will pop up that
states very briefly what the plug-in does. Plug-ins with yellow
triangles next to their checkboxes are dangerous: the particular
tests they perform have the potential to interrupt or even crash
services on the target (victim) host.



By the way, don't be too worried about selecting all
or a large number of plug-ins: Nessus is intelligent enough to skip,
for example, Windows tests on non-Windows hosts. In general, Nessus
is efficient in deciding which tests to run and in which
circumstances.



The next screen to configure is Prefs (Figure 3-11).
Contrary to what you might think, this screen contains not general,
but plug-in-specific preferences, some of which are mandatory for
their corresponding plug-in to work properly. Be sure to scroll down
the entire list and provide as much information as you can.




Figure 3-11. Plugins preferences screen


Especially important here are the nmap settings. Personally,
I've had much better luck running a separate nmap
scan and then feeding its output to Nessus than I've
had configuring Nessus to perform port scans itself. This is easy to
do. First, under Nmap options, specify the file containing your nmap
output (i.e., output obtained by running nmap with the
-oN flag). Second, click on the Scan options tab
and make sure "Consider unscanned ports as
closed" is unchecked (Figure 3-12).
Third, still in Scan options, make sure that the box next to Nmap is
the only one checked in the Port scanner: section.[5]

[5] I figured out how to do this in Nessus v2.0 with the help of David
Kyger's excellent "Nessus
HOWTO" (http://www.norootsquash.net/cgi-bin/howto.pl),
which also explains how to run Nikto web scans from Nessus.
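Generating that nmap output file is just an ordinary scan saved with the -oN ("normal output") flag; for example, mirroring the scans earlier in this chapter (the output path is arbitrary):

# Save a human-readable scan of woofgang for Nessus to read in later
nmap -sT -F -P0 -oN /root/woofgang.nmap woofgang.dogpeople.org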




If you do run your nmap scan from Nessus, take particular care with
the Prefs page's ping settings:
more often than not, selecting either ping
method (TCP or ICMP) can cause Nessus to decide mistakenly that hosts
are down when in fact they are up. Nessus will not perform any tests
on a host that doesn't reply to
pings, so when in doubt, don't
ping.



After Prefs comes Scan options (Figure 3-12). Among
other things, we see the Optimize the test option, which tells Nessus
to avoid all apparently inapplicable tests. That saves time, but
selecting this option can at least theoretically result in
"false negatives."
You'll need to decide for yourself whether a faster
scan with a higher risk of false negatives is preferable to a more
complete but slower scan. Speaking of speed, if you care about it,
you probably want to avoid using the "Do a reverse
(DNS) lookup..." feature, which attempts to
determine the hostnames for all scanned IP addresses.




Figure 3-12. Scan options screen


Now we specify our targets. We specify these in the Target(s): field
of the Target Selection screen (Figure 3-13). This
field can contain hostnames, IP addresses, and network addresses in
the format x.x.x.x/y (where
x.x.x.x is the network number and
y is the number of bits in the subnet
maske.g., 192.168.1.0/24) in a comma-separated list.




Figure 3-13. Target selection screen


The Perform a DNS zone transfer option instructs Nessus to obtain all
available DNS information on any domain names or subdomain names
referred to in the Target(s): box. Unless your DNS servers are
configured to deny zone-transfer requests from unknown hosts, this will
result in all hosts registered in your local DNS being scanned, too.
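Before enabling that option, it's worth checking whether your name servers permit zone transfers to arbitrary hosts at all; dig makes this easy (substitute your own domain and name server for the placeholders shown):

# Request a full zone transfer; a "Transfer failed" response means it's denied
dig @ns1.dogpeople.org dogpeople.org axfr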



Finally, one last screen before we begin our scan
(we're skipping KB, which is out of the scope of
this introduction to Nessus): User (Figure 3-14). In
this screen, we can fine-tune the targets we specified in the Target
selection screen.




Figure 3-14. User screen





The specifications you type in this text box are called
rules, and they follow a simple format:
accept address,
deny address, or
default [accept | reject]. The
rules listed in Figure 3-14 mean
"Don't scan 10.193.133.60, but scan
everything else specified in the Target screen."

Finally, the payoff for all our careful scan setup: click the
"Start the scan" button at the
bottom of the screen. The scan's length will vary,
depending mainly on how many hosts you're scanning
and how many tests you've enabled. The end result? A
report such as that shown earlier in Figure 3-8.



From the Report window, you can save the report to a file, besides
viewing the report and drilling down into its various details.
Supported report file formats include XML, HTML, ASCII,
LaTeX, and, of
course, a proprietary Nessus Report format, NBE (which you should use
for reports you wish to view again within Nessus).



Read this report carefully. Be sure to expand all + boxes and fix the
things Nessus turns up. Nessus can find problems and can even suggest
solutions, but it won't fix things for you. Also,
Nessus won't necessarily find everything wrong with
your system.



Returning to our woofgang example (see Figure 3-8), Nessus has determined that
woofgang may be running a vulnerable version of
OpenSSH! Even after all the things we've done so far
to harden this host, we may still have a major vulnerability to take
care of. I say "may" because, as
the Nessus report notes, Nessus made this inference based on
sshd's greeting banner, not by
attempting to exploit the vulnerabilities of this version of SSH.
Because some distributions routinely patch software packages without
incrementing their version numbers, sshd on
woofgang may or may not be vulnerable.
It's up to me, at this point, to make sure that
woofgang is indeed fully up to date with
security patches before putting this system into
production.

3.1.11. Understanding and Using Available Security Features





This corollary to the Principle of Least Privilege is probably one of
the most obvious but least observed. Since many
applications' security features
aren't enabled by default (running as an
unprivileged user, running in a chroot jail, etc.), those features
tend not to get enabled, period. Call it laziness or call it a
logical aversion to fixing what doesn't seem to be
broken, but many people tinker with an application only enough to get
it working, indefinitely postponing that crucial next step of
securing it, too.



This is especially easy to justify with a server
that's supposedly protected by a firewall and maybe
even by local packet filters: it's covered, right?
Maybe, but maybe not. Firewalls and packet filters protect against
certain types of network attacks (hopefully, most of them), but they
can't protect you against vulnerabilities in the
applications that firewalls/filters still allow.



As we saw with woofgang, the server we hardened
with iptables and then scanned with nmap and Nessus, it takes only
one vulnerable application (OpenSSH, in this case) to endanger a
system. It's therefore imperative that a variety of
security strategies and tools are employed. This is called
Defense in
Depth, and it's one of the most important concepts
in information security. In short, if an attacker breaks through one
defense, she'll still have a few more to go through
before causing a lot of damage.




3.1.12. Documenting Bastion Hosts' Configurations





Finally, document the steps you take in configuring and hardening
your bastion hosts. Maintaining
external documentation of this kind serves three important functions.
First, it saves time when building subsequent, similar systems.
Second, it helps you to rebuild the system quickly in the event of a
hard-drive crash, system compromise, or any other event requiring a
"bare-metal recovery." Third, good documentation can also be used to disseminate important
information beyond one key person's head. (Even if
you work alone, it can keep key information from being lost
altogether, should it get misplaced somewhere in that head!) Just be
sure to keep this documentation up to date: obsolete documentation
can be almost as dangerous as no documentation at all.



