
Linux.com

Feature: Security

Securing a fresh Linux install, part 1

By Mike Peters on April 20, 2004 (8:00:00 AM)


Most Linux distros provide a wide variety of server applications, and many network-aware apps are enabled by default when you install the operating system. Before you put your new Linux machine online, there are a number of steps you should take to make your network secure. Use these tips every time you perform a fresh install; none of these steps will help to secure a machine that has already been compromised.

Before you install anything on your machine, check the Web site of the distro you plan to use and download any security patches or updates that have been released since the version you are going to install was published. As soon as the install process is finished, apply the patches and updates you found.
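
How you fetch those updates depends on your distro. As a rough sketch, on an apt-based system such as Debian the step might look like this (Red Hat and SUSE users would use their vendor's own update tool instead):

# Refresh the package lists, then install any pending updates
apt-get update
apt-get upgrade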

Choose sensible passwords

Many users, if left to their own devices, choose passwords that are easy to remember -- and just as easy for a cracker to guess. Any password can be cracked given enough time and resources, but a safe password is one which would take an unreasonably long time to crack, while not being impossible for the user to remember.

One of the first steps in securing any system is to ensure that all users have safe passwords. Passwords should be at least eight characters in length and contain a mixture of upper- and lower-case letters, numbers, and special characters. A common way of choosing a safe password is to think of a phrase of eight or more words -- for example 'There was an old woman who lived in a shoe.' Take the first letter of each word in the phrase -- 'twaowwlias.' Now replace some of the letters with numbers, mix the case, and add some special characters to get your password -- 'tW40ww!iAS.' This password is much more difficult to crack than your dog's name, but not too much more difficult to remember.

su and sudo

You've probably been told a hundred times since you started using Linux that you shouldn't log on as root; instead, you should log on as a normal user and use su to gain root privileges for specific tasks. You can take this a step further and restrict which users can actually use the su command to gain root privileges. In the file /etc/suauth add the line:

root:ALL EXCEPT GROUP wheel:DENY

This line requires that a user be a member of the wheel group before he can su to root. Check man suauth for more options. You can achieve the same effect using PAM, as we'll discuss later.
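
To grant a user that right, add him to the wheel group. For example, for a hypothetical user named alice:

# Add the (hypothetical) user alice to the wheel group
gpasswd -a alice wheel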

It is a good idea to make sure that all su activity is logged. Logging to syslog is normally enabled by default, but to be sure, check that the line:

SYSLOG_SU_ENAB          yes

is uncommented in /etc/login.defs. You can also log su activity to its own file by uncommenting the line:

#SULOG_FILE     /var/log/sulog

You should never need to give out the root password of your servers to users. If you really need to give a user or users access to something that requires root privileges, you should use sudo instead. Sudo allows certain users to perform certain tasks with root privileges without needing to know the root password.

You can configure sudo using the visudo command, which opens the /etc/sudoers configuration file in a special vi session. sudo's man page contains plenty of details on configuring sudo, with examples.
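
As a small, hypothetical sketch, entries like the following would let the user alice restart Apache and let members of an admins group run any command as root (the user, group, and path are placeholders, not a recommendation):

# Always edit /etc/sudoers with visudo, never directly
alice   ALL = /usr/sbin/apachectl restart
%admins ALL = (ALL) ALL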

Restrict the number of running services

One of the most common errors made by Linux admins is having unnecessary services running. The more services you have running, the greater the risk of your box being broken into. If you're not running a service, it can't be exploited, so you should run only services you really need.

To see a list of the services currently running, issue the command # ps -aux | less to show all running processes, and # netstat -atu to see a list of services and the ports that they are listening to. Examine the output of these commands and decide which services you really need. If you don't know whether you need a service or not, the simple answer is, you don't. It is better to be aggressive when deciding what to disable; if you later find something you need is missing, you can always re-enable it.
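
For reference, a slightly expanded version of those commands (the extra netstat flags limit the output to listening sockets and, when run as root, show which program owns each one):

# Show every running process
ps aux | less

# Show listening TCP and UDP sockets and the program behind each one
netstat -tulp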

Many network services are initiated by the Internet superserver daemon -- inetd for short. inetd reads its configuration from /etc/inetd.conf. You can prevent services from being started by commenting out (placing a # at the beginning of) the lines in inetd.conf:

# echo          stream  tcp     nowait  root    internal

To begin with, you should comment out all of the services in this file. If you later find you need something, you can enable it.
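
After editing inetd.conf, make inetd re-read its configuration -- sending it a HUP signal is the usual way, though a reboot (see below) works too:

# Tell inetd to re-read /etc/inetd.conf
killall -HUP inetd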

You may also consider replacing inetd with xinetd, which is a more recent and secure alternative.

Your distro's boot scripts are also responsible for initiating services at system startup. Exactly where these scripts are depends upon the distro you use, but you should check them thoroughly to see what services are being started and disable the ones you don't need. In Red Hat you can use the chkconfig utility. Running chkconfig --list shows you what daemons are started at which run level. You can use the --del option to turn off services. So, for example, to disable routed, you would type # chkconfig --del routed.
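
Put together, a typical Red Hat session might look something like this (chkconfig <service> off is a gentler alternative to --del: it disables the service at its default run levels without removing its links):

# List every service chkconfig manages and the run levels it starts in
chkconfig --list

# Disable routed at its default run levels, or remove its start/stop links entirely
chkconfig routed off
chkconfig --del routed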

Not all services are chkconfig-friendly. You must disable such services by removing the symlinks in the directories corresponding to the different run levels -- e.g. /etc/rc.d/rc3.d/S50inet. It's enough to remove just the links; keep the actual files in case you need to enable something later.
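
For example, using the path above, removing the link stops the service from starting in run level 3 while the script it points to stays put:

# Remove only the run-level symlink, not the init script it points to
rm /etc/rc.d/rc3.d/S50inet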

In the case of Slackware you should check the scripts under /etc/rc.d, and either comment out the startup commands of services you don't need, or alternatively, remove the executable bit from the appropriate script with a command such as # chmod a-x /etc/rc.d/rc.sendmail.

Once you are sure you have disabled everything you don't need, reboot -- it's the best way to make sure that you really have disabled everything you think you have. It's no good if all the good work you've put in so far is undone the next time your machine reboots.

Once you've restarted, run the ps and netstat commands I gave earlier again to check what's running. Repeat as necessary until you have the bare minimum of services running.

TCP Wrappers

TCP Wrappers is not a daemon itself but an access control wrapper (tcpd) that uses two files, /etc/hosts.allow and /etc/hosts.deny, to decide which hosts and domains can connect to the services run by inetd. Most default installations leave these files blank. The first thing to do with TCP Wrappers is set your default policy to "deny." The best policy in security is to lock all the doors to begin with, and then and only then to open the ones you need. Edit /etc/hosts.deny and add the line:

ALL:ALL

This denies access to all services from all hosts. If you want to be notified by mail of any failed connection attempts, you can modify the above to read:

ALL:ALL:/bin/mail -s "%s connection attempt from %c" mike@localhost

Having set the default policy to deny all access, you can enable access for individual hosts to certain services by editing /etc/hosts.allow. For example, the line ALL:127.0.0.1 allows access for 127.0.0.1 to all services. Similarly, ipop3d:192.168.1.1 allows 192.168.1.1 access to pop3. You can specify a range of addresses with a line like ipop3d:192.168.1., or use multiple addresses separated by a comma: ipop3d:192.168.1.1, 192.168.1.4. You can use domain names rather than IP addresses, but this could really slow things down if you experience a DNS failure. Where possible, stick to using IP addresses.
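
Putting those pieces together, a minimal /etc/hosts.allow might look something like this (the service names and addresses are placeholders for your own):

# /etc/hosts.allow -- the local machine gets everything; LAN hosts get ssh and pop3
ALL:    127.0.0.1
sshd:   192.168.1.
ipop3d: 192.168.1.1, 192.168.1.4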

Now that you've secured your services, it's time to look at files. That's what we'll tackle next time.

Mike Peters is a freelance consultant and programmer and long-time Linux user.


Comments on Securing a fresh Linux install, part 1


This is one of the weaknesses of linux

Posted by: Anonymous Coward on April 21, 2004 01:27 AM
Turning services off by default is not good enough, despite the protestations of members of organized crime families and others.

All Linux distros, Debian included, need to have a working firewall installed by default. Someone doesn't want it? apt-get remove it. Not installing a firewall in any distro is inexcusable. Turning services off is not the answer. All Linux installs, including most if not all minimal installs, have one or several ports listening on the public side. One minimal install I did had exim listening on the ethernet interface, along with several other services.

As has been evident over the last few years, there have been reports of installations being cracked even before the initial installation was finished, or during the first update. I have installed minimal distributions whose ISO was less than a month old when the distributor created it, and they still had to be patched through apt-get for security updates. But guess what? The Debian servers were down because the master servers had been cracked. This by itself is reason enough to install, and activate, a simple (or simple-ish) front end to iptables by default. For everyone.

Thanks to the cracking of the Debian servers, I gave up on attaching a computer directly to a public IP address, and now do so through an appliance that performs NAT and includes a firewall of its own. This shouldn't be necessary, but hacking iptables isn't easy, even with the iptables scripts out there. I know, because I've tried. Instead, I purchased another NAT appliance, this time with an included firewall, to place my future mail/DNS/web server behind. This kind of appliance (even without a firewall) has allowed me to connect my LAN to the Internet without ever worrying about being hacked, as long as I kept it locked down. And I have never been hacked (as far as I'm aware, and as far as the rootkit checkers and other tools tell me) in the three years I've been using such an appliance. But again, this shouldn't be necessary. And buying an additional one for each of my public IP addresses doesn't appeal to me either, but they are getting cheaper (and their reliability and stability are a plus), so this shouldn't be such a big deal anymore.

The problem becomes: if I install Debian on a laptop, or on someone else's computer, what do I do then? Or if using dialup?

A simple front end to iptables needs to be installed with every distro, and activated by default. A member of a crime family read my post on Slashdot about this, and after attacking my post, said he would recommend to Debian that exim only listen on the loopback interface by default. This doesn't solve the problem of other open ports, or the security servers going down for any reason, or the distro not being up-to-the-minute fresh and requiring a security update, during which one can still get hacked.

Install a front end to iptables that is comprehensible, and activate it by default. Instead of arguing why not, just do it. There is more than one reason why, and more security is always better, regardless of the religious beliefs underlying a particular distro.


Re:This is one of the weaknesses of linux

Posted by: Anonymous Coward on April 21, 2004 06:29 PM
I agree with you on most of the points you mention, but what guarantees do you get by installing someone else's work to protect your machines because you don't want to go to the trouble of learning how to configure iptables yourself?


Then Linux is not ready for the desktop

Posted by: Anonymous Coward on April 22, 2004 01:15 AM
If I have to learn iptables, instead of using a simple front end to it, then Linux is not ready for the desktop, nor will it be, until this issue is resolved, and your attitude changes.

Learning iptables is no joke. iptables is not for the newbie, nor for someone with even a year or more of desktop experience. For someone in the computer field, learning iptables may be an option, or even mandatory, but then that limits Linux to that category of professional.

Is Linux ready for the desktop? Or not? What is the purpose of the new installer if it is not targeted at newbies/non-computer professionals? Installing Linux is an easy floppy disk install for professionals. So why the new installer?

So have you decided? Is it ready for the desktop, or not? Is Debian starting to target, and move toward gaining, installed-base share? With everything I'm seeing, it is. Can you even admit this?

Securing Debian is easy for a professional. And keeping it secure, even with services that listen on the ethernet port by default, and even with the security servers going down, is still easy for a professional. Doing this as a newbie or non-professional is not possible until an easy-to-configure front end to iptables is included, installed, and activated by default on all installs, including Debian's new installer for Sarge.

Telling a newbie, or someone who is not a sysadmin by profession, to learn iptables is the same as telling a non-professional desktop user to go RTFM instead of helping that person learn how to find the information for themselves (pointing out that they can Google the error message, reminding them about man pages, info pages, the documentation project, etc.). I'll bet you've done that more than once in your career.


Re:Then Linux is not ready for the desktop

Posted by: Anonymous Coward on April 22, 2004 05:57 PM
Is this article in the "newbie" category? What I mean by "learning iptables" is not "read the fine manual"; learning iptables is a good thing if you don't want to depend on someone else's work. Or should someone not install a firewall on Windows, for example, just because Microsoft is "big"? Or if some Linux firewall vendor says "we kill pirates," do you trust them blindly? Laziness is the worst enemy of security. Look at the Stanford(?) University case. They suffered several attacks on their high-speed network, where you find a lot of Unix machines (Solaris and Linux), and the servers were compromised because the sysadmins hadn't applied the latest updates. That's my point.


Debian ready for the desktop? Or not?

Posted by: Anonymous Coward on April 23, 2004 09:04 AM
Killing pirates? Laziness?

Stanford University was hacked because of local exploits against kernels that are still vulnerable even when only a few months old. Patching kernels on production systems in small organizations is one thing. Patching every server and every desktop in an organization that may have more computers and more IP addresses than IBM or Microsoft is quite another. But it is still their fault. Laziness is not the reason, or else every organization, in every profession, that doesn't perform every day-to-day operation "by the book" would be lazy. And yet nearly all organizations do not do things "by the book", instead saving "by the book" operations for work slowdowns during labor/compensation disputes.

Or should someone not install a firewall on Windows, for example, just because Microsoft is "big"?


The above makes no sense, at least to me. Perhaps you are attempting to use an example, and the example falls short due to language differences? No problem.

Or if some Linux firewall vendor says "we kill pirates," do you trust them blindly?


No, this is not the case. In context with what I wrote, and with your previous sentence (Windows firewall), my example had to do with Debian, not Linux in general (as someone else posted). And with Zonealarm, and possibly BlackIce Defender, and possibly other firewalls for Windows, not Microsoft firewalls.

I'm fully aware of other distributions including front ends to iptables. Red Hat has (or had, last time I checked) one, SuSE has one, and some other distributions have them. But the smaller installs of Debian, and possibly the new Sarge installer, are not installing a front end to iptables and activating it by default (or at least offering to do so prior to setting up the ethernet card). This leaves open the possibility of being hacked during the first update/upgrade of security packages, which has happened, as has been reported on the Internet previously. Both Windows and Linux distros have had such incidents reported.

The bigger issue here is the insistence, at least by some, that "turning unneeded services off" is the only acceptable way of securing a Debian install. There are a number of reasons why this attitude is a problem. One reason is that the Debian servers were cracked, and were unavailable even for security updates, for a while. Patch the vulnerabilities yourself (with security patch diffs) while the Debian servers are down, instead of an apt-get update/upgrade, you say? Tell new users to do that. Then watch them dump Debian and find something else. Good riddance? Then Debian isn't ready for the desktop, and the project shouldn't bother completing the Sarge installer, because anyone with experience can install Debian with a floppy disk. So, which is it? Ready for the desktop, or not?

As for crackers, a more concerted attack would be to take out the Debian servers again just after a new remote vulnerability became known, and then use the downtime to attack whatever servers they were interested in. This is a non-issue for services which aren't exposed to the Internet, but some services are exposed by default in Debian, including Exim, regardless of whether the box is running a public mail server or not. Being aware of this issue will help me secure an installation by disabling Exim, but it would be a non-issue if the equivalent of ZoneAlarm were installed and I were able to block the mail server's ports, both incoming and outgoing, along with all other ports except port 80 and the other ports that I need and that I know don't have current security vulnerabilities. This is a non-issue with apt-get update/upgrade, but it is an issue when the Debian servers are unavailable for security patches, during initial installations before security patches have been applied, and during the actual security upgrades, while the upgrade is still happening. The whole point is, without being an expert, I can cite a few examples where the lack of a simple incoming/outgoing firewall (front end) is a problem. And the point is that other, more expert individuals believe that because something works for them (so far), others have no excuse, no cause, no reason to use a simple front end for iptables. Don't like it? Go RTFM! Go find another distro! Go learn iptables! Go back to Windows!

Ready for the desktop? Or not?


Re:Debian ready for the desktop? Or not?

Posted by: Anonymous Coward on April 23, 2004 11:17 PM
Sorry for the typos and/or grammar. But try writing your posts in Portuguese and we can discuss this subject later ;-).

Back on topic. My posts are not intended to start a flame war or an OS war. You blame Debian for taking down the servers -- what would your decision have been? You are taking the article from the wrong angle (and other articles do the same). It's not the OS, or the desktop, or whatever, that is not ready; it's the user. Despite the myriad of firewall software around (for Windows), people don't use it. The same goes for updates. I told you I agree with you that the Debian team must care about the issue you pointed out in your posts. But their decision (turning off the servers) seemed better to me than the one Microsoft made concerning the critical ASN.1 bug (leaving users without a fix for six months), or the recent TCP problem, which was probably known by almost everyone in the field but was only fixed in their products after someone posted an advisory. I don't consider myself an expert; the fact is, I consider learning something more useful than expecting things to just happen. And you must agree that telling a newcomer that he must learn iptables, or install a firewall (on Windows), has the same results...


Re:Then Linux is not ready for the desktop

Posted by: Anonymous Coward on April 22, 2004 06:12 PM
Here you have a good starting point to learn iptables :P

http://www.davidcoulson.net/writing.php


Linux distros *have* ready firewalls

Posted by: Variola Cola on April 22, 2004 05:13 PM
Linux distros already have firewalls. Try RedHat, Mandrake, SuSe and others. It's point and click.


That said, firewalls are highly overrated / overhyped.


Re:This is one of the weaknesses of linux

Posted by: Anonymous Coward on April 22, 2004 11:51 PM
OMG!!! WTF?? Youve nevar bin haX0red??? You sure must be the touhgest man out there!! Peace out!!! oMG!!


'at' trick with ipchains / iptables

Posted by: Variola Cola on April 22, 2004 06:00 PM
To avoid getting locked out of a remote machine while you experiment with the firewall, you can use at (http://www.opengroup.org/onlinepubs/7908799/xcu/at.html) to automatically restore the last working firewall.

Here's an example:

/sbin/iptables-save > /home/foo/works.iptables
echo "/sbin/iptables-restore < /home/foo/works.iptables" | at now +4 minutes

Now you can try your experiment, and if you get locked out, you have less than a 4-minute wait. If it works, then just find the at job with atq and kill it with atrm.

