Monday, January 24, 2005

Yet Another Web Resource

Thanks to /., I have recently discovered a new web resource for tech news and information: Tech-Blog.org. It appears to focus on the bigger players in the tech market.

Thursday, January 20, 2005

What exactly does a search engine do for me?

How many of your friends really understand what a search engine does? I have found that most of the people I associate with do not realize that a search engine does not actually search the Internet... so what does it search then?...

Even though the popular search engine Google has indexed over 8,000,000,000 pages that you can search when you use it, you are not searching the entire Internet... You are searching the Internet as Google sees it. What does this really mean? Google does not want to keep track of all of the trash and garbage that is on the Internet; it wants to keep track of the data that it thinks people want to know about. Google has developed a very complex algorithm that makes an automated decision about each page it might index, based upon pre-determined and proprietary parameters. One of the greatest challenges for advertisers and marketers is figuring out exactly what a search engine is looking for so that they show up in the results! If you merely create a web page and then search for it on a search engine an hour later, you will not find it there! Maybe an example will help...

Example:
A search engine is similar to a person whose occupation requires them to remember a lot of information... When you need to know something, you go to this person because they usually have the answer, or something close to what you want. Well, this person has to obtain that information somehow, and has to prioritize what they learn and retain. Google is a very intelligent person capable of retaining a lot of information (it does run on over 100,000 Linux machines), which gives you a very good chance of finding the data that you want. The only problem is that, like most people, Google does not know everything... and probably never will...

If you really had the capacity to search the entire Internet in the time it takes your favorite search engine to return results, the world you live in would have massive amounts of processing power and more bandwidth than you would know what to do with...

Keeping this in mind, it is very good for all of us that there is more than one search engine... We would not want one company to define the Internet that we have the capacity to search, just as we would not want one company deciding what we can do with our computers... (unless you use Windows, in which case you have already handed the keys to Microsoft).

Tuesday, January 18, 2005

Script to Automate Configuration of Tripwire

In a previous post, I gave a general overview of the process used to configure Tripwire on a Linux system. The most time-consuming part of the configuration is setting up the twpol.txt file. The following Perl script aids in that task by going through the file line by line and checking whether each referenced file exists on your system. If the file is on your system, the line is left alone; if not, the line is commented out so that you will not get an error when you scan your system with Tripwire.

Script:

#!/usr/bin/perl
#
# Author: Joshua M. Miller
# Date: 08/26/2004
#
# Purpose: To automate the configuration of the tripwire policies.
#

use strict ;

my $file = "/etc/tripwire/twpol.txt" ;
my $new_file = "/etc/tripwire/new_twpol.txt" ;

print "Opening $file\n\n" ;

open INFILE, $file or die "Can't open input file : $!" ;
open OUTFILE, ">$new_file" or die "Can't open output file: $!" ;

print "Processing the current tripwire config file...\n" ;

while (<INFILE>) {

    # If the line begins with whitespace followed by a /, it names a file
    # that Tripwire will check, so verify that the file exists on this system;
    # if it does not, comment the line out
    if (m{^\s+/\w}) {

        # Take the file's path from the line (the leading whitespace leaves
        # an empty first field, so the path is in element 1)
        my @tst_file = split(/\s+/, $_) ;

        # Check to see if the file exists
        unless ( -e $tst_file[1] ) {
            $_ = "#" . $_ ;
        }

        # Debug: print results
        print "Result: $tst_file[1]\n" ;

        # Test - print this section to the outfile
        #print OUTFILE "$tst_file[1]\n" ;
    }

    # Write the line to the new file
    print OUTFILE $_ ;
}

close INFILE ;
close OUTFILE ;

The resulting file will be in the /etc/tripwire directory and will be named new_twpol.txt. Back up the old copy, rename the new file to twpol.txt, and then run the twinstall.sh script.
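For example, the swap might look like this (file names as described above, backup name is my own choice):

cd /etc/tripwire
cp twpol.txt twpol.txt.orig
mv new_twpol.txt twpol.txt
sh twinstall.sh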

Keep in mind that this script will only remove entries; if any files which are critical to the operation of the system are added later, they should be added to the Tripwire policy through the use of the twadmin tool.
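After editing the text policy to add new entries, the signed tw.pol can be regenerated with something like the following (a sketch; twadmin will prompt for your site pass-phrase):

twadmin --create-polfile /etc/tripwire/twpol.txt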

Monday, January 17, 2005

Configuring Tripwire - Just Another Host-Based IDS

Although Intrusion Detection Systems (IDSs) are becoming less popular in the media with the emergence of Intrusion Prevention Systems (IPSs), they are still widely used in the IT Security industry and any network or security administrator would benefit from knowing how to configure and use them. In this article, I will explain how to configure Tripwire 2.3.1.2 on Linux.

Tripwire is a valuable tool because it can generate a database full of MD5 checksums of all important and system files on your system (specified by the administrator). Tripwire can then scan your system periodically or on demand to verify the integrity of those files -- in other words, Tripwire is an integrity checker.

For this article, I am using a Dell Inspiron 5100 laptop running Gentoo Linux, updated with all of the latest packages for the system. I am going to perform a fresh install of Tripwire through the Portage system.

tertiary_linux ~ # emerge tripwire -vp

These are the packages that I would merge, in order:

Calculating dependencies ...done!
[ebuild N ] app-admin/tripwire-2.3.1.2-r2 -debug +ssl 2,201 kB

Total size of downloads: 2,201 kB
tertiary_linux ~ #

tertiary_linux ~ # emerge tripwire -v
Calculating dependencies ...done!
>>> emerge (1 of 1) app-admin/tripwire-2.3.1.2-r2 to /
>>> Downloading http://distfiles.gentoo.org/distfiles/tripwire-2.3.1-2-pherman-portability-0.9.diff.bz2
--08:25:02-- http://distfiles.gentoo.org/distfiles/tripwire-2.3.1-2-pherman-portability-0.9.diff.bz2
=> `/usr/portage/distfiles/tripwire-2.3.1-2-pherman-portability-0.9.diff.bz2'
Resolving distfiles.gentoo.org... 156.56.247.195, 216.165.129.135, 140.211.166.134
Connecting to distfiles.gentoo.org[156.56.247.195]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 739,663 [text/plain]

...

Now, Tripwire has been installed and requires configuration. The configuration files are located in the /etc/tripwire directory, as you would expect. An overview of the configuration files follows:

twpol.txt: a file which lists all of the files that Tripwire will check, as well as their criticality levels

twcfg.txt: miscellaneous configuration settings related to key locations and mail setup

twinstall.sh: installation script which creates the site and local keys

One configuration file, twpol.txt, takes quite a bit of attention to generate properly. It is necessary to go through this file (which was designed for Red Hat Linux) and comment out the files that you do not have or do not want to protect, as well as add the files which you do want to protect. The next step is to configure twcfg.txt to your liking. Once you have completed these steps, run the twinstall.sh script. It is critical that you remember the two pass-phrases that you typed in so that you can access or modify the system configuration at a later time.
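For reference, entries in twpol.txt are grouped into rules that look roughly like this fragment (the rule name and the variables follow the stock Red Hat policy's conventions):

(
  rulename = "Critical system boot files",
  severity = $(SIG_HI)
)
{
  /boot -> $(SEC_CRIT) ;
}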

After the twinstall.sh script has been run, you should move or delete the plain-text configuration files twpol.txt and twcfg.txt, since they would provide information to anyone who may have compromised your system; they will have been replaced by encrypted versions. The twadmin tool included with Tripwire provides a means to modify or re-generate text versions of these files for future reference (pass-phrase required). The configuration of Tripwire is nearly done.
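If you ever need the text versions again, twadmin can print them back out of the signed files; a minimal sketch (redirect the output wherever you like):

twadmin --print-polfile > /etc/tripwire/twpol.txt
twadmin --print-cfgfile > /etc/tripwire/twcfg.txt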

The last step in the configuration is to generate the database which houses the MD5 checksums of all of the critical files listed in the twpol.txt configuration file (now stored in the encrypted tw.pol). This database will serve as a baseline against which to check your files when monitoring for suspicious behavior. It is generated with the following command:

tripwire --init

You will then be prompted for your local passphrase. Tripwire will then generate the baseline database. Once this database has been generated, you can check your system's integrity with the following command:

tripwire --check

The output from this command will be similar to the following; it will also be stored as a report in /var/lib/tripwire/reports, or wherever you specified in the twcfg.txt file before running the twinstall.sh script:

===============================================================================
Rule Summary:
===============================================================================

-------------------------------------------------------------------------------
Section: Unix File System
-------------------------------------------------------------------------------

Rule Name                                   Severity Level   Added   Removed  Modified
---------                                   --------------   -----   -------  --------
Invariant Directories                       66               0       0        0
Temporary directories                       33               0       0        0
* Tripwire Data Files                       100              1       0        0
Critical devices                            100              0       0        0
User binaries                               66               0       0        0
Tripwire Binaries                           100              0       0        0
Libraries                                   66               0       0        0
Operating System Utilities                  100              0       0        0
File System and Disk Administraton Programs
                                            100              0       0        0
Kernel Administration Programs              100              0       0        0
Networking Programs                         100              0       0        0
System Administration Programs              100              0       0        0
Hardware and Device Control Programs
                                            100              0       0        0
System Information Programs                 100              0       0        0
Application Information Programs            100              0       0        0
(/sbin/genksyms)
Shell Related Programs                      100              0       0        0
(/sbin/getkey)
Critical Utility Sym-Links                  100              0       0        0
Critical system boot files                  100              0       0        0
* System boot changes                       100              1       1        29
* OS executables and libraries              100              0       0        2
Security Control                            100              0       0        0
Login Scripts                               100              0       0        0
* Critical configuration files              100              0       0        1
Shell Binaries                              100              0       0        0
* Root config files                         100              1       1        1

Total objects scanned: 236798
Total violations found: 38

There will be more information in the full report, but the summary above shows how Tripwire checks all of your specified files and directories and tells you when they have changed.

When you install Tripwire, it will create a cron job that runs daily and emails the resulting report to root. This allows you to review the reports without having to run the filesystem check or print the report from the command line yourself. Another option is to have the cron job also send the report to the printer.
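That job boils down to a crontab entry along these lines (a hypothetical sketch; the actual script installed by your distribution will differ):

0 3 * * * /usr/sbin/tripwire --check 2>&1 | mail -s "Daily Tripwire report" root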

An important part of the Tripwire IDS is that the baseline database remain untouched by any attacker, which requires that it be kept on a read-only medium. One way to achieve this is to burn the database to a CD-ROM and have Tripwire run against it daily. Be sure to create a backup and physically secure the CD-ROM so that it cannot be tampered with.

There are alternatives to Tripwire for performing integrity checks on your filesystem. One of those alternatives is AIDE, which is under active development and has not yet reached a 1.0 release. Some people have used the rpm program on Red Hat systems as an integrity checker, and there is always the option of creating your own application that takes an MD5 of all of your important files and verifies them periodically.
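A minimal sketch of that do-it-yourself approach, assuming md5sum is available and that the listed directories are the ones you care about:

# Record a baseline of checksums (run once, store the file on read-only media)
find /bin /sbin /usr/bin -type f -exec md5sum {} \; > /root/baseline.md5

# Later, print only the files whose checksums no longer match the baseline
md5sum -c /root/baseline.md5 2>/dev/null | grep -v ': OK$'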

Tripwire is a very valuable tool which can be used in a variety of scenarios. The experienced system administrator will be able to leverage it to find out what has been tampered with or altered on a system of questionable integrity. In a follow-up post I will provide a Perl script which I have created to ease the configuration of the twpol.txt configuration file and save a substantial amount of time.

Sunday, January 16, 2005

Comparing Software Security to Physical Security

On Crypto.com, there is a very interesting article titled "Safecracking for the Computer Scientist" which goes into great detail on the construction, strengths, and weaknesses of physical locks and enclosures, and then compares them to software development. The article is well written and makes an interesting point: since physical security devices are constructed not with an expectation of perfection but with an expectation of imperfection, software should be developed in the same manner, with allowances for such imperfections, so that it withstands security vulnerabilities more effectively. This is a topic worthy of consideration and debate.

Saturday, January 15, 2005

Should Virus Writing be a Crime?

I just read an article on News.com (thanks to Slashdot.org) that is an interview with an ex-virus writer about his new job and some of the issues that he faces in it. It seems to me that most people focus on how bad viruses are, and on how anyone who might write a virus is evil and should be banned from the IT industry for life... I think that this article is well written and shows people that writing viruses does not make you a criminal.

Writing viruses is similar to penetration testing, or figuring out how a system might be vulnerable. Unless the person writing the virus actually releases it or causes some sort of damage, they have not hurt anyone; rather, they have provided knowledge to the community so that improvements may be made to existing systems and security. One stipulation to my last statement is that the virus author should release the source code so that it may be studied and serve as a learning tool.

Link to article.

Wednesday, January 12, 2005

Google Hacking: Essential Skill

Search engines are extremely useful tools. How much time do you spend each day searching the Internet? It is worth knowing how to use your favorite search engine well and how to get the data you want; if you spend some time learning it, you will save yourself time in the long haul. For all of those who use Google to search, there is a great site devoted to finding useful and revealing information here. If you register, you can download the "Google hacking guide", which outlines many tips and tricks to aid in your search for valuable information.

With the following search, you will return sites that probably contain sensitive or private information:

allinurl: admin mdb

One option that I consider invaluable is the site: option, which restricts the search to the specified domain. If I wanted to search my blog, I could use the following string:

site:itsecureadmin.blogspot.com admin

Although the above string does not return anything, play with that option while searching a site that you are familiar with. You can also specify the extension or filetype of the file you are searching for.
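As a hypothetical example combining these operators (the domain and keyword are made up), the following query would return only PDF files from example.com that mention "budget":

site:example.com filetype:pdf budget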

There are many, many options to choose from, and there is a Google Hacking Database maintained on johnny's web site.



Disclaimer: All information contained in this post is for educational purposes and should serve as a guide to protecting your own server or software, in addition to aiding those who search for knowledge without the intent to cause harm or damage to others.

Tuesday, January 11, 2005

Great Reference Website --

I have recently stumbled upon a jewel of a website. For anyone who loves technical books, check out http://www.techbooksforfree.com. This website has some great books and serves as an index for many, many technical books that are available online.

Monday, January 10, 2005

Setting up Samba to recognize a Linux group as a Domain Administrator group

When using Samba as a Primary Domain Controller (PDC), it is important to map Linux groups to Windows groups so that you do not have to use the root user to perform Domain Administrator functions. The tool provided for this task is the groupmap subcommand of the net command. Groupmap maps any Linux group to any Windows group, which allows the systems administrator to specify one group to function as Domain Admins and another to act as Domain Users. In the following commands, I'll demonstrate how I map the Linux group smb_users to the Windows Domain Users group and the Linux group smb_admins to the Windows Domain Admins group. I am assuming that the Linux groups have already been created.

See what the current mapping status is:

net groupmap list

Map the Domain Admins group:

net groupmap set "Domain Admins" "smb_admins"

Map the Domain Users group:

net groupmap set "Domain Users" "smb_users"

Verify that the mapping worked:

net groupmap list

Now, you must restart Samba for the changes to take effect. Members of the Domain Admins group will then be able to add machines to the domain and administer machines on the domain.
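On most systems, restarting Samba looks something like this (the init script name and path vary by distribution, so treat this as an assumption about your setup):

/etc/init.d/samba restart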

Monday, January 03, 2005

Script to parse IPTables Logs

In a previous post, I demonstrated how to set up IPTables to log incoming traffic. I have created the following script to parse those logs for network traffic, returning IP addresses and the associated ports:

(Please see previous post for logging configuration.)


#!/bin/bash
#
# Program: ipports
# Purpose: To list all external IPs that have been logged by the firewall from
#          the /var/log/messages file and the associated ports that the IP was
#          attempting to connect to.
#
# Author: Josh Miller
# Date: 08/26/2004

LOGFILE='/var/log/messages'
OUTFILE='ipports.out'
TMP='ipports.tmp'

# Default to external logs
PARAM1='EXTERNAL'

echo
echo

# Determine which type of logs to parse and report from if user input present
if [ -n "$1" ] ; then
    if [ "$1" == "-e" ] ; then
        PARAM1='EXTERNAL'
        echo "Parsing $LOGFILE for external IP addresses/ports..."
    elif [ "$1" == "-i" ] ; then
        PARAM1='INTERNAL'
        echo "Parsing $LOGFILE for internal IP addresses/ports..."
    elif [ "$1" == "-o" ] ; then
        PARAM1='SRC'    # Matches every logged packet
        echo "Parsing $LOGFILE for all IP traffic/ports..."
    elif [ "$1" == "-h" ] ; then
        echo " Usage: $0 [ -e | -i | -o | -h ]"
        echo
        echo "   -e: parse logs for external IP sources"
        echo "   -i: parse logs for internal IP sources"
        echo "   -o: parse logs for all IP sources"
        echo "   -h: this help message"
        echo
        exit
    fi
else
    echo "Parsing $LOGFILE for external IP addresses/ports..."
fi

# Pull the source-address and destination-port fields from each matching log
# line; every unique line yields three whitespace-separated tokens
COUNTER=0
for i in `grep "$PARAM1" "$LOGFILE" | awk '{print $10, $19, $20}' | sort -u` ; do

    # Identify each token by its prefix and strip it down to the value
    if echo "$i" | egrep '^DPT=' > /dev/null ; then
        DPORT=`echo "$i" | cut -c5-9`
    elif echo "$i" | egrep '^SRC=' > /dev/null ; then
        SRCIP=`echo "$i" | cut -c5-19`
    fi

    let COUNTER+=1

    # Write to the outfile after every third token, i.e. one full log line
    if [ "$COUNTER" -gt 2 ] ; then
        echo "$SRCIP $DPORT" >> "$OUTFILE"
        let COUNTER=0
    fi
done

# Apply a header to the output file
echo
echo " >>> IP and Port log <<< " > "$TMP"
echo
echo "IP Address      Port # " >> "$TMP"
echo "==================================" >> "$TMP"

# Sort the data and reapply to the file
sort -u "$OUTFILE" >> "$TMP"
cat "$TMP" > "$OUTFILE"

cat "$OUTFILE"

# Clean up
rm "$TMP"
rm "$OUTFILE"
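A hypothetical invocation (run as root, since /var/log/messages is normally readable only by root; 'ipports' is whatever name you saved the script under):

chmod +x ipports
./ipports -e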

Sunday, January 02, 2005

Using IPTables to Log Useful Network Data

This is the second post related to the configuration of IPTables. Please see the first post for the initial configuration.

One of the most valuable tools at the disposal of any system or security administrator is the logging capability of the system under administration or investigation. The ability to configure the system to log important events, and to parse those logs for useful information, is a valuable skill which can decrease downtime and help prevent or detect cyber crime. Any good system logger will be able to log specific events and output them in an easy-to-parse format. A logging application is also a simple form of HIDS (host-based intrusion detection system). In the following post, I will describe how to set up IPTables to log useful network data.

The following instructions were created using IPTables v1.2.11 on a Fedora Core 3 system. I will use the same format that I have used in previous posts, which references the IPTables/Netfilter tutorial at Netfilter.org. I will assume that if you are using a system which allows you to compile your own kernel, you know how to make sure that the proper modules are compiled in. I will also assume that the only interfaces with network connectivity are Ethernet devices. If you are using a modem or non-Ethernet device for network traffic, adjust the instructions as needed.

The first step is to create a new chain, which will house the rules that log network traffic:

iptables -N log_table

The next step is to create the rules used to log traffic. I use a format that was introduced to me by a co-worker, whom I will refer to by first name only as Kaleb. I will log all traffic in one of two categories: internal traffic and external traffic. The following command sets up IPTables logging of all internal traffic, which is traffic on the local network (assuming a 192.168.1.0/24 local network):

iptables -A log_table -s 192.168.1.0/24 -i eth+ -m limit --limit 3/minute -j LOG --log-prefix 'INTERNAL: ' --log-level 5

The next step is to set up logging of the external traffic:

iptables -A log_table -s ! 192.168.1.0/24 -i eth+ -m limit --limit 3/minute -j LOG --log-prefix 'EXTERNAL: ' --log-level 5

Finally, we insert jumps to this chain at the top of the INPUT and FORWARD chains so that all traffic is logged before it moves on to the other rules:

iptables -I INPUT 1 -j log_table
iptables -I FORWARD 1 -j log_table

Remember to save the configuration:

/etc/init.d/iptables save active

Now, we can monitor the logs for traffic and test the logging by checking email, browsing the Internet, or using another network tool. I always use tail to monitor my logs; if you are not familiar with the tool, a good way to use it is the following command:

tail -f -n 100 /var/log/messages

Watch the output while you check your mail or browse the web. The '-f' flag updates the output automatically as the file grows, while '-n 100' shows the last 100 lines of the file initially.
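To pull just the firewall entries out of the log, you can filter on the prefixes defined above:

grep 'EXTERNAL: ' /var/log/messages | tail -n 20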

One downside to this configuration is that your logs will now grow constantly whenever there is network traffic, whether you generate it or it comes to you. One benefit is that it gives you a good reason to build some solid log-parsing scripts.