Internet Download Manager is one of the best download clients available, with a wide range of features. It uses an intelligent file segmentation technique that accelerates your downloads: whatever you download is split into parts that are downloaded simultaneously, saving time. It is among the fastest download tools around, ensuring swift, complete downloads of your favorite files from the internet.
Internet Download Manager has the technology to increase download speeds by up to 5 times. It has resume and recovery capability that restarts jobs interrupted by lost connections, network problems, unexpected shutdowns or power failure. The user-friendly interface of Internet Download Manager is simple to use.
Internet Download Manager Keygen is essential to those who use pirated versions and get irritated when an IDM upgrade arrives, asking for registration each time. We made a real solution for that.
IDM Keygen is a very powerful tool that increases downloading speed up to 400 percent over conventional download software. We offer the Internet Download Manager Crack equipped with every possible option, such as Crack, Patch, activation and more. You can resume any paused or interrupted download at any time with this software. IDM's multi-part download technology lets you download files and documents of any type. IDM Patch is compatible with proxy servers. Almost every popular browser, such as Mozilla Firefox, Opera and Avant, is supported by Internet Download Manager. IDM also supports HTTP/FTP protocols, redirects, firewalls and cookies, and processes media files such as MP3 audio and MPEG video. IDM can also download flash videos directly from sites like YouTube, Google Videos, MySpaceTV, Dailymotion, Trilulilu, etc.
Internet Download Manager can download within a scheduled time frame, so it requires very little attention, acting as a smart application for safe, secure downloading. It can dial up internet connections automatically at scheduled times, and quit or shut down the computer according to a customized setup. Internet Download Manager has remarkable user-friendly features that distinguish it from other download agents on the market and make it the market leader.
Internet Download Manager also offers a drag-and-drop option, and users can start downloads using the IDM command-line version. So anyone can use the command prompt to manage downloads as desired.
Internet Download Manager offers multilingual support so that users can run it in their native language. Zip files can be previewed before extraction. Downloaded files are organized into file categories you customize. Built-in sounds are played on events such as a finished download or an error. It has strong, wide-ranging virus protection.

Internet Download Manager Serial Number:
Downloading from the internet at slow speeds is tedious, and the software we usually use is too slow. IDM is the beacon in this crowd: a complete solution to the hassles of downloading from the web. Its intelligent technology and techniques make it powerful and credible.
Why IDM Crack Number? ⤵
The Internet Download Manager (IDM) is the fastest and most efficient application of its kind, downloading any file up to 5 times faster than other software. You can download video clips, games, music, documents and full movies without any trouble. If the internet connection is lost, the interrupted download resumes automatically; IDM is built with users' day-to-day downloading troubles in mind. The efficient error recovery and resume capabilities make it essential. IDM supports more than 150 web browsers, so it can download from virtually any site. Built-in web players for Windows can download videos from many sites once IDM is added to the download panel.
IDM also supports the MMS protocol, a scheduler, and a video page grabber, and can be used on the Windows 8.1 platform. It is updated to integrate with IE 11.

Key Features of IDM: ⤵
– LATEST IDM KEYS –
☑N0Z90-KJTTW-7TZO4-I27A1

– OLD IDM KEYS –
☑I23LZ – H5C2I – QYWRT – RZ2BO
☑PAQ34 – MHDIA – 1DZUU – H4DB8
☑8XJTJ – ZTWES – CIQNV – 9ZR2C4
☑4CSYW – 3ZMWW – PRRLK – WMRAB
☑DDLFR – JKN5K – B4DE3 – H2WYO
☑D91GM – T5X1J – DW7YG – 1GHIS
☑9RVII – F3W58 – 6FAYV – WPTFD
☑M7CQ2 – VARGX – QFYGZ – URKG0
☑7JPTJ – 4XLY3 – HM4LK – 9UP4Q
☑POOUS – S8V4C – 1RXUH – HG6NQ
☑KCE9Y – PUYTC – 1L2ES – 77OQS

Main Pros and Cons: ⤵

IDM's advantages include:
It starts downloading games, files, video clips or full movies immediately after your click, and notifies you if the file has already been downloaded; this also prevents duplication and saves you time.
Because it integrates with the browser, when an opened page has something downloadable, IDM automatically inserts a button on that page that lets you download with a single click. The universal help key, F1, brings up topics relevant to the problem you want to solve. You can select the speed of your internet connection; IDM gives proxy support and downloads videos from sharing sites.

Its disadvantages include:
Some advanced features may seem difficult if you do not have technical knowledge, as they are not explained in detail. The software does not have an automatic setup option. The pricing of the product is a major drawback.
What’s new in version 6.30 Build 8: ⤵
(Released: Mar 30, 2018)
Web Site Prerequisites:
This tutorial assumes that a computer has Linux installed and running. See RedHat Installation for the basics. A connection to the internet is also assumed. A connection of 128 Kbits/sec or greater will yield the best results: ISDN, DSL, cable modem or better are all suitable. A 56k modem will work, but the results will be mediocre at best. The tasks must be performed with the root user login and password.
No single distribution seems to have an advantage. An Ubuntu, SuSE, Fedora, Red Hat or CentOS distribution will include all of the software you need to configure a web server. If using Red Hat Enterprise Linux, both the Workstation and the Server editions will support your needs, except that the Workstation edition does not include the vsFTPd package; it will have to be compiled from source, or use SFTP instead.
Software Prerequisites: The Apache web server (httpd), FTP (requires xinetd or inetd) and Bind (named) server packages, with their dependencies, are all required. One can use the rpm command to verify installation:
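As a sketch (for RPM-based distributions; the exact package names vary by release):

```shell
# Print the installed version of each required package,
# or "package ... is not installed" if it is missing.
rpm -q httpd vsftpd bind xinetd
```

On Ubuntu/Debian, dpkg -l apache2 vsftpd bind9 provides the equivalent check.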
One should also have a working knowledge of the Linux init process so that these services are initiated upon system boot. See the YoLinux init process tutorial for more info.
Apache HTTP Web Server configuration:
The Apache web server configuration file is: /etc/httpd/conf/httpd.conf
Web pages are served from the directory configured by the DocumentRoot directive. The default directory locations are:

Linux distribution - Apache web server "DocumentRoot"
Red Hat 7.x-9, Fedora Core, Red Hat Enterprise 4/5/6, CentOS 4/5/6: /var/www/html/
Red Hat 6.x and older: /home/httpd/html/
SuSE 9.x: /srv/www/htdocs/
Ubuntu (dapper 6.06) / Debian: /var/www/html
Ubuntu (hardy 8.04/natty 11.04/trusty 14.04) / Debian: /var/www

The default server page for the default configuration is index.html. Note that the pages should not be owned by user apache, as this is the process owner of the httpd web server daemon. If the web server process is compromised, it should not be allowed to alter the files. The files should of course be readable by user apache.
Apache may be configured to run as a host for one web site in this fashion or it may be configured to serve for multiple domains. Serving for multiple domains may be achieved in two ways:
[Potential Pitfall] The default umask for directory creation is usually correct, but if not, use: chmod 755 /home/user1/public_html
[Potential Pitfall] When creating new "Directory" configuration directives, I found that placing them next to the existing "Directory" directives was a bad idea: the .htaccess file would not be used, because the statement defining the use of the .htaccess file came after the "Directory" statement. Previously, in RH 6.x, the files were separated and the order was defined a little differently. I now place new "Directory" statements near the end of the file, just before the "VirtualHost" statements.
For users of Red Hat 7.1, the GUI configuration tool apacheconf was introduced for the crowd who like to use pretty point and click tools.
Files used by Apache:
Start/Stop/Restart scripts: The script is to be run with the qualifiers start, stop, restart or status, i.e. /etc/rc.d/init.d/httpd restart. A restart allows the web server to start again and read the configuration files to pick up any changes. To have this script invoked upon system boot, issue the command chkconfig --add httpd. See the Linux Init Process Tutorial for a more complete discussion.
Also Apache control tool: /usr/sbin/apachectl start
Apache Control Command: apachectl
Red Hat / Fedora Core / CentOS: apachectl directive
Ubuntu dapper 6.06 / hardy 8.04 / natty 11.04 / trusty 14.04 / Debian: apachectl (softlink to apache2ctl) or apache2ctl directive

Directive - Description
start - Start the Apache httpd daemon. Gives an error if it is already running.
stop - Stops the Apache httpd daemon.
graceful - Gracefully restarts the Apache httpd daemon. If the daemon is not running, it is started. This differs from a normal restart in that currently open connections are not aborted.
graceful-stop - Gracefully stops the Apache httpd daemon. This differs from a normal stop in that currently open connections are not aborted.
restart - Restarts the Apache httpd daemon. If the daemon is not running, it is started. This command automatically checks the configuration files as in configtest before initiating the restart to make sure the daemon doesn't die.
status - Displays a brief status report.
fullstatus - Displays a full status report from mod_status. Requires mod_status enabled on your server and a text-based browser such as lynx available on your system. The URL used to access the status report can be set by editing the STATUSURL variable in the script.
configtest (-t) - Run a configuration file syntax test.
Apache control tool: apachectl - man page
Apache Configuration Files:
Basic settings: Change the default value for ServerName www.<your-domain.com>
Giving Apache access to the file system: It is prudent to limit Apache's view of the file system to only those directories necessary. This is done with the directory statement. Start by denying access to everything, then grant access to the necessary directories.
Deny access completely to the file system root ("/") as the default. Deny first, then grant permissions:

<Directory />
    Options None
    AllowOverride None
</Directory>

Set the default location of system web pages and allow access (Red Hat/Fedora/CentOS):

DocumentRoot "/var/www/html"
<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
    Require all granted
</Directory>

Note: The directive "Require all granted" is new as of Apache httpd 2.4.3 and is required for Apache 2.4+. Legacy behavior can be achieved with the command: sudo a2enmod access_compat

Grant access to a user's web directory: public_html
This will allow users to serve content from their home directories under the sub-directory /home/userid/public_html/ by accessing the URL http://hostname/~userid/

File: /etc/httpd/conf/httpd.conf

LoadModule userdir_module modules/mod_userdir.so
...
<IfModule mod_userdir.c>
    #UserDir disable    <- Add comment to this line
    # To enable requests to /~user/ to serve the user's public_html
    # directory, remove the "UserDir disable" line above, and uncomment
    # the following line instead:
    UserDir public_html    <- Un-comment this line
</IfModule>
...
<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    <Limit GET POST OPTIONS>
        Order allow,deny
        Allow from all
    </Limit>
    <LimitExcept GET POST OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>

Change the Fedora Core default UserDir disable to a comment (add "#" at the beginning of the line) and assign the directory public_html as a web-server-accessible directory.

OR assign a single user the specific ability to share their directory:

<Directory /home/user1/public_html>
    Options Indexes Includes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
    Require all granted
</Directory>

Note: "Require all granted" is required for Apache 2.4+. This allows the specific user, "user1" only, the ability to serve the directory /home/user1/public_html/
Also use SELinux command to set the security context: setsebool httpd_enable_homedirs true
Directory permissions: The Apache web server daemon must be able to read your web pages in order to feed their contents to the network. Use an appropriate umask and file protection. Allow access to the web directory: chmod ugo+rx -R public_html. Note that the user's directory also has to have the appropriate permissions, as it is the parent of public_html. Default permissions on the user directory:

ls -l /home
drwx------ 20 user1 user1 4096 Mar 5 12:16 user1

Allow the web server access to the parent directory: chmod ugo+x /home/user1

drwx--x--x 20 user1 user1 4096 Mar 5 12:16 user1
One may also use groups to control permissions. See the YoLinux tutorial on managing groups.
Ubuntu has broken out the Apache loadable module directives into the directory /etc/apache2/mods-available/. To enable an Apache module, soft links are generated in the directory /etc/apache2/mods-enabled/ by using the commands a2enmod/a2dismod to enable/disable Apache modules. Example:
Note: This is the same as manually generating the following two soft links:
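For example (a hypothetical module, userdir, is shown; run as root on Ubuntu/Debian):

```shell
a2enmod userdir
# This is equivalent to creating the two soft links by hand:
ln -s /etc/apache2/mods-available/userdir.load /etc/apache2/mods-enabled/userdir.load
ln -s /etc/apache2/mods-available/userdir.conf /etc/apache2/mods-enabled/userdir.conf
# Reload Apache so the module takes effect:
/etc/init.d/apache2 restart
```

a2dismod userdir reverses the operation by removing the links.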
[Potential Pitfall]: If the Apache web server can not access a file, you will get the error "403 Forbidden" "You don't have permission to access file-name on this server." Note the default permissions on a user directory when first created with "useradd" are:

drwx------ 3 userx userx

You must allow the web server, running as user "apache", to access the directory if it is to display pages held there. Fix with the command: chmod ugo+rx /home/userx

drwxr-xr-x 3 userx userx

Config File Order Of Operation: The configuration directives are assigned in the order in which they are read. This is important; otherwise unexpected behavior may result.
Red Hat/CentOS/Fedora/AWS configuration files are read in the following order:
Ubuntu/Debian configuration files are read in the following order:
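As a sketch of the typical order (these are the usual default paths; exact locations vary by release):

```
Red Hat/CentOS/Fedora:
  /etc/httpd/conf/httpd.conf     (main configuration file)
  /etc/httpd/conf.d/*.conf       (pulled in via Include, read in alphabetical order)

Ubuntu/Debian:
  /etc/apache2/apache2.conf      (main configuration file)
  /etc/apache2/mods-enabled/*.load and *.conf
  /etc/apache2/ports.conf
  /etc/apache2/sites-enabled/*   (virtual hosts, read in alphabetical order)
```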
The server default for access using the IP address is typically the first domain defined in "conf.d/*.conf" as determined by alphabetical order. This is also the site hackers see when scanning the net via IP addresses. It is often a curse to have a domain starting with the letter "a", as mis-configured servers will lead all hacker traffic to this site. Thus it is good practice to generate a default configuration for IP address access.

File: /etc/httpd/conf.d/1st.conf (Ubuntu: /etc/apache2/sites-enabled/1st.conf)

DirectoryIndex index.html
<VirtualHost *:80>
    ServerName www4.defaultdomain.com
    DocumentRoot /srv/www/default/html
    ErrorLog /var/log/httpd/1st-error.log
    TransferLog /var/log/httpd/1st-access.log
    <Directory "/">
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /srv/www/default/html>
        Options FollowSymLinks MultiViews Includes
        IndexOptions SuppressLastModified SuppressDescription
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

The default web page /srv/www/default/html/index.html should be a simple static page with no DB or CMS access. After all, the only ones who end up here are hackers.

SELinux security contexts: Fedora Core 3 and Red Hat Enterprise Linux 4 introduced SELinux (Security Enhanced Linux) security policies and context labels. To view the security context labels applied to your web page files, use the command: ls -Z
The system enables/disables SELinux policies in the file /etc/selinux/config. SELinux can be turned off by setting the directive SELINUX=disabled (then reboot the system), or by using the command setenforce 0 to temporarily disable SELinux until the next reboot.
When using SELinux security features, the security context labels must be added so that Apache can read your files. The default security context label used is inherited from the directory for newly created files. Thus a copy (cp) must be used and not a move (mv) when placing files in the content directory. Move does not create a new file and thus the file does not receive the directory security context label. The context labels used for the default Apache directories can be viewed with the command: ls -Z /var/www The web directories of users (i.e. public_html) should be set with the appropriate context label (httpd_sys_content_t).
Assign a security context for web pages: chcon -R -h -t httpd_sys_content_t /home/user1/public_html Options:
Use the following security contexts:

Context Type - Description
httpd_sys_content_t - Used for static web content, i.e. HTML web pages.
httpd_sys_script_exec_t - Used for executable CGI scripts or binary executables.
httpd_sys_script_rw_t - CGI is allowed to alter/delete files of this context.
httpd_sys_script_ra_t - CGI is allowed to read or append to files of this context.
httpd_sys_script_ro_t - CGI is allowed to read files and directories of this context.
Set the following options: setsebool httpd-option true (or set to false)

Policy - Description
httpd_enable_cgi - Allow httpd CGI support.
httpd_enable_homedirs - Allow httpd to read home directories.
httpd_ssi_exec - Allow httpd to run SSI executables in the same domain as system CGI scripts.

Then restart Apache:
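For example (httpd_enable_homedirs is one of the booleans listed above; -P makes the change persistent across reboots):

```shell
setsebool -P httpd_enable_homedirs true
service httpd restart     # or: /etc/rc.d/init.d/httpd restart
```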
The default SE boolean values are specified in the file: /etc/selinux/targeted/booleans
For more on SELinux see the YoLinux Systems Administration tutorial.

Virtual Hosts: The Apache web server allows one to configure a single computer to represent multiple websites as if they were on separate hosts. There are two methods available, and we describe the configuration of each. Choose one method for your domain:
When specifying multiple domains, they may all use the same IP address, or some/all may use their own unique IP address. Specify a "NameVirtualHost" for each IP address.
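A minimal sketch (hypothetical IP addresses and domains): two name-based domains sharing one IP address, plus a third domain on its own IP, each IP introduced by its own NameVirtualHost line:

```apache
NameVirtualHost 192.0.2.10
<VirtualHost 192.0.2.10>
    ServerName www.domain-a.com
    DocumentRoot /home/usera/public_html
</VirtualHost>
<VirtualHost 192.0.2.10>
    ServerName www.domain-b.com
    DocumentRoot /home/userb/public_html
</VirtualHost>

NameVirtualHost 192.0.2.11
<VirtualHost 192.0.2.11>
    ServerName www.domain-c.com
    DocumentRoot /home/userc/public_html
</VirtualHost>
```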
After the Apache configuration files have been edited, restart the httpd daemon: /etc/rc.d/init.d/httpd restart (Red Hat) or /etc/init.d/apache2 restart (Ubuntu / Debian)

Apache virtual domain configuration with Ubuntu: Ubuntu separates out each virtual domain into a separate configuration file held in the directory /etc/apache2/sites-available/. When the site domain is to become active, a soft link is created to the directory /etc/apache2/sites-enabled/.

Example: /etc/apache2/sites-available/supercorp

<VirtualHost XXX.XXX.XXX.XXX>
    ServerName supercorp.com
    ServerAlias www.supercorp.com
    ServerAdmin webmaster@localhost
    DocumentRoot /home/supercorp/public_html/home
    <Directory "/">
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /home/supercorp/public_html/home>
        Options Indexes FollowSymLinks MultiViews
        IndexOptions SuppressLastModified SuppressDescription
        AllowOverride All
        Order allow,deny
        Allow from all
        Require all granted
    </Directory>
    ScriptAlias /cgi-bin/ /home/supercorp/cgi-bin/
    <Directory "/home/supercorp/cgi-bin/">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/supercorp.com-error.log
    # Possible values include: debug, info, notice, warn, error,
    # crit, alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/supercorp.com-access.log combined
    ServerSignature On
</VirtualHost>

Note: "Require all granted" is required for Apache 2.4+.

Enable domain:
Man pages:

Configuring an "IP based" virtual host:
One may assign multiple IP addresses to a single network interface. See the YoLinux networking tutorial: Network Aliasing. Each IP address may then be its own virtual server and individual domain. The downside of the "IP based" virtual host method is that you have to possess multiple/extra IP addresses, which usually costs more. The standard name-based virtual hosting method above is more popular for this reason.

NameVirtualHost *    <- Indicates all IP addresses
<VirtualHost *>
    ServerAdmin firstname.lastname@example.org
    DocumentRoot /home/user0/public_html
</VirtualHost>
<VirtualHost XXX.XXX.XXX.101>
    ServerAdmin email@example.com
    DocumentRoot /home/user1/public_html
</VirtualHost>
<VirtualHost XXX.XXX.XXX.102>
    ServerAdmin firstname.lastname@example.org
    DocumentRoot /home/user2/public_html
</VirtualHost>

The default <VirtualHost *> block will be used for all IP addresses not specified explicitly. This default IP (*) may not work for https URLs.

CGI: (Common Gateway Interface) CGI is a program executable which dynamically generates a web page by writing to stdout. CGI is permitted by either of two configuration file directives. The executable program files must have execute privileges for the process owner (Red Hat 7+/Fedora Core: apache; older versions use nobody) under which the httpd daemon is run.

Configuring CGI To Run With User Privileges: The suEXEC feature provides Apache users the ability to run CGI and SSI programs under user IDs different from the user ID of the calling web server. Normally, when a CGI or SSI program executes, it runs as the same user who is running the web server.

NameVirtualHost XXX.XXX.XXX.XXX
<VirtualHost XXX.XXX.XXX.XXX>
    ServerName node1.your-domain.com                  <- Allows requests by domain name without the "www" prefix.
    ServerAlias your-domain.com www.your-domain.com   <- CNAME (alias www) specified in the Bind configuration file (/var/named/...)
    ServerAdmin email@example.com
    DocumentRoot /home/user1/public_html/your-domain.com
    ErrorLog logs/your-domain.com-error_log
    TransferLog logs/your-domain.com-access_log
    SuexecUserGroup user1 user1
    <Directory /home/user1/public_html/your-domain.com/>
        Options +ExecCGI +Indexes
        AddHandler cgi-script .cgi
    </Directory>
</VirtualHost>

ERROR Pages:
You can specify your own web pages instead of the default Apache error pages:ErrorDocument 404 /Error404-missing.html Create the file Error404-missing.html in your "DocumentRoot" directory.
Handle all errors with a forwarding page:

ErrorDocument 400 /error.shtml
ErrorDocument 401 /error.shtml
ErrorDocument 403 /error.shtml
ErrorDocument 404 /error.shtml
ErrorDocument 500 /error.shtml

Sample file error.shtml (in your "DocumentRoot" directory):

<!--#echo var="REQUEST_URI" -->
<!--#echo var="REDIRECT_STATUS" -->
<h2>Page not found!</h2>
<!-- Redirect to home page -->
<META HTTP-EQUIV="Refresh" Content="1; URL=http://www.megacorp.com/">

PHP:
If the appropriate php, perl and httpd RPM's are installed, the default Red Hat Apache configuration and modules will support PHP content. RPM Packages (RHEL4):
Apache configuration: Add the PHP default page index.php to the Apache config file /etc/httpd/conf/httpd.conf:

...
DirectoryIndex index.html index.htm index.php
...

PHP Configuration File:
Test your PHP capabilities with this test file: /home/user1/public_html/test.php

<?php phpinfo(); ?>

OR (older format): <? phpinfo(); ?>

Test: http://localhost/~user1/test.php
For more info see the YoLinux list of PHP information web sites.

Running Multiple instances of httpd:
The Apache web server daemon (httpd) can be started with the command line option "-f" to specify a unique configuration file for each instance. Configure a unique IP address for each instance of Apache. See the YoLinux Networking Tutorial to specify multiple IP addresses for one NIC (Network Interface Card). Use the Apache configuration file directive Listen XXX.XXX.XXX.XXX, where the IP address is unique for each instance of Apache.

Apache Man Pages:
Also see the local online Apache configuration manual: http://localhost/manual/

Apache Red Hat / Fedora Core GUI configuration:
GUI configuration tool:
Adding web site login and password protection: See the YoLinux tutorial on web site password protection.
Log file analysis:
Scanning the Apache web log files will not provide meaningful statistics unless they are graphed or presented in an easy-to-read fashion. The following packages do a good job of presenting site statistics.
Web site statistic services:
Load testing your Server:
Log file analysis using Analog:
Make Analog images available to the users' report: ln -s /usr/share/analog/images/* /home/user1/public_html/analog
Log file location:
Measuring Web Server quality:
See the YoLinux.com web server benchmarking tutorial.
FTPd and FTP user account configuration:
Many FTP server programs exist. This example covers the popular vsftpd (the Red Hat default since 9.0, Fedora Core, SuSE) and wu-ftpd (Washington University) programs; wu-ftpd came standard with Red Hat (last shipped with Red Hat 8.0, but it can be installed on any Linux system). (RPM: wu-ftpd) There are other FTP server programs, including proftpd (supports LDAP authentication, Apache-like directives, full-featured FTP server software), bftpd, pure-ftpd (freeware BSD and optional on SuSE), etc.
For hostile environments, set up a chrooted environment for an SFTP encrypted connection and the rssh restricted shell for OpenSSH. See the YoLinux.com internet security tutorial for Linux SFTP and rssh configuration. Also see the preferred chrooted SFTP configuration for OpenSSH 4.9+.
FTPd and SELinux: To allow FTPd daemon access and FTP access to users' home directories, set the appropriate SELinux boolean. Follow with the command service vsftpd restart
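A sketch of the commands (ftp_home_dir is the relevant SELinux boolean, also shown later in this tutorial; -P makes the change persistent):

```shell
setsebool -P ftp_home_dir 1
service vsftpd restart
```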
FTPd configuration tutorials:
vsFTPd and FTP user account configuration:
The vsFTPd FTP server software was first made available in Red Hat 9.0. It has been adopted by SuSE and OpenBSD as well. This is currently the recommended FTP daemon for use on FTP servers.
For more on starting/stopping/configuring Linux services, see the YoLinux tutorial on the Linux init process and service activation.

Configuration files:
[Potential Pitfall]: vsftpd does NOT support comments on the same line as a directive, i.e.:

directive=XXX # comment
vsftp.conf man page
Sample vsFTPd configurations:
Anonymous logins use the login name "anonymous", and the user supplies their email address as a password. Any password will be accepted. This is used to allow the public to download files from an FTP server. Generally, no upload is permitted.
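A minimal /etc/vsftpd/vsftpd.conf sketch for such an anonymous, download-only service (directive defaults vary by release, so treat this as illustrative; comments are on their own lines, since vsftpd rejects trailing comments):

```
# Allow "anonymous" logins; refuse local system users
anonymous_enable=YES
local_enable=NO
# Download only: no writes of any kind
write_enable=NO
anon_upload_enable=NO
anon_mkdir_write_enable=NO
# Log transfers and show a login banner
xferlog_enable=YES
ftpd_banner=Welcome to this anonymous FTP service.
```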
Specify the list of local users chrooted to their home directories: /etc/vsftpd/vsftpd.chroot_list (Ubuntu typically: /etc/vsftpd.chroot_list). Requires chroot_list_enable=YES:

user1
user2
...
user-n

If chroot_local_user=YES is also set, the file instead specifies users NOT to be chroot'ed.
[Potential Pitfall]: Misspelling a directive will cause vsftpd to fail with little warning.
File: .message

A NOTE TO USERS UPLOADING FILES: File names may consist of letters (a-z, A-Z), numbers (0-9), underscores ("_"), dashes ("-") or periods (".") only. The file name may not begin with a period or dash.
Test if vsftpd is listening: netstat -a | grep ftp

[root]# netstat -a | grep ftp
tcp    0    0 *:ftp    *:*    LISTEN

Links:
WU-FTPd and FTP user account configuration:
The wu-ftpd FTP server software can be downloaded (binary or source) from http://wu-ftpd.therockgarden.ca/ (at one time: http://wu-ftpd.org).
There are three kinds of FTP logins that wu-ftpd provides:
The file /etc/ftpaccess controls the configuration of FTP.

# Don't allow system accounts to log in over FTP
deny-uid %-99 %65534-
deny-gid %-99 %65534-
class all real,guest *
email firstname.lastname@example.org
loginfails 5
readme README* login
readme README* cwd=*
message /welcome.msg login
message .message cwd=*
compress yes all
tar yes all
chmod no guest,anonymous
delete no anonymous       # delete files permission?
overwrite no anonymous    # overwrite files permission?
rename no anonymous       # rename files permission?
delete yes guest          # delete files permission?
overwrite yes guest       # overwrite files permission?
rename yes guest          # rename files permission?
umask no guest            # umask permission?
log transfers anonymous,real inbound,outbound
shutdown /etc/shutmsg
passwd-check rfc822 warn
# Must also create the message file etc/pathmsg under the guest directory.
# In this case it refers to /home/user1/public_html/etc/pathmsg.
path-filter guest /etc/pathmsg ^[-A-Za-z0-9_\.]*$ ^\. ^-
limit all 2
noretrieve passwd .htaccess core     # Do not allow users to download files of these names
limit-time * 20
byte-limit in 5000                   # Limit file size
guestuser *                          # System user default categorized as a "guest". A "real" user can roam the system. Guestuser is chrooted.
realgroup regularuserx regularusery  # Assign real user privileges to members of groups "regularuserx" and "regularusery". Visibility of the whole file system, subject to regular UNIX file permissions.
realuser user4                       # Assign real user privileges to user id "user4".
restricted-uid user1 user2 user3     # Restricts FTP to the specified directories
guest-root /home/user1/public_html user1
guest-root /home/user2/public_html user2
guest-root /home/user3/public_html user3
[Potential Pitfall]: Flaky FTP behavior, timeouts, etc.? FTP works best with name resolution of the computer it is communicating with. This requires a proper /etc/resolv.conf and name server (Bind) configuration, /etc/hosts, or NIS/NFS configuration.
File /home/user1/public_html/etc/pathmsg:

A NOTE TO USERS UPLOADING FILES: File names may consist of letters (a-z, A-Z), numbers (0-9), underscores ("_"), dashes ("-") or periods (".") only. The file name may not begin with a period or dash. You have tried to upload a file with an inappropriate name.
The whole point of the chroot directory is to make the user's home directory appear to be the root of the filesystem (/) so one cannot wander around the filesystem. Configuration of /etc/ftpaccess will limit the user to their respective directories while still offering access to /bin/ls and other system commands used in FTP operation.
As root:

cd /home/user1
mkdir public_html
chown $1.$1 public_html
touch .rhosts           # Security protection
chmod ugo-xrw .rhosts

Man Pages:
FTP Pitfalls:
If you get the following error:

ftp> ls
227 Entering Passive Mode (208,188,34,109,208,89)
ftp: connect: No route to host
This means you have firewall issues, most probably on the FTP server itself. Start by flushing the firewall "iptables" rules: iptables -F. Then add rules back until you discover what is causing the problem.

Passive mode: Passive mode can also help get past the rules:

ftp> passive
Passive mode on.

This toggles passive mode on and off. When on, FTP will be limited to the ports specified in the vsftpd configuration file vsftpd.conf with the parameters pasv_min_port and pasv_max_port.

Firewall connection tracking module:

# cat /etc/sysconfig/iptables-config | grep ip_conntrack_ftp
IPTABLES_MODULES="ip_conntrack_ftp"

NAT firewall modules: You can also try adding ip_nat_ftp to the list of auto-loaded modules (this will also load the dependency ip_conntrack_ftp):

# cat /etc/sysconfig/iptables-config | grep ip_nat_ftp
IPTABLES_MODULES="ip_nat_ftp"

Then restart the firewall: /etc/init.d/iptables condrestart
FTP will change ports during use. The ip_conntrack_ftp module will consider each such connection "RELATED". If iptables allows RELATED and ESTABLISHED connections, then FTP will work, i.e. the rule in /etc/sysconfig/iptables:

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

FTP fails because it can not change to the user's home directory. Error:

[user1@nodex ~]$ ftp node.domain.com
Connected to XXX.XXX.XXX.XXX.
530 Please login with USER and PASS.
530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (XXX.XXX.XXX.XXX:user1):
331 Please specify the password.
Password:
500 OOPS: cannot change directory:/home/user1
Login failed.
ftp> bye
This is often a result of SELinux preventing the vsftpd process from accessing the user's home directory. As root, grant access with the following command:
setsebool -P ftp_home_dir 1
Followed by:
service vsftpd restart
Test your vsftpd SELinux settings:
getsebool -a | grep ftp
allow_ftpd_anon_write --> off
allow_ftpd_full_access --> off
allow_ftpd_use_cifs --> off
allow_ftpd_use_nfs --> off
allow_tftp_anon_write --> off
ftp_home_dir --> on
ftpd_disable_trans --> off
ftpd_is_daemon --> on
httpd_enable_ftp_server --> off
tftpd_disable_trans --> off
FTPd SELinux man page
FTP Linux clients:
Basic user security:
When hosting web sites, there is no need to grant a shell account, which only gives the server more potential security holes. Current systems can restrict the user to FTP-only access with no shell by granting them the "shell" /sbin/nologin provided with the system, or the "ftponly" shell described below. The shell can be specified in the file /etc/passwd or when creating a user with the command: adduser -s /sbin/nologin user-id
[Potential Pitfall]: Red Hat 7.3 with wu-FTP server software 2.6.2-5 does not support this configuration to prevent shell access. It requires users to have a real user shell, i.e. /bin/bash. It works fine in older and more current Red Hat versions. If it works for you, use it, as it is more secure to deny the user shell access. (You can always deny telnet access.) You should NOT be using this problem-ridden version of ftpd anyway. Use the latest wu-ftpd-2.6.2-11, which supports users with the shell /opt/bin/ftponly.
[Potential Pitfall]: Ubuntu - Setting the shell to the pre-configured shell /bin/false will NOT allow vsftp access. One must create the shell "ftponly" as defined below to allow vsftp access with no shell.
Change the shell for the user in /etc/passwd from /bin/bash to /opt/bin/ftponly:
...
user1:x:502:503::/home/user1:/opt/bin/ftponly
...
Create file: /opt/bin/ftponly. Protection set to -rwxr-xr-x 1 root root with the command: chmod ugo+x /opt/bin/ftponly
Contents of file:
#!/bin/sh
#
# ftponly shell
#
trap "/bin/echo Sorry; exit 0" 1 2 3 4 5 6 7 10 15
#
Admin=email@example.com
#System=`/bin/hostname`@`/bin/domainname`
#
/bin/echo
/bin/echo "********************************************************************"
/bin/echo "    You are NOT allowed interactive access."
/bin/echo
/bin/echo "    User accounts are restricted to FTP and web access."
/bin/echo
/bin/echo "    Direct questions concerning this policy to $Admin."
/bin/echo "********************************************************************"
/bin/echo
#
# C'ya
#
exit 0
The last step is to add this to the list of valid shells on the system. Add the line /opt/bin/ftponly to /etc/shells.
Sample file contents of /etc/shells:
/bin/bash
/bin/bash1
/bin/tcsh
/bin/csh
/opt/bin/ftponly
See the man page on /etc/shells.
An alternative would be to assign the shell /bin/false or /sbin/nologin which became available in later releases of Red Hat, Debian and Ubuntu. In this case the shell /bin/false or /sbin/nologin would have to be added to /etc/shells to allow them to be used as a valid shell for File Transfer Protocol while disabling ssh or telnet access.
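The /etc/shells bookkeeping can be scripted. A minimal sketch, run here against a temporary copy so nothing on the real system is touched (the file contents are illustrative):

```shell
# Work on a temporary stand-in for /etc/shells.
shells=$(mktemp)
printf '/bin/bash\n/bin/tcsh\n' > "$shells"

# Append /sbin/nologin only if it is not already listed (grep -x matches whole lines).
grep -qx '/sbin/nologin' "$shells" || echo '/sbin/nologin' >> "$shells"

# Confirm registration.
grep -qx '/sbin/nologin' "$shells" && echo "shell registered"
```

Against the real file, the same grep/echo lines apply (run as root), followed by usermod -s /sbin/nologin user-id for each FTP-only account.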
For more on Linux security see the: YoLinux.com Internet web site Linux Server security tutorial
Domain Name Service (DNS) configuration using Bind version 8 or 9:
Two of the most popular ways to configure the program Bind (Berkeley Internet Name Domain) to perform DNS services are in the role of (1) an ISP or (2) a web host.
When resolving IP addresses for a domain, Internic expects a "primary" and a "secondary" DNS name server (sometimes called master and slave). Each DNS name server requires the file /etc/named.conf and the files it points to. This is typically two separate computer systems hosted on two different IP addresses. It is not necessary that the Linux servers be dedicated to DNS, as they may also run a web server, mail server, etc.
Note on Bind versions: Red Hat versions 6.x used Bind version 8. Release 7.1 of Red Hat began using Bind version 9 and the GUI configuration tool bindconf was introduced for those of you that like a pretty point and click interface for configuration.
DNS key: Use the command /usr/sbin/dns-keygen to create a key. Add this key to the "secret" statement as follows:
key ddns_key {
        algorithm hmac-md5;
        secret "XlYKYLF5Y7YOYFFFY6YiYYXyFFFFBYYYYFfYYYJiYFYFYYLVrnrWrrrqrrrq";
};
Forward Zone File: /var/named/named.your-domain.com
Red Hat 9 / CentOS 3: /var/named/named.your-domain.com
Red Hat EL4/5, Fedora 3+, CentOS 4/5 [chrooted]: /var/named/chroot/var/named/data/named.your-domain.com
Red Hat EL4/5, Fedora 3+, CentOS 4/5: /var/named/data/named.your-domain.com
Ubuntu / Debian: /etc/bind/data/named.your-domain.com

$TTL 604800    ; Bind 9 (and some later versions of Bind 8) requires the $TTL statement.
               ; Measured in seconds. This value is 7 days.
your-domain.com. IN SOA ns1.your-domain.com. hostmaster.your-domain.com. (
        2000021600 ; serial - many people use year+month+day+integer as a system
        86400      ; refresh - how often (in seconds) secondary servers should check in
                   ;           for changes in the serial number (86400 sec = 24 hrs)
        7200       ; retry - how long a secondary server should wait for a retry if contact failed
        1209600    ; expire - secondary server purges its info after this length of time
        86400 )    ; default_ttl - how long data is held in cache by remote servers
        IN A XXX.XXX.XXX.XXX   ; default IP address of the domain. I put the web server IP
                               ; address here so that domain.com points to the same server
                               ; as www.domain.com
;
; Name servers for the domain
;
        IN NS ns1.your-domain.com.
        IN NS ns2.your-domain.com.
;
; Mail server for domain
;
        IN MX 5 mail   ; identify "mail" as the node handling mail for the domain.
                       ; Do NOT specify an IP address!
;
; Nodes in domain
;
node1   IN A XXX.XXX.XXX.XXX   ; IP address of node1
ns1     IN A XXX.XXX.XXX.XXX   ; optional: for hosting your own primary name server
ns2     IN A XXX.XXX.XXX.XXX   ; optional: for hosting your own secondary name server
mail    IN A XXX.XXX.XXX.XXX   ; IP address for node mail
;
; Aliases to existing nodes in domain
;
www     IN CNAME node1   ; define the web server "www" to be node1
ftp     IN CNAME node1   ; define the FTP server to be node1
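The year+month+day+integer serial convention noted above can be generated rather than typed by hand. A small sketch (the two-digit revision suffix is a common convention, not a Bind requirement):

```shell
# Build a zone serial of the form YYYYMMDDNN, where NN is the revision made that day.
rev=01
serial="$(date +%Y%m%d)${rev}"
echo "serial: $serial"
```

Remember to bump the revision (or the date) every time the zone file changes, or secondary servers will never pick up the update.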
DNS record types and format:

SOA - Start of Authority: primary domain server and contact info. Note that there is a period following the primary domain server and contact email, and that the email address is written with the first period representing the "@" symbol:
your-domain.com in SOA ns1.your-domain.com. webmaster.your-domain.com.
or
@ in SOA ns1.your-domain.com. webmaster.your-domain.com.
[Potential Pitfall]: Incorrect specification of the primary name server may result in the following message in /var/log/messages:
view localhost_resolver: received notify for zone 'your-domain.com': not authoritative
SOA attributes:
serial - Never use a value greater than 2147483647 for a 32 bit processor. Increment to a higher value to indicate an update to the slave server.
refresh - Time increment (seconds) between update checks of the serial number with the primary server.
retry - Time elapsed before a slave will contact the primary server if a connection failed.
expire - Time till primary server information is considered invalid and should be refreshed if there is a new DNS query.
minimum - Time DNS servers should hold domain information in their cache before purging.
IN - Indicates Internet.
NS - Specifies the authoritative name servers for the domain.
A - Specifies the IP address associated with the host name. Format: hostname IN A XXX.XXX.XXX.XXX. Note that in my example, no hostname is specified for the first record; this defines the default for the domain.
CNAME - Specifies an alias for the host name.
MX - Mail exchange record. Specify a priority number for the primary and back-up mail servers. The lowest number indicates the default mail server for the domain.
PTR - Used to specify the reverse DNS lookup.

MX records for 3rd party off-site mail servers (append to the above example file):
your-domain.com. IN MX 10 mail1.offsitemail.com.
your-domain.com. IN MX 20 mail2.offsitemail.com.
Initial configuration: Note that Red Hat may supply the default zone configuration in: /usr/share/doc/bind-9.X.X/sample/var/named/
[Potential Pitfall]: Ubuntu dapper/hardy/natty - Path names used can not violate AppArmor security rules as defined in /etc/apparmor.d/usr.sbin.named. Note that the slave files are typically named "/var/lib/bind/named.your-domain.com" as permitted by the security configuration.
[Potential Pitfall]: Ubuntu dapper/hardy/natty - Create log file and set ownership and permission for file not created by installation:
[Potential Pitfall]: Error in /var/log/messages:
transfer of 'yolinux.com/IN' from XXX.XXX.XXX.XXX#53: failed while receiving responses: permission denied
Named needs write permission on the directory containing the file. This condition often occurs for a new "slave" or "secondary" name server where the zone files do not yet exist. The default (RHEL4/5, CentOS 4/5, Fedora Core 3+, ...):
After the configuration files have been edited, restart the name daemon.
/etc/init.d/named restart
(Note: Ubuntu / Debian restart: /etc/init.d/bind9 restart)
Bind zone transfers work best if the clocks of the two systems are synchronised. See the YoLinux SysAdmin Tutorial: Time and ntpd
File: /var/named/named.your-domain.com - This is created for you by Bind on the slave (secondary) server when it replicates from the primary server.
DNS GUI configuration:
Must install packages:
Note: The name server may also be specified by IP address.
Test the name server with the nslookup command in interactive mode:
nslookup
> server your-nameserver-to-test.domain.com
> node.domain-to-test.com
> exit
Test the MX record if appropriate:
nslookup -querytype=mx domain-to-test.com
OR
host -t mx domain-to-test.com
Test using the dig command:
dig @name-server domain-to-query
OR
dig @IP-address-of-name-server domain-to-query
Test your DNS with the following DNS diagnostics web site: DnsStuff.com

Extra logging to monitor Bind: Add the following to your /etc/named.conf file.
logging {
    channel bindlog {
        // Keep five old versions of the log file (rotates logs)
        file "/var/log/bindlog" versions 5 size 1m;
        print-time yes;
        print-category yes;
        print-severity yes;
    };
    /* If you want to enable debugging, eg. using the 'rndc trace' command,
     * named will try to write the 'named.run' file in the $directory (/var/named).
     * By default, SELinux policy does not allow named to modify the /var/named directory,
     * so put the default debug log file in data/ :
     */
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
    category xfer-out { bindlog; };          // Zone transfers
    category xfer-in { bindlog; };           // Zone transfers
    category security { bindlog; };          // Approved/unapproved requests
    // The following logging statements, panic, insist and response-checks are
    // valid for Bind 8 only. Do not use for version 9.
    category panic { bindlog; };             // System shutdowns
    category insist { bindlog; };            // Internal consistency check failures
    category response-checks { bindlog; };   // Messages
};

Chroot Bind for extra security: Note: Most modern Linux distributions default to a "chrooted" installation. This technique runs the Bind name service with a view of the filesystem which changes the definition of the root directory "/" to a directory in which Bind will operate, i.e. /var/named/chroot.
The following example uses the Red Hat RPM bind-8.2.3-0.6.x.i386.rpm. Applies to Bind version 9 as well.
The latest Red Hat bind updates run named as user "named" to avoid a lot of earlier hacker exploits. Chrooting the process creates an even more secure environment by limiting the view of the system that the process can access. The process is limited to the chroot directory assigned.
Chrooting the named process to a directory under a given user will prevent the possibility of an exploit which at one time could result in root access. The original default Red Hat configuration (6.2) ran the named process as root, thus if an exploit was found, the named process would allow the hacker to use the privileges of the root user. (No longer true.)
Named command syntax:
named -u user -g group -t directory-to-chroot-to
Example: named -u named -g named -t /opt/named
When chrooted, the process does not have access to system libraries, thus a local lib directory with the appropriate library files is theoretically required. That does not seem to be the case here, as also noted above for chrooted FTP; it's a mystery to me, but it works. Another way to handle libraries is to re-compile the named binary with everything statically linked (add -static to the compile options). The chrooted process should likewise require a local /etc/named.conf etc., but does not seem to.
Script to create a chrooted bind environment:
#!/bin/sh
cd /opt
mkdir named
cd named
mkdir etc
mkdir bin
mkdir var
cd var
mkdir named
mkdir run
cd ..
chown -R named.named bin etc var

You can probably stop here. If your system acts like a chrooted system should, then continue with the following:
cp -p /etc/named.conf etc
cp -p /etc/localtime etc
cp -p /bin/false bin
echo "named:x:25:25:Named:/var/named:/bin/false" > etc/passwd
echo "named:x:25:" > etc/group
touch var/run/named.pid
if [ -f /etc/namedb ]
then
   cp -p /etc/namedb etc/namedb
fi
mkdir dev
cd dev
# Create a character unbuffered file.
mknod -m ugo+rw null c 1 3
cd ..
chown -R named.named bin etc var

Add changes to the init script /etc/rc.d/init.d/named:
#!/bin/bash
#
# named  This shell script takes care of starting and stopping
#        named (BIND DNS server).
#
# chkconfig: - 55 45
# description: named (BIND) is a Domain Name Server (DNS) \
# that is used to resolve host names to IP addresses.
# probe: true

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

[ -f /etc/sysconfig/named ] && . /etc/sysconfig/named
[ -f /usr/sbin/named ] || exit 0
[ -f /etc/named.conf ] || exit 0

RETVAL=0

start() {
        # Start daemons.
        echo -n "Starting named: "
        daemon named -u named -g named -t /opt/named   # Change made here
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/named
        echo
        return $RETVAL
}
stop() {
        # Stop daemons.
        echo -n "Shutting down named: "
        killproc named
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/named
        echo
        return $RETVAL
}
rhstatus() {
        /usr/sbin/ndc status
        return $?
}
restart() {
        stop
        start
}
reload() {
        /usr/sbin/ndc reload
        return $?
}
probe() {
        echo start
        return $?
}

# See how we were called.
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        rhstatus
        ;;
  restart)
        restart
        ;;
  condrestart)
        [ -f /var/lock/subsys/named ] && restart || :
        ;;
  reload)
        reload
        ;;
  probe)
        probe
        ;;
  *)
        echo "Usage: named {start|stop|status|restart|condrestart|reload|probe}"
        exit 1
esac
exit $?
Note: The current version of bind from the Red Hat errata updates and security fixes (http://www.redhat.com/support/errata/) runs the named process as user "named" in the home (not chrooted) directory /var/named with no shell available (named -u named). This should be secure enough. Proceed with a chrooted installation if you are paranoid.
Chrooted DNS configuration:
Modern releases of Linux (i.e. Fedora Core 3, Red Hat Enterprise Linux 4) come pre-configured to use "chrooted" bind. This security feature forces even an exploited version of bind to only operate within the "chrooted" jail /var/named/chroot which contains the familiar directories:
If building from source you will have to generate this configuration manually:
Also see lbnamed: lbnamed load balancing named
Bind/DNS Links:
Domain name registration:
Note that the Name registrations policies for the registrars are stated at ICANN.org.
Web Server Load Balancing:
Load balancing becomes important if your traffic volume becomes too great for your server, your network connection, or both. Multiple options are available for load balancing.
Using a Linux Virtual Server to Create a Load Balance Cluster:
You can use a single Linux server to forward requests to a cluster of servers, using iptables for IP masquerading and ipvsadm to scale your load. The load-balancing server receiving and routing the requests is called the "Linux Virtual Server" (LVS). The LVS receives the requests, which are passed to the real servers which process and reply to each request. The reply is forwarded to the client by the LVS.
This feature is available with the Linux 2.4/2.6 kernel. (If compiling the kernel: Networking Options + IP: Virtual Server Configuration)
Configuration: This example will load balance HTTP traffic to three web servers and FTP traffic to a fourth server.
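A sketch of what that could look like with ipvsadm (all IP addresses are placeholders; run as root on the LVS director):

```shell
# Virtual HTTP service on the LVS public address, round-robin scheduling:
ipvsadm -A -t 10.0.0.1:80 -s rr
# Three real web servers, NAT/masquerading mode (-m):
ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.11:80 -m
ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.12:80 -m
ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.13:80 -m
# FTP to the fourth server:
ipvsadm -A -t 10.0.0.1:21 -s rr
ipvsadm -a -t 10.0.0.1:21 -r 192.168.1.14:21 -m
```

Note that FTP in NAT mode also needs the ip_vs_ftp kernel helper module loaded, for the same port-changing reasons as the iptables FTP modules above.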
Managing Web Server Daemons:
To view if these services are running, type ps -aux and look for the httpd, inetd and named services (daemons). These are background processes necessary to perform the server tasks.
root      681  0.0  0.5  2304  744  ?  S  Sep09  0:01  named
nobody  28123  0.0  1.1  3036 1420  ?  S  Oct06  0:00  httpd
nobody  28186  0.0  0.7  3044  896  ?  S  Oct06  0:00  httpd
root      385  0.0  0.1  1136  232  ?  S  Sep09  0:00  inetd
A new installation will most likely NOT start the named background process, which may be started manually after configuration. See the YoLinux Init Process Tutorial for more information. The inetd (or xinetd) background process is the Internet daemon which starts FTP when an FTP request is made.
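The ps/grep check can be wrapped in a small script; a sketch using pgrep (daemon names as listed above):

```shell
# Report the status of each web/DNS related daemon.
for d in httpd inetd named; do
    if pgrep -x "$d" > /dev/null; then
        echo "$d: running"
    else
        echo "$d: not running"
    fi
done
```

On systems using xinetd, substitute xinetd for inetd in the list.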
Sys Admin Script:
Script to prepare an account: (Red Hat/Fedora)
#!/bin/sh
# Author Greg Ippolito
# Requires: /opt/etc/AccountDefaults/pathmsg favicon.ico mwh-mini_tr.gif etc.
#           /opt/bin/ftponly
# You must be root to run this script.
#
if [ $# -eq 0 ]
then
   echo "Enter user id as a command argument"
else
   if [ -r /home/$1 ]
   then
      echo "User's home directory already exists"
   else
      echo "1) Create user."
      adduser -m $1
      echo "2) Set user password."
      passwd $1
      echo "3) Add read access to user directory so apache can read it."
      cd /home
      chmod ugo+rx $1
      cd $1
      echo "4) Create web directories."
      mkdir public_html
      chown $1.$1 public_html
      chcon -R -h -u system_u -r object_r -t httpd_sys_content_t public_html
      cd public_html
      mkdir images
      chown $1.$1 images
      chcon -R -h -u system_u -r object_r -t httpd_sys_content_t images
      # Block potential for unauthenticated logins
      cd ../
      touch .rhosts
      chmod ugo-xrw .rhosts
      echo "5) Create default web page"
      sed "/HEADING/s!HEADING!$1!" /opt/etc/AccountDefaults/default-index.html > index.html
      cp -p /opt/etc/AccountDefaults/favicon.ico .
      cp -p /opt/etc/AccountDefaults/default-logo.gif ./images
      cp -p /opt/etc/AccountDefaults/robots.txt .
      chown $1.$1 index.html favicon.ico robots.txt
      chcon -R -h -t httpd_sys_content_t index.html favicon.ico robots.txt
      chcon -R -h -t httpd_sys_content_t images/default-logo.gif
      echo "6) Edit /etc/passwd file - change user shell to /opt/bin/ftponly"
      cp -p /etc/passwd /etc/passwd-`date +%m%d%y`
      sed "/^$1/s!/bin/bash!/opt/bin/ftponly!" /etc/passwd-`date +%m%d%y` > /etc/passwd
#wu-FTP#      Requires: /etc/ftpaccess guestuser restrict-uid
#wu-FTP#      echo "7) Add user to /etc/ftpaccess file"
#wu-FTP#      cp -p /etc/ftpaccess /etc/ftpaccess-`date +%m%d%y`
#wu-FTP#      sed "/^guestuser/s!guestuser !guestuser $1 !" /etc/ftpaccess-`date +%m%d%y` > /etc/ftpaccess
#wu-FTP#      sed "/^restricted-uid/s!restricted-uid !restricted-uid $1 !" /etc/ftpaccess-`date +%m%d%y` > /etc/ftpaccess
#wu-FTP#      echo "guest-root /home/$1/public_html $1" >> /etc/ftpaccess
      echo "7) Add user to vsftpd chroot list"
      echo $1 >> /etc/vsftpd/vsftpd.chroot_list
      echo "8) Setting Disk Quotas to default 50Mb limit:"
      # Use user johndoe as a prototype.
      edquota -p johndoe $1
      echo "9) Admin Follow-up:"
      echo "   Modify quota.user if different than default"
      echo "   Make changes to Bind name services on dns1 and dns2 if necessary"
      echo "   Change /etc/http/conf/httpd.conf or"
      echo "   add config to /etc/http/conf.d/ if using a new domain name"
      echo "   Add e-mail aliases to mail server if necessary"
   fi
fi
FYI: Sample robots.txt files:
Useful links and resources:
"Ubuntu Unleashed 2017 edition: Covering 16.10, 17.04 and 17.10" (12th Edition) by Matthew Helmke, Andrew Hudson and Paul Hudson; Sams Publishing, ISBN# 0134511182
"Ubuntu Unleashed 2013 edition: Covering 12.10 and 13.04" (8th Edition) by Matthew Helmke, Andrew Hudson and Paul Hudson; Sams Publishing, ISBN# 0672336243 (Dec 15, 2012)
"Ubuntu Unleashed 2012 edition: Covering 11.10 and 12.04" (7th Edition) by Matthew Helmke, Andrew Hudson and Paul Hudson; Sams Publishing, ISBN# 0672335786 (Jan 16, 2012)
"Red Hat Enterprise Linux 7: Desktops and Administration" by Richard Petersen; Surfing Turtle Press, ISBN# 1936280620 (Jan 13, 2017)
"Fedora 18 Desktop Handbook" by Richard Petersen; Surfing Turtle Press, ISBN# 1936280639 (Mar 6, 2013)
"Fedora 18 Networking and Servers" by Richard Petersen; Surfing Turtle Press, ISBN# 1936280698 (Mar 29, 2013)
"Fedora 14 Desktop Handbook" by Richard Petersen; Surfing Turtle Press, ISBN# 1936280167 (Nov 30, 2010)
"Fedora 14 Administration and Security" by Richard Petersen; Surfing Turtle Press, ISBN# 1936280221 (Jan 6, 2011)
"Fedora 14 Networking and Servers" by Richard Petersen; Surfing Turtle Press, ISBN# 1936280191 (Dec 26, 2010)
"A Practical Guide to Ubuntu Linux (Versions 8.10 and 8.04)" by Mark Sobell; Prentice Hall PTR, 2nd edition, ISBN# 0137003889 (Jan 9, 2009)
"Fedora 10 and Red Hat Enterprise Linux Bible" by Christopher Negus; Wiley, ISBN# 0470413395
"Red Hat Fedora 6 and Enterprise Linux Bible" by Christopher Negus; Wiley, ISBN# 047008278X
"Fedora 7 & Red Hat Enterprise Linux: The Complete Reference" by Richard Petersen; McGraw-Hill, ISBN# 0071486429
"Red Hat Fedora Core 6 Unleashed" by Paul Hudson, Andrew Hudson; Sams, ISBN# 0672329298
"Red Hat Linux Fedora 3 Unleashed" by Bill Ball, Hoyt Duff; Sams, ISBN# 0672327082
"Red Hat Linux 9 Unleashed" by Bill Ball, Hoyt Duff; Sams, ISBN# 0672325888 (May 8, 2003)
I have the Red Hat 6 version and I have found it to be very helpful. I have found it to be far more complete than the other Linux books; it is the most complete general Linux book in publication. While other books in the "Unleashed" series have disappointed me, this book is the best out there.
"Apache Server 2 Bible" by Mohammed J. Kabir; Hungry Minds, ISBN# 0764548212
This book is very complete covering all aspects in detail. It is not your basic reprint of the apache.org documents like so many others."Pro DNS and Bind" by Ronald Aitchison Apress, ISBN# 1590594940
Date: 2004/07/26 15:34:42 Revision: 10.4
This document available in Postscript.and PDF.1.1 About the FAQ
This document is available in Postscript and PDF.

1.1 About the FAQ

This collection of Frequently Asked Questions (FAQs) and answers has been compiled over a period of years, seeing which questions people ask about firewalls in such fora as Usenet, mailing lists, and Web sites. If you have a question, looking here to see whether it's answered before posting your question is good form. Don't send your questions about firewalls to the FAQ maintainers.
The maintainers welcome input and comments on the contents of this FAQ. Comments related to the FAQ should be addressed to firstname.lastname@example.org. Before you send us mail, please be sure to see sections 1.2 and 1.3 to make sure this is the right document for you to be reading.

1.2 For Whom Is the FAQ Written?
Firewalls have come a long way from the days when this FAQ started. They've gone from being highly customized systems administered by their implementors to a mainstream commodity. Firewalls are no longer solely in the hands of those who design and implement security systems; even security-conscious end-users have them at home.
We wrote this FAQ for computer systems developers and administrators. We have tried to be fairly inclusive, making room for the newcomers, but we still assume some basic technical background. If you find that you don't understand this document, but think that you need to know more about firewalls, it might well be that you actually need to get more background in computer networking first. We provide references that have helped us; perhaps they'll also help you.
We focus predominately on ``network'' firewalls, but ``host'' or ``personal'' firewalls will be addressed where appropriate.

1.3 Before Sending Mail
Note that this collection of frequently-asked questions is a result of interacting with many people of different backgrounds in a wide variety of public fora. The firewalls-faq address is not a help desk. If you're trying to use an application that says that it's not working because of a firewall and you think that you need to remove your firewall, please do not send us mail asking how.
If you want to know how to ``get rid of your firewall'' because you cannot use some application, do not send us mail asking for help. We cannot help you. Really.
Who can help you? Good question. That will depend on what exactly the problem is, but here are several pointers. If none of these works, please don't ask us for any more. We don't know.
The FAQ can be found on the Web at
It's also posted monthly to
Posted versions are archived in all the usual places. Unfortunately, the version posted to Usenet and archived from that version lacks the pretty pictures and useful hyperlinks found in the web version.

1.5 Where Can I Find Non-English Versions of the FAQ?
Several translations are available. (If you've done a translation and it's not listed here, please write us so we can update the master document.)
Norwegian Translation by Jon Haugsand http://helmersol.nr.no/haandbok/doc/brannmur/brannmur-faq.html

1.6 Contributors
Many people have written helpful suggestions and thoughtful commentary. We're grateful to all contributors. We'd like to thank a few by name: Keinanen Vesa, Allen Leibowitz, Brent Chapman, Brian Boyle, D. Clyde Williamson, Richard Reiner, Humberto Ortiz Zuazaga, and Theodore Hope.

1.7 Copyright and Usage
Copyright ©1995-1996, 1998 Marcus J. Ranum. Copyright ©1998-2002 Matt Curtin. Copyright 2004, Paul D. Robertson. All rights reserved. This document may be used, reprinted, and redistributed as is providing this copyright notice and all attributions remain intact. Translations of the complete text from the original English to other languages are also explicitly allowed. Translators may add their names to the ``Contributors'' section.
Before being able to understand a complete discussion of firewalls, it's important to understand the basic principles that make firewalls work.

2.1 What is a network firewall?
A firewall is a system or group of systems that enforces an access control policy between two or more networks. The actual means by which this is accomplished varies widely, but in principle, the firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is that it implements an access control policy. If you don't have a good idea of what kind of access you want to allow or to deny, a firewall really won't help you. It's also important to recognize that the firewall's configuration, because it is a mechanism for enforcing policy, imposes its policy on everything behind it. Administrators for firewalls managing the connectivity for a large number of hosts therefore have a heavy responsibility.

2.2 Why would I want a firewall?
The Internet, like any other society, is plagued with the kind of jerks who enjoy the electronic equivalent of writing on other people's walls with spraypaint, tearing their mailboxes off, or just sitting in the street blowing their car horns. Some people try to get real work done over the Internet, and others have sensitive or proprietary data they must protect. Usually, a firewall's purpose is to keep the jerks out of your network while still letting you get your job done.
Many traditional-style corporations and data centers have computing security policies and practices that must be followed. In a case where a company's policies dictate how data must be protected, a firewall is very important, since it is the embodiment of the corporate policy. Frequently, the hardest part of hooking to the Internet, if you're a large company, is not justifying the expense or effort, but convincing management that it's safe to do so. A firewall provides not only real security--it often plays an important role as a security blanket for management.
Lastly, a firewall can act as your corporate ``ambassador'' to the Internet. Many corporations use their firewall systems as a place to store public information about corporate products and services, files to download, bug-fixes, and so forth. Several of these systems have become important parts of the Internet service structure (e.g., UUnet.uu.net, whitehouse.gov, gatekeeper.dec.com) and have reflected well on their organizational sponsors. Note that while this is historically true, most organizations now place public information on a Web server, often protected by a firewall, but not normally on the firewall itself.

2.3 What can a firewall protect against?
Some firewalls permit only email traffic through them, thereby protecting the network against any attacks other than attacks against the email service. Other firewalls provide less strict protections, and block services that are known to be problems.
Generally, firewalls are configured to protect against unauthenticated interactive logins from the ``outside'' world. This, more than anything, helps prevent vandals from logging into machines on your network. More elaborate firewalls block traffic from the outside to the inside, but permit users on the inside to communicate freely with the outside. The firewall can protect you against any type of network-borne attack if you unplug it.
Firewalls are also important since they can provide a single ``choke point'' where security and audit can be imposed. Unlike in a situation where a computer system is being attacked by someone dialing in with a modem, the firewall can act as an effective ``phone tap'' and tracing tool. Firewalls provide an important logging and auditing function; often they provide summaries to the administrator about what kinds and amount of traffic passed through it, how many attempts there were to break into it, etc.
Because of this, firewall logs are critically important data. They can be used as evidence in a court of law in most countries. You should safeguard, analyze and protect your firewall logs accordingly.
This is an important point: providing this ``choke point'' can serve the same purpose on your network as a guarded gate can for your site's physical premises. That means anytime you have a change in ``zones'' or levels of sensitivity, such a checkpoint is appropriate. A company rarely has only an outside gate and no receptionist or security staff to check badges on the way in. If there are layers of security on your site, it's reasonable to expect layers of security on your network.

2.4 What can't a firewall protect against?
Firewalls can't protect against attacks that don't go through the firewall. Many corporations that connect to the Internet are very concerned about proprietary data leaking out of the company through that route. Unfortunately for those concerned, a magnetic tape, compact disc, DVD, or USB flash drives can just as effectively be used to export data. Many organizations that are terrified (at a management level) of Internet connections have no coherent policy about how dial-in access via modems should be protected. It's silly to build a six-foot thick steel door when you live in a wooden house, but there are a lot of organizations out there buying expensive firewalls and neglecting the numerous other back-doors into their network. For a firewall to work, it must be a part of a consistent overall organizational security architecture. Firewall policies must be realistic and reflect the level of security in the entire network. For example, a site with top secret or classified data doesn't need a firewall at all: they shouldn't be hooking up to the Internet in the first place, or the systems with the really secret data should be isolated from the rest of the corporate network.
Another thing a firewall can't really protect you against is traitors or idiots inside your network. While an industrial spy might export information through your firewall, he's just as likely to export it through a telephone, FAX machine, or Compact Disc. CDs are a far more likely means for information to leak from your organization than a firewall. Firewalls also cannot protect you against stupidity. Users who reveal sensitive information over the telephone are good targets for social engineering; an attacker may be able to break into your network by completely bypassing your firewall, if he can find a ``helpful'' employee inside who can be fooled into giving access to a modem pool. Before deciding this isn't a problem in your organization, ask yourself how much trouble a contractor has getting logged into the network or how much difficulty a user who forgot his password has getting it reset. If the people on the help desk believe that every call is internal, you have a problem that can't be fixed by tightening controls on the firewalls.
Firewalls can't protect against tunneling over most application protocols to trojaned or poorly written clients. There are no magic bullets, and a firewall is not an excuse to not implement software controls on internal networks or ignore host security on servers. Tunneling ``bad'' things over HTTP, SMTP, and other protocols is quite simple and trivially demonstrated. Security isn't ``fire and forget''.
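To see how trivially data can be tunneled through a port-80-only policy, here is a minimal sketch: arbitrary bytes are wrapped in a perfectly ordinary-looking HTTP request, riding in the query string. The hostname and path are hypothetical placeholders, not from any real attack tool.

```python
import base64

def smuggle_over_http(payload: bytes, host: str = "innocuous.example.com") -> str:
    """Wrap arbitrary bytes in a well-formed HTTP GET request.

    A firewall that merely permits ``port 80 outbound'' sees only a
    normal-looking request; the payload rides in the query string.
    """
    encoded = base64.urlsafe_b64encode(payload).decode("ascii")
    return (
        f"GET /search?q={encoded} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "User-Agent: Mozilla/5.0\r\n"
        "\r\n"
    )

request = smuggle_over_http(b"secret design documents")
# On the far side, the "query" decodes right back to the original payload.
recovered = base64.urlsafe_b64decode(request.split("q=")[1].split(" ")[0])
assert recovered == b"secret design documents"
```

The point is not this particular encoding, which is deliberately naive, but that nothing in the packet headers distinguishes it from legitimate web traffic.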
Lastly, firewalls can't protect against bad things being allowed through them. For instance, many Trojan Horses use the Internet Relay Chat (IRC) protocol to allow an attacker to control a compromised internal host from a public IRC server. If you allow any internal system to connect to any external system, then your firewall will provide no protection from this vector of attack.

2.5 What about viruses and other malware?
Firewalls can't protect very well against things like viruses or malicious software (malware). There are too many ways of encoding binary files for transfer over networks, and too many different architectures and viruses to try to search for them all. In other words, a firewall cannot replace security-consciousness on the part of your users. In general, a firewall cannot protect against a data-driven attack--attacks in which something is mailed or copied to an internal host where it is then executed. This form of attack has occurred in the past against various versions of sendmail, ghostscript, scripting mail user agents like Outlook, and Web browsers like Internet Explorer.
Organizations that are deeply concerned about viruses should implement organization-wide virus control measures. Rather than only trying to screen viruses out at the firewall, make sure that every vulnerable desktop has virus scanning software that runs when the machine is rebooted. Blanketing your network with virus scanning software will protect against viruses that come in via floppy disks, CDs, modems, and the Internet. Trying to block viruses at the firewall will only protect against viruses from the Internet. That said, virus scanning at the firewall or e-mail gateway will still stop a large number of infections.
Nevertheless, an increasing number of firewall vendors are offering ``virus detecting'' firewalls. They're probably only useful for naive users exchanging Windows-on-Intel executable programs and malicious-macro-capable application documents. There are many firewall-based approaches for dealing with problems like the ``ILOVEYOU'' worm and related attacks, but these are really oversimplified approaches that try to limit the damage of something that is so stupid it never should have occurred in the first place. Do not count on any protection from attackers with this feature. (Since ``ILOVEYOU'' went around, we've seen at least a half-dozen similar attacks, including Melissa, Happy99, Code Red, and Badtrans.B, all of which were happily passed through many virus-detecting firewalls and e-mail gateways.)
A strong firewall is never a substitute for sensible software that recognizes the nature of what it's handling--untrusted data from an unauthenticated party--and behaves appropriately. Do not think that because ``everyone'' is using that mailer or because the vendor is a gargantuan multinational company, you're safe. In fact, it isn't true that ``everyone'' is using any mailer, and companies that specialize in turning technology invented elsewhere into something that's ``easy to use'' without any expertise are more likely to produce software that can be fooled. Further consideration of this topic would be worthwhile, but is beyond the scope of this document.

2.6 Will IPSEC make firewalls obsolete?
Some have argued that this is the case. Before pronouncing such a sweeping prediction, however, it's worthwhile to consider what IPSEC is and what it does. Once we know this, we can consider whether IPSEC will solve the problems that we're trying to solve with firewalls.
IPSEC (IP SECurity) refers to a set of standards developed by the Internet Engineering Task Force (IETF). There are many documents that collectively define what is known as ``IPSEC''. IPSEC solves two problems which have plagued the IP protocol suite for years: host-to-host authentication (which will let hosts know that they're talking to the hosts they think they are) and encryption (which will prevent attackers from being able to watch the traffic going between machines).
Note that neither of these problems is what firewalls were created to solve. Although firewalls can help to mitigate some of the risks present on an Internet without authentication or encryption, there are really two classes of problems here: integrity and privacy of the information flowing between hosts and the limits placed on what kinds of connectivity are allowed between different networks. IPSEC addresses the former class and firewalls the latter.
What this means is that one will not eliminate the need for the other, but it does create some interesting possibilities when we look at combining firewalls with IPSEC-enabled hosts. Namely, such things as vendor-independent virtual private networks (VPNs), better packet filtering (by filtering on whether packets have the IPSEC authentication header), and application-layer firewalls will be able to have better means of host verification by actually using the IPSEC authentication header instead of ``just trusting'' the IP address presented.

2.7 What are good sources of print information on firewalls?
There are several books that touch on firewalls. The best known are:
Related references are:
There are a number of basic design issues that should be addressed by the lucky person who has been tasked with the responsibility of designing, specifying, and implementing or overseeing the installation of a firewall.
The first and most important decision reflects the policy of how your company or organization wants to operate the system: is the firewall in place explicitly to deny all services except those critical to the mission of connecting to the Net, or is the firewall in place to provide a metered and audited method of ``queuing'' access in a non-threatening manner? There are degrees of paranoia between these positions; the final stance of your firewall might be more the result of a political than an engineering decision.
The second is: what level of monitoring, redundancy, and control do you want? Having established the acceptable risk level (i.e., how paranoid you are) by resolving the first issue, you can form a checklist of what should be monitored, permitted, and denied. In other words, you start by figuring out your overall objectives, and then combine a needs analysis with a risk assessment, and sort the almost always conflicting requirements out into a laundry list that specifies what you plan to implement.
The third issue is financial. We can't address this one here in anything but vague terms, but it's important to try to quantify any proposed solutions in terms of how much it will cost either to buy or to implement. For example, a complete firewall product may cost between $100,000 at the high end, and free at the low end. The free option, of doing some fancy configuring on a Cisco or similar router, will cost nothing but staff time and a few cups of coffee. Implementing a high end firewall from scratch might cost several man-months, which may equate to $30,000 worth of staff salary and benefits. The systems management overhead is also a consideration. Building a home-brew firewall is fine, but it's important to build it so that it doesn't require constant (and expensive) attention. It's important, in other words, to evaluate firewalls not only in terms of what they cost now, but continuing costs such as support.
On the technical side, there are a couple of decisions to make, based on the fact that for all practical purposes what we are talking about is a static traffic routing service placed between the network service provider's router and your internal network. The traffic routing service may be implemented at an IP level via something like screening rules in a router, or at an application level via proxy gateways and services.
The decision to make is whether to place an exposed stripped-down machine on the outside network to run proxy services for telnet, FTP, news, etc., or whether to set up a screening router as a filter, permitting communication with one or more internal machines. There are benefits and drawbacks to both approaches, with the proxy machine providing a greater level of audit and, potentially, security in return for increased cost in configuration and a decrease in the level of service that may be provided (since a proxy needs to be developed for each desired service). The old trade-off between ease-of-use and security comes back to haunt us with a vengeance.

3.2 What are the basic types of firewalls?
Conceptually, there are two types of firewalls: network layer and application layer.
They are not as different as you might think, and the latest technologies are blurring the distinction to the point where it's no longer clear if either one is ``better'' or ``worse.'' As always, you need to be careful to pick the type that meets your needs.
Which is which depends on what mechanisms the firewall uses to pass traffic from one security zone to another. The International Standards Organization (ISO) Open Systems Interconnect (OSI) model for networking defines seven layers, where each layer provides services that ``higher-level'' layers depend on. In order from the bottom, these layers are physical, data link, network, transport, session, presentation, and application.
The important thing to recognize is that the lower-level the forwarding mechanism, the less examination the firewall can perform. Generally speaking, lower-level firewalls are faster, but are easier to fool into doing the wrong thing.
These days, most firewalls fall into the ``hybrid'' category, which do network filtering as well as some amount of application inspection. The amount changes depending on the vendor, product, protocol and version, so some level of digging and/or testing is often necessary.

3.2.1 Network layer firewalls
These generally make their decisions based on the source, destination addresses and ports (see Appendix 6 for a more detailed discussion of ports) in individual IP packets. A simple router is the ``traditional'' network layer firewall, since it is not able to make particularly sophisticated decisions about what a packet is actually talking to or where it actually came from. Modern network layer firewalls have become increasingly sophisticated, and now maintain internal information about the state of connections passing through them, the contents of some of the data streams, and so on. One thing that's an important distinction about many network layer firewalls is that they route traffic directly through them, so to use one you either need to have a validly assigned IP address block or to use a ``private internet'' address block. Network layer firewalls tend to be very fast and tend to be very transparent to users.
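The first-match decision logic such devices apply can be sketched in a few lines. This is a toy model, not any vendor's implementation; the rules and addresses (documentation ranges from RFC 5737) are made up for illustration.

```python
import ipaddress
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    action: str            # "permit" or "deny"
    src: str               # source network, CIDR notation
    dst: str               # destination network, CIDR notation
    dport: Optional[int]   # destination port; None matches any port

def screen(rules, src, dst, dport, default="deny"):
    """First matching rule wins, like a router ACL; implicit default deny."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for r in rules:
        if (s in ipaddress.ip_network(r.src)
                and d in ipaddress.ip_network(r.dst)
                and (r.dport is None or r.dport == dport)):
            return r.action
    return default

rules = [
    Rule("deny", "10.0.0.0/8", "0.0.0.0/0", None),     # spoofed private source
    Rule("permit", "0.0.0.0/0", "192.0.2.10/32", 25),  # SMTP to the mail host
]
assert screen(rules, "10.1.2.3", "192.0.2.10", 25) == "deny"
assert screen(rules, "198.51.100.7", "192.0.2.10", 25) == "permit"
assert screen(rules, "198.51.100.7", "192.0.2.10", 23) == "deny"
```

Note how little the filter sees: addresses and ports only. That is exactly why a plain packet screen cannot make sophisticated judgments about what a packet is ``actually'' doing.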
In Figure 1, a network layer firewall called a ``screened host firewall'' is represented. In a screened host firewall, access to and from a single host is controlled by means of a router operating at a network layer. The single host is a bastion host; a highly-defended and secured strong-point that (hopefully) can resist attack.
Example Network layer firewall: In Figure 2, a network layer firewall called a ``screened subnet firewall'' is represented. In a screened subnet firewall, access to and from a whole network is controlled by means of a router operating at a network layer. It is similar to a screened host, except that it is, effectively, a network of screened hosts.

3.2.2 Application layer firewalls
These generally are hosts running proxy servers, which permit no traffic directly between networks, and which perform elaborate logging and auditing of traffic passing through them. Since the proxy applications are software components running on the firewall, it is a good place to do lots of logging and access control. Application layer firewalls can be used as network address translators, since traffic goes in one ``side'' and out the other, after having passed through an application that effectively masks the origin of the initiating connection. Having an application in the way in some cases may impact performance and may make the firewall less transparent. Early application layer firewalls, such as those built using the TIS firewall toolkit, are not particularly transparent to end users and may require some training. Modern application layer firewalls are often fully transparent. Application layer firewalls tend to provide more detailed audit reports and tend to enforce more conservative security models than network layer firewalls.
Example Application layer firewall: In Figure 3, an application layer firewall called a ``dual homed gateway'' is represented. A dual homed gateway is a highly secured host that runs proxy software. It has two network interfaces, one on each network, and blocks all traffic passing through it.
Most firewalls now lie someplace between network layer firewalls and application layer firewalls. As expected, network layer firewalls have become increasingly ``aware'' of the information going through them, and application layer firewalls have become increasingly ``low level'' and transparent. The end result is that now there are fast packet-screening systems that log and audit data as they pass through the system. Increasingly, firewalls (network and application layer) incorporate encryption so that they may protect traffic passing between them over the Internet. Firewalls with end-to-end encryption can be used by organizations with multiple points of Internet connectivity to use the Internet as a ``private backbone'' without worrying about their data or passwords being sniffed. (IPSEC, described in Section 2.6, is playing an increasingly significant role in the construction of such virtual private networks.)

3.3 What are proxy servers and how do they work?
A proxy server (sometimes referred to as an application gateway or forwarder) is an application that mediates traffic between a protected network and the Internet. Proxies are often used instead of router-based traffic controls, to prevent traffic from passing directly between networks. Many proxies contain extra logging or support for user authentication. Since proxies must ``understand'' the application protocol being used, they can also implement protocol specific security (e.g., an FTP proxy might be configurable to permit incoming FTP and block outgoing FTP).
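The FTP example can be made concrete with a tiny sketch of the decision an application-level proxy makes: because it parses the protocol, it can distinguish a download from an upload, something no address-and-port filter can do. The command list and policy here are hypothetical.

```python
# Hypothetical policy: permit downloads (RETR) through the FTP proxy,
# but block the upload commands (STOR/STOU/APPE) that would move data
# out of the protected network.
BLOCKED_FTP_COMMANDS = {"STOR", "STOU", "APPE"}

def ftp_proxy_allows(command_line: str) -> bool:
    """Decide per FTP command -- possible only because the proxy
    understands the application protocol it is relaying."""
    verb = command_line.strip().split()[0].upper()
    return verb not in BLOCKED_FTP_COMMANDS

assert ftp_proxy_allows("RETR report.pdf")        # incoming transfer: allowed
assert not ftp_proxy_allows("STOR payroll.db")    # outgoing transfer: blocked
```

A network layer firewall sees both commands as identical TCP traffic to port 21; the proxy sees the difference.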
Proxy servers are application specific. In order to support a new protocol via a proxy, a proxy must be developed for it. One popular set of proxy servers is the TIS Internet Firewall Toolkit (``FWTK'') which includes proxies for Telnet, rlogin, FTP, the X Window System, HTTP/Web, and NNTP/Usenet news. SOCKS is a generic proxy system that can be compiled into a client-side application to make it work through a firewall. Its advantage is that it's easy to use, but it doesn't support the addition of authentication hooks or protocol specific logging. For more information on SOCKS, see http://www.socks.nec.com/.

3.4 What are some cheap packet screening tools?
The Texas A&M University security tools include software for implementing screening routers. Karlbridge is a PC-based screening router kit available from ftp://ftp.net.ohio-state.edu/pub/kbridge/.
There are numerous kernel-level packet screens, including ipf, ipfw, ipchains, pf, and ipfwadm. Typically, these are included in various freeware Unix implementations, such as FreeBSD, OpenBSD, NetBSD, and Linux. You might also find these tools available in your commercial Unix implementation.
If you're willing to get your hands a little dirty, it's completely possible to build a secure and fully functional firewall for the price of hardware and some of your time.

3.5 What are some reasonable filtering rules for a kernel-based packet screen?
This example is written specifically for ipfwadm on Linux, but the principles (and even much of the syntax) apply for other kernel interfaces for packet screening on ``open source'' Unix systems.
There are four basic categories covered by the ipfwadm rules:

-A  Packet accounting
-I  Input firewall
-O  Output firewall
-F  Forwarding firewall
ipfwadm also has masquerading (-M) capabilities. For more information on switches and options, see the ipfwadm man page.

3.5.1 Implementation
Here, our organization is using a private (RFC 1918) Class C network 192.168.1.0. Our ISP has assigned us the address 192.0.2.1 for our gateway's external interface and 192.0.2.2 for our external mail server. Organizational policy says:
The following block of commands can be placed in a system boot file (perhaps rc.local on Unix systems).

ipfwadm -F -f
ipfwadm -F -p deny
ipfwadm -F -i m -b -P tcp -S 0.0.0.0/0 1024:65535 -D 192.0.2.2 25
ipfwadm -F -i m -b -P tcp -S 0.0.0.0/0 1024:65535 -D 192.0.2.2 53
ipfwadm -F -i m -b -P udp -S 0.0.0.0/0 1024:65535 -D 192.0.2.2 53
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0 -W eth0
/sbin/route add -host 192.0.2.2 gw 192.168.1.2

3.5.2 Explanation

3.6 What are some reasonable filtering rules for a Cisco?
The example in Figure 4 shows one possible configuration for using the Cisco as a filtering router. It is a sample that shows the implementation of a specific policy. Your policy will undoubtedly vary.
In this example, a company has the Class C network address 203.0.113.0. The company network is connected to the Internet via an IP service provider. Company policy is to allow everybody access to Internet services, so all outgoing connections are accepted. All incoming connections go through ``mailhost''. Mail and DNS are the only incoming services.

3.6.1 Implementation
Only incoming packets from the Internet are checked in this configuration. Rules are tested in order and processing stops when the first match is found. There is an implicit deny rule at the end of an access list that denies everything. This IP access list assumes that you are running Cisco IOS v. 10.3 or later.

no ip source-route
!
interface ethernet 0
ip address 203.0.113.1
no ip directed-broadcast
!
interface serial 0
no ip directed-broadcast
ip access-group 101 in
!
access-list 101 deny ip 127.0.0.0 0.255.255.255 any
access-list 101 deny ip 10.0.0.0 0.255.255.255 any
access-list 101 deny ip 172.16.0.0 0.15.255.255 any
access-list 101 deny ip 192.168.0.0 0.0.255.255 any
access-list 101 deny ip any 0.0.0.255 255.255.255.0
access-list 101 deny ip any 0.0.0.0 255.255.255.0
!
access-list 101 deny ip 203.0.113.0 0.0.0.255 any
access-list 101 permit tcp any any established
!
access-list 101 permit tcp any host 203.0.113.2 eq smtp
access-list 101 permit tcp any host 203.0.113.2 eq domain
access-list 101 permit udp any host 203.0.113.2 eq domain
!
access-list 101 deny tcp any any range 6000 6003
access-list 101 deny tcp any any range 2000 2003
access-list 101 deny tcp any any eq 2049
access-list 101 deny udp any any eq 2049
!
access-list 101 permit tcp any eq 20 any gt 1024
!
access-list 101 permit icmp any any
!
snmp-server community FOOBAR RO 2
line vty 0 4
access-class 2 in
access-list 2 permit 203.0.113.0 0.0.0.255

3.6.2 Explanations
Use at least Cisco version 9.21 so you can filter incoming packets and check for address spoofing. It's still better to use 10.3, where you get some extra features (like filtering on source port) and some improvements on filter syntax.
You still have a few ways to make your setup stronger. Block all incoming TCP connections and tell users to use passive-FTP clients. You can also block outgoing ICMP echo-reply and destination-unreachable messages to hide your network and to prevent use of network scanners. Cisco.com used to have an archive of examples for building firewalls using Cisco routers, but it doesn't seem to be online anymore. There are some notes on Cisco access control lists, at least, at ftp://ftp.cisco.com/pub/mibs/app_notes/access-lists.

3.7 What are the critical resources in a firewall?
It's important to understand the critical resources of your firewall architecture, so when you do capacity planning, performance optimizations, etc., you know exactly what you need to do, and how much you need to do it in order to get the desired result.
What exactly the firewall's critical resources are tends to vary from site to site, depending on the sort of traffic that loads the system. Some people think they'll automatically be able to increase the data throughput of their firewall by putting in a box with a faster CPU, or another CPU, when this isn't necessarily the case. Potentially, this could be a large waste of money that doesn't do anything to solve the problem at hand or provide the expected scalability.
On busy systems, memory is extremely important. You have to have enough RAM to support every instance of every program necessary to service the load placed on that machine. Otherwise, the swapping will start and the productivity will stop. Light swapping isn't usually much of a problem, but if a system's swap space begins to get busy, then it's usually time for more RAM. A system that's heavily swapping is often relatively easy to push over the edge in a denial-of-service attack, or simply fall behind in processing the load placed on it. This is where long email delays start.
Beyond the system's requirement for memory, it's useful to understand that different services use different system resources. So the configuration that you have for your system should be indicative of the kind of load you plan to service. A 1400 MHz processor isn't going to do you much good if all you're doing is netnews and mail, and are trying to do it on an IDE disk with an ISA controller.

Table 1: Critical Resources for Firewall Services

Service     Critical Resource
Email       Disk I/O
Netnews     Disk I/O
Web         Host OS socket performance
IP Routing  Host OS socket performance
Web Cache   Host OS socket performance, Disk I/O

3.8 What is a DMZ, and why do I want one?
``DMZ'' is an abbreviation for ``demilitarized zone''. In the context of firewalls, this refers to a part of the network that is neither part of the internal network nor directly part of the Internet. Typically, this is the area between your Internet access router and your bastion host, though it can be between any two policy-enforcing components of your architecture.
A DMZ can be created by putting access control lists on your access router. This minimizes the exposure of hosts on your external LAN by allowing only recognized and managed services on those hosts to be accessible by hosts on the Internet. Many commercial firewalls simply make a third interface off of the bastion host and label it the DMZ; the point is that the network is neither ``inside'' nor ``outside''.
For example, a web server running on NT might be vulnerable to a number of denial-of-service attacks against such services as RPC, NetBIOS and SMB. These services are not required for the operation of a web server, so blocking TCP connections to ports 135, 137, 138, and 139 on that host will reduce the exposure to a denial-of-service attack. In fact, if you block everything but HTTP traffic to that host, an attacker will only have one service to attack.
This illustrates an important principle: never offer attackers more to work with than is absolutely necessary to support the services you want to offer the public.

3.9 How might I increase the security and scalability of my DMZ?
A common approach for an attacker is to break into a host that's vulnerable to attack, and exploit trust relationships between the vulnerable host and more interesting targets.
If you are running a number of services that have different levels of security, you might want to consider breaking your DMZ into several ``security zones''. This can be done by having a number of different networks within the DMZ. For example, the access router could feed two Ethernets, both protected by ACLs, and therefore in the DMZ.
On one of the Ethernets, you might have hosts whose purpose is to service your organization's need for Internet connectivity. These will likely relay mail, news, and host DNS. On the other Ethernet could be your web server(s) and other hosts that provide services for the benefit of Internet users.
In many organizations, services for Internet users tend to be less carefully guarded and are more likely to be doing insecure things. (For example, in the case of a web server, unauthenticated and untrusted users might be running CGI, PHP, or other executable programs. This might be reasonable for your web server, but brings with it a certain set of risks that need to be managed. It is likely these services are too risky for an organization to run them on a bastion host, where a slip-up can result in the complete failure of the security mechanisms.)
By putting hosts with similar levels of risk on networks together in the DMZ, you can help minimize the effect of a break-in at your site. If someone breaks into your web server by exploiting some bug in your web server software, they'll not be able to use it as a launching point to break into your private network if the web servers are on a separate LAN from the bastion hosts, and you don't have any trust relationships between the web server and bastion host.
Now, keep in mind that this is Ethernet. If someone breaks into your web server, and your bastion host is on the same Ethernet, an attacker can install a sniffer on your web server, and watch the traffic to and from your bastion host. This might reveal things that can be used to break into the bastion host and gain access to the internal network. (Switched Ethernet can reduce your exposure to this kind of problem, but will not eliminate it.)
By splitting services up not only by host but also by network, and limiting the level of trust between hosts on those networks, you can greatly reduce the likelihood of a break-in on one host being used to break into the other. Succinctly stated: breaking into the web server in this case won't make it any easier to break into the bastion host.
You can also increase the scalability of your architecture by placing hosts on different networks. The fewer machines that there are to share the available bandwidth, the more bandwidth that each will get.

3.10 What is a `single point of failure', and how do I avoid having one?
An architecture whose security hinges upon one mechanism has a single point of failure. The software that runs bastion hosts has bugs. Applications have bugs. The software that controls routers has bugs. It makes sense to use all of these components to build a securely designed network, and to use them in redundant ways.
If your firewall architecture is a screened subnet, you have two packet filtering routers and a bastion host. (See question 3.2 from this section.) Your Internet access router will not permit traffic from the Internet to get all the way into your private network. However, if you don't enforce that rule with any other mechanisms on the bastion host and/or choke router, only one component of your architecture needs to fail or be compromised in order to get inside. On the other hand, if you have a redundant rule on the bastion host, and again on the choke router, an attacker will need to defeat three mechanisms.
Further, if the bastion host or the choke router needs to invoke its rule to block outside access to the internal network, you might want to have it trigger an alarm of some sort, since you know that someone has gotten through your access router.

3.11 How can I block all of the bad stuff?
For firewalls where the emphasis is on security instead of connectivity, you should consider blocking everything by default, and only specifically allowing what services you need on a case-by-case basis.
If you block everything, except a specific set of services, then you've already made your job much easier. Instead of having to worry about every security problem with every product and service around, you only need to worry about every security problem with a specific set of services and products.
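The default-deny stance can be sketched as a simple whitelist check: a (service, direction) pair is permitted only if it was explicitly listed, and everything unanticipated fails closed. The services listed are an invented example, not a recommended policy.

```python
# Sketch of a default-deny service policy: only explicitly allowed
# (service, direction) pairs pass; everything else is rejected by default.
ALLOWED = {
    ("smtp", "inbound"),   # mail to the mail host
    ("http", "outbound"),  # users browsing the web
    ("dns",  "outbound"),  # name resolution
}

def permitted(service: str, direction: str) -> bool:
    """Anything not on the list is denied -- no rule needed to block it."""
    return (service, direction) in ALLOWED

assert permitted("smtp", "inbound")
assert not permitted("telnet", "inbound")   # never listed, so denied
assert not permitted("nfs", "outbound")     # likewise
```

The design point is the failure mode: a new or forgotten service is blocked until someone consciously adds it, rather than exposed until someone notices.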
Before turning on a service, you should consider a couple of questions:
When considering the above questions, keep the following in mind:
A few years ago, someone got the idea that it's a good idea to block ``bad'' web sites, i.e., those that contain material that The Company views as ``inappropriate''. The idea has been increasing in popularity, but there are several things to consider when thinking about implementing such controls in your firewall.
The rule-of-thumb to remember here is that you cannot solve social problems with technology. If there is a problem with someone going to an ``inappropriate'' web site, that is because someone else saw it and was offended by what he saw, or because that person's productivity is below expectations. In either case, those are matters for the personnel department, not the firewall administrator.

4.1 What is source routed traffic and why is it a threat?
Normally, the route a packet takes from its source to its destination is determined by the routers between the source and destination. The packet itself only says where it wants to go (the destination address), and nothing about how it expects to get there.
There is an optional way for the sender of a packet (the source) to include information in the packet that tells the route the packet should take to get to its destination; thus the name ``source routing''. For a firewall, source routing is noteworthy, since an attacker can generate traffic claiming to be from a system ``inside'' the firewall. In general, such traffic wouldn't route to the firewall properly, but with the source routing option, all the routers between the attacker's machine and the target will return traffic along the reverse path of the source route. Implementing such an attack is quite easy; so firewall builders should not discount it as unlikely to happen.
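A screening device defends against this by inspecting the IPv4 options area for a source-route option and dropping any packet that carries one. The following is a simplified sketch of that check (it ignores some rare option-handling corner cases); the option numbers 131 (loose) and 137 (strict) come from the IP specification.

```python
LSRR, SSRR = 131, 137  # IPv4 option numbers for loose/strict source routing

def has_source_route(ip_header: bytes) -> bool:
    """Scan the IPv4 options area for a source-route option.

    A screen that drops such packets defeats the attack described
    above.  Simplified parser: real code must handle malformed
    packets more defensively.
    """
    ihl = (ip_header[0] & 0x0F) * 4        # header length in bytes
    options, i = ip_header[20:ihl], 0
    while i < len(options):
        opt = options[i]
        if opt in (LSRR, SSRR):
            return True
        if opt == 0:                        # End of Option List
            break
        if opt == 1:                        # NOP occupies a single byte
            i += 1
            continue
        length = options[i + 1]
        if length < 2:                      # malformed; a real screen would drop it
            break
        i += length
    return False

# A plain 20-byte header (IHL=5) has no options at all.
plain = bytes([0x45]) + bytes(19)
assert not has_source_route(plain)

# A header with IHL=7 carrying an LSRR option (type, length, pointer,
# one 4-byte hop address, one pad byte).
lsrr_opt = bytes([131, 7, 4]) + bytes(4) + bytes([0])
routed = bytes([0x47]) + bytes(19) + lsrr_opt
assert has_source_route(routed)
```

In practice you would simply enable the router's own knob for this (e.g., the `no ip source-route` setting shown in the Cisco example elsewhere in this document) rather than parse headers yourself.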
In practice, source routing is very little used. In fact, generally the main legitimate use is in debugging network problems or routing traffic over specific links for congestion control for specialized situations. When building a firewall, source routing should be blocked at some point. Most commercial routers incorporate the ability to block source routing specifically, and many versions of Unix that might be used to build firewall bastion hosts have the ability to disable or to ignore source routed traffic.

4.2 What are ICMP redirects and redirect bombs?
An ICMP Redirect tells the recipient system to override something in its routing table. It is legitimately used by routers to tell hosts that the host is using a non-optimal or defunct route to a particular destination, i.e., the host is sending it to the wrong router. The wrong router sends the host back an ICMP Redirect packet that tells the host what the correct route should be. If you can forge ICMP Redirect packets, and if your target host pays attention to them, you can alter the routing tables on the host and possibly subvert the security of the host by causing traffic to flow via a path the network manager didn't intend. ICMP Redirects also may be employed for denial of service attacks, where a host is sent a route that causes it to lose connectivity, or is sent an ICMP Network Unreachable packet telling it that it can no longer access a particular network.
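A selective ICMP screen at the perimeter might look like the following sketch: discard redirects (which can rewrite a host's routing table) and inbound echo requests (which let outsiders map the network), while keeping the ``fragmentation needed'' messages that Path MTU Discovery depends on. The policy shown is illustrative, not a recommendation for every site.

```python
ICMP_REDIRECT = 5
ICMP_ECHO_REQUEST = 8
ICMP_UNREACH, FRAG_NEEDED = 3, 4   # type 3 / code 4 drives Path MTU Discovery

def drop_inbound_icmp(icmp_type: int, icmp_code: int) -> bool:
    """Return True if an inbound ICMP message should be discarded."""
    if icmp_type == ICMP_REDIRECT:
        return True    # never accept route changes from outside
    if icmp_type == ICMP_ECHO_REQUEST:
        return True    # don't let outsiders ping-map the network
    return False       # keep e.g. type 3/code 4 so PMTU discovery works

assert drop_inbound_icmp(ICMP_REDIRECT, 0)
assert drop_inbound_icmp(ICMP_ECHO_REQUEST, 0)
assert not drop_inbound_icmp(ICMP_UNREACH, FRAG_NEEDED)
```

The last assertion is the important one: blanket ICMP blocking breaks Path MTU Discovery, which is exactly the pitfall discussed below.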
Many firewall builders screen ICMP traffic from their network, since it limits the ability of outsiders to ping hosts, or modify their routing tables.
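Selective screening might look like the following hypothetical iptables rules, which drop most inbound ICMP but keep ``fragmentation needed'' (type 3, code 4), which TCP's Path MTU Discovery depends on. Treat this as a sketch to adapt, not a recommended policy:

```shell
# Keep the ICMP that Path MTU Discovery depends on.
iptables -A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT
# Optionally allow rate-limited pings.
iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/second -j ACCEPT
# Drop the rest, including redirects.
iptables -A INPUT -p icmp -j DROP
```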
Before you decide to block all ICMP packets, you should be aware of how the TCP protocol does ``Path MTU Discovery'', to make certain that you don't break connectivity to other sites. If you can't safely block it everywhere, you can consider allowing selected types of ICMP to selected routing devices. If you don't block it, you should at least ensure that your routers and hosts don't respond to broadcast ping packets.

4.3 What about denial of service?
Denial of service is when someone decides to make your network or firewall useless by disrupting it, crashing it, jamming it, or flooding it. The problem with denial of service on the Internet is that it is impossible to prevent. The reason has to do with the distributed nature of the network: every network node is connected via other networks which in turn connect to other networks, etc. A firewall administrator or ISP only has control of a few of the local elements within reach. An attacker can always disrupt a connection ``upstream'' from where the victim controls it. In other words, if someone wanted to take a network off the air, he could do it either by taking the network off the air, or by taking the networks it connects to off the air, ad infinitum. There are many, many, ways someone can deny service, ranging from the complex to the trivial brute-force. If you are considering using Internet for a service which is absolutely time or mission critical, you should consider your fallback position in the event that the network is down or damaged.
TCP/IP's UDP echo service is trivially abused to get two servers to flood a network segment with echo packets. You should consider commenting out unused entries in /etc/inetd.conf on Unix hosts, adding ``no service udp-small-servers'' to Cisco routers, or the equivalent for your components.

4.4 What are some common attacks, and how can I protect my system against them?
Each site is a little different from every other in terms of what attacks are likely to be used against it. Some recurring themes do arise, though.

4.4.1 SMTP Server Hijacking (Unauthorized Relaying)
This is where a spammer will take many thousands of copies of a message and send it to a huge list of email addresses. Because these lists are often so bad, and in order to increase the speed of operation for the spammer, many have resorted to simply sending all of their mail to an SMTP Server that will take care of actually delivering the mail.
Of course, all of the bounces, spam complaints, hate mail, and bad PR come to the site that was used as a relay. There is a very real cost associated with this, mostly in paying people to clean up the mess afterward.
The Mail Abuse Prevention System's Transport Security Initiative maintains a complete description of the problem, and how to configure about every mailer on the planet to protect against this attack.

4.4.2 Exploiting Bugs in Applications
Various versions of web servers, mail servers, and other Internet service software contain bugs that allow remote (Internet) users to do things ranging from gaining control of the machine to making that application crash and just about everything in between.
The exposure to this risk can be reduced by running only necessary services, keeping up to date on patches, and using products that have been around a while.

4.4.3 Bugs in Operating Systems
Again, these are typically initiated by users remotely. Operating systems that are relatively new to IP networking tend to be more problematic, as more mature OSes have had time to find and eliminate their bugs. An attacker can often make the target equipment continuously reboot, crash, lose the ability to talk to the network, or replace files on the machine.
Here, running as few operating system services as possible can help. Also, having a packet filter in front of the operating system can reduce the exposure to a large number of these types of attacks.
And, of course, choosing a stable operating system will help here as well. When selecting an OS, don't be fooled into believing that ``the pricier, the better''. Free operating systems are often much more robust than their commercial counterparts.

5.1 Do I really want to allow everything that my users ask for?
It's entirely possible that the answer is ``no''. Each site has its own policies about what is and isn't needed, but it's important to remember that a large part of the job of being an organization's gatekeeper is education. Users want streaming video, real-time chat, and to be able to offer services to external customers that require interaction with live databases on the internal network.
That doesn't mean that any of these things can be done without presenting more risk to the organization than the supposed ``value'' of heading down that road is worth. Most users don't want to put their organization at risk. They just read the trade rags, see advertisements, and they want to do those things, too. It's important to look into what it is that they really want to do, and to help them understand how they might be able to accomplish their real objective in a more secure manner.
You won't always be popular, and you might even find yourself being given direction to do something incredibly stupid, like ``just open up ports foo through bar''. If that happens, don't worry about it. It would be wise to keep a record of all your exchanges about such an event so that when a 12-year-old script kiddie breaks in, you'll at least be able to separate yourself from the whole mess.

5.2 How do I make Web/HTTP work through my firewall?
There are three ways to do it.
5.3 How do I make SSL work through my firewall?
SSL is a protocol that allows encrypted connections across the Internet. Typically, SSL is used to protect HTTP traffic. However, other protocols (such as telnet) can run atop SSL.
Enabling SSL through your firewall can be done the same way that you would allow HTTP traffic, if it's HTTP that you're using SSL to secure, which is usually true. The only difference is that instead of using something that will simply relay HTTP, you'll need something that can tunnel SSL. This is a feature present on most web object caches.
You can find out more about SSL from Netscape.

5.4 How do I make DNS work with a firewall?
Some organizations want to hide DNS names from the outside. Many experts don't think hiding DNS names is worthwhile, but if site/corporate policy mandates hiding domain names, this is one approach that is known to work. Another reason you may have to hide domain names is if you have a non-standard addressing scheme on your internal network. In that case, you have no choice but to hide those addresses. Don't fool yourself into thinking that if your DNS names are hidden that it will slow an attacker down much if they break into your firewall. Information about what is on your network is too easily gleaned from the networking layer itself. If you want an interesting demonstration of this, ping the subnet broadcast address on your LAN and then do an ``arp -a.'' Note also that hiding names in the DNS doesn't address the problem of host names ``leaking'' out in mail headers, news articles, etc.
This approach is one of many, and is useful for organizations that wish to hide their host names from the Internet. The success of this approach lies in the fact that DNS clients on a machine don't have to talk to a DNS server on that same machine. In other words, just because there's a DNS server on a machine, there's nothing wrong with (and there are often advantages to) redirecting that machine's DNS client activity to a DNS server on another machine.
First, you set up a DNS server on the bastion host that the outside world can talk to. You set this server up so that it claims to be authoritative for your domains. In fact, all this server knows is what you want the outside world to know; the names and addresses of your gateways, your wildcard MX records, and so forth. This is the ``public'' server.
Then, you set up a DNS server on an internal machine. This server also claims to be authoritative for your domains; unlike the public server, this one is telling the truth. This is your ``normal'' nameserver, into which you put all your ``normal'' DNS stuff. You also set this server up to forward queries that it can't resolve to the public server (using a ``forwarders'' line in /etc/named.boot on a Unix machine, for example).
Finally, you set up all your DNS clients (the /etc/resolv.conf file on a Unix box, for instance), including the ones on the machine with the public server, to use the internal server. This is the key.
An internal client asking about an internal host asks the internal server, and gets an answer; an internal client asking about an external host asks the internal server, which asks the public server, which asks the Internet, and the answer is relayed back. A client on the public server's machine works just the same way. An external client, however, asking about an internal host gets back the ``restricted'' answer from the public server.
This approach assumes that there's a packet filtering firewall between these two servers that will allow them to talk DNS to each other, but otherwise restricts DNS between other hosts.
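The split setup described above can be sketched in BIND 4-style configuration. The domain name and addresses below are made up for illustration:

```
# Public server on the bastion host -- claims authority, knows only
# the "restricted" data:
#   /etc/named.boot
primary     example.com     db.example.public

# Internal server -- also authoritative, but telling the truth, and
# forwarding what it can't resolve to the public server:
#   /etc/named.boot
primary     example.com     db.example.internal
forwarders  192.0.2.53
# (192.0.2.53 being the public, bastion-host server)

# Every DNS client, including the bastion host itself:
#   /etc/resolv.conf
nameserver  10.1.1.53
# (10.1.1.53 being the internal server)
```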
Another trick that's useful in this scheme is to employ wildcard PTR records in your IN-ADDR.ARPA domains. These cause an address-to-name lookup for any of your non-public hosts to return something like ``unknown.YOUR.DOMAIN'' rather than an error. This satisfies anonymous FTP sites like ftp.uu.net that insist on having a name for the machines they talk to. This may fail when talking to sites that do a DNS cross-check in which the host name is matched against its address and vice versa.

5.5 How do I make FTP work through my firewall?
Generally, making FTP work through the firewall is done either using a proxy server such as the firewall toolkit's ftp-gw or by permitting incoming connections to the network at a restricted port range, and otherwise restricting incoming connections using something like ``established'' screening rules. The FTP client is then modified to bind the data port to a port within that range. This entails being able to modify the FTP client application on internal hosts.
In some cases, if FTP downloads are all you wish to support, you might want to consider declaring FTP a ``dead protocol'' and letting your users download files via the Web instead. The user interface certainly is nicer, and it gets around the ugly callback port problem. If you choose the FTP-via-Web approach, your users will be unable to FTP files out, which, depending on what you are trying to accomplish, may be a problem.
A different approach is to use the FTP ``PASV'' option to indicate that the remote FTP server should permit the client to initiate connections. The PASV approach assumes that the FTP server on the remote system supports that operation. (See ``Firewall-Friendly FTP''.)
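In PASV mode, the server's ``227'' reply encodes the data address as six comma-separated numbers, with the port split into two bytes. A small sketch of the arithmetic (the reply string below is made up):

```python
import re

def parse_pasv_reply(reply):
    """Extract (host, port) from a '227 Entering Passive Mode' reply.

    The reply carries h1,h2,h3,h4,p1,p2; the data port is p1 * 256 + p2.
    """
    m = re.search(r"(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", reply)
    if m is None:
        raise ValueError("not a PASV reply: %r" % reply)
    h1, h2, h3, h4, p1, p2 = (int(g) for g in m.groups())
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

# Made-up reply: the data port decodes to 4 * 256 + 10 = 1034.
host, port = parse_pasv_reply("227 Entering Passive Mode (10,0,0,2,4,10)")
```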
Other sites prefer to build client versions of the FTP program that are linked against a SOCKS library.

5.6 How do I make Telnet work through my firewall?
Telnet is generally supported either by using an application proxy such as the firewall toolkit's tn-gw, or by simply configuring a router to permit outgoing connections using something like the ``established'' screening rules. Application proxies could be in the form of a standalone proxy running on the bastion host, or in the form of a SOCKS server and a modified client.

5.7 How do I make Finger and whois work through my firewall?
Many firewall admins permit connections to the finger port from only trusted machines, which can issue finger requests in the form: finger user@host.domain@firewall. This approach only works with the standard Unix version of finger. Controlling access to services and restricting them to specific machines is managed using either tcp_wrappers or netacl from the firewall toolkit. This approach will not work on all systems, since some finger servers do not permit user@host@host fingering.
Many sites block inbound finger requests for a variety of reasons, foremost being past security bugs in the finger server (the Morris internet worm made these bugs famous) and the risk of proprietary or sensitive information being revealed in users' finger information. In general, however, if your users are accustomed to putting proprietary or sensitive information in their .plan files, you have a more serious security problem than just a firewall can solve.

5.8 How do I make gopher, archie, and other services work through my firewall?
The majority of firewall administrators choose to support gopher and archie through web proxies, instead of directly. Proxies such as the firewall toolkit's http-gw convert gopher/gopher+ queries into HTML and vice versa. For supporting archie and other queries, many sites rely on Internet-based Web-to-archie servers, such as ArchiePlex. The Web's tendency to make everything on the Internet look like a web service is both a blessing and a curse.
There are many new services constantly cropping up. Often they are misdesigned or are not designed with security in mind, and their designers will cheerfully tell you if you want to use them you need to let port xxx through your router. Unfortunately, not everyone can do that, and so a number of interesting new toys are difficult to use for people behind firewalls. Things like RealAudio, which require direct UDP access, are particularly egregious examples. The thing to bear in mind if you find yourself faced with one of these problems is to find out as much as you can about the security risks that the service may present, before you just allow it through. It's quite possible the service has no security implications. It's equally possible that it has undiscovered holes you could drive a truck through.

5.9 What are the issues about X11 through a firewall?
The X Window System is a very useful system, but unfortunately has some major security flaws. Remote systems that can gain or spoof access to a workstation's X11 display can monitor keystrokes that a user enters, retrieve copies of the contents of their windows, etc.
While attempts have been made to overcome them (e.g., MIT ``Magic Cookie'') it is still entirely too easy for an attacker to interfere with a user's X11 display. Most firewalls block all X11 traffic. Some permit X11 traffic through application proxies such as the DEC CRL X11 proxy (ftp crl.dec.com). The firewall toolkit includes a proxy for X11, called x-gw, which a user can invoke via the Telnet proxy, to create a virtual X11 server on the firewall. When requests are made for an X11 connection on the virtual X11 server, the user is presented with a pop-up asking them if it is OK to allow the connection. While this is a little unaesthetic, it's entirely in keeping with the rest of X11.

5.10 How do I make RealAudio work through my firewall?
RealNetworks maintains some information about how to get RealAudio working through your firewall. It would be unwise to make any changes to your firewall without understanding exactly what the changes will do, and knowing what risks the new changes will bring with them.

5.11 How do I make my web server act as a front-end for a database that lives on my private network?
The best way to do this is to allow very limited connectivity between your web server and your database server via a specific protocol that only supports the level of functionality you're going to use. Allowing raw SQL, or anything else where custom extractions could be performed by an attacker, isn't generally a good idea.
Assume that an attacker is going to be able to break into your web server, and make queries in the same way that the web server can. Is there a mechanism for extracting sensitive information that the web server doesn't need, like credit card information? Can an attacker issue an SQL select and extract your entire proprietary database?
``E-commerce'' applications, like everything else, are best designed with security in mind from the ground up, instead of having security ``added'' as an afterthought. Review your architecture critically, from the perspective of an attacker. Assume that the attacker knows everything about your architecture. Now ask yourself what needs to be done to steal your data, to make unauthorized changes, or to do anything else that you don't want done. You might find that you can significantly increase security without decreasing functionality by making a few design and implementation decisions.
Some ideas for how to handle this:
5.12 But My Database Has an Integrated Web Server, and I Want to Use That. Can't I Just Poke a Hole in the Firewall and Tunnel That Port?
If your site firewall policy is sufficiently lax that you're willing to manage the risk that someone will exploit a vulnerability in your web server that will result in partial or complete exposure of your database, then there isn't much preventing you from doing this.
However, in many organizations, the people who are responsible for tying the web front end to the database back end simply do not have the authority to take that responsibility. Further, if the information in the database is about people, you might find yourself guilty of breaking a number of laws if you haven't taken reasonable precautions to prevent the system from being abused.
In general, this isn't a good idea. See question 5.11 for some ideas on other ways to accomplish this objective.

5.13 How Do I Make IP Multicast Work With My Firewall?
IP multicast is a means of getting IP traffic from one host to a set of hosts without using broadcasting; that is, instead of every host getting the traffic, only those that want it will get it, without each having to maintain a separate connection to the server. IP unicast is where one host talks to another, multicast is where one host talks to a set of hosts, and broadcast is where one host talks to all hosts.
The public Internet has a multicast backbone (``MBone'') where users can engage in multicast traffic exchange. Common uses for the MBone are streams of IETF meetings and similar such interaction. Getting one's own network connected to the MBone will require that the upstream provider route multicast traffic to and from your network. Additionally, your internal network will have to support multicast routing.
The role of the firewall in multicast routing, conceptually, is no different from its role in other traffic routing. That is, a policy that identifies which multicast groups are and aren't allowed must be defined and then a system of allowing that traffic according to policy must be devised. Great detail on how exactly to do this is beyond the scope of this document. Fortunately, RFC 2588  discusses the subject in more detail. Unless your firewall product supports some means of selective multicast forwarding or you have the ability to put it in yourself, you might find forwarding multicast traffic in a way consistent with your security policy to be a bigger headache than it's worth.
by Mikael Olsson
This appendix will begin at a fairly ``basic'' level, so even if the first points seem childishly self-evident to you, you might still learn something from skipping ahead to something later in the text.

6.1 What is a port?
A ``port'' is a ``virtual slot'' in your TCP and UDP stack that is used to map a connection between two hosts, and also between the TCP/UDP layer and the actual applications running on the hosts.
They are numbered 0-65535, with the range 0-1023 being marked as ``reserved'' or ``privileged'', and the rest (1024-65535) as ``dynamic'' or ``unprivileged''.
There are basically two uses for ports:
Dynamic ports may also be used as ``listening'' ports in some applications, most notably FTP.
Ports in the range 0-1023 are almost always server ports. Ports in the range 1024-65535 are usually dynamic ports (i.e., opened dynamically when you connect to a server port). However, any port may be used as a server port, and any port may be used as an ``outgoing'' port.
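You can watch the stack hand out a dynamic port by binding to port 0, which asks the OS to pick one; a sketch:

```python
import socket

# Binding to port 0 asks the stack to pick a dynamic ("ephemeral") port,
# just as it does for the client end of an outgoing connection.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
addr, port = s.getsockname()
s.close()
# On typical systems the chosen port lands in the unprivileged range,
# i.e. 1024 or above (the exact range is OS-dependent).
```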
So, to sum it up, here's what happens in a basic connection:
6.2 How do I know which application uses what port?
There are several lists outlining the ``reserved'' and ``well known'' ports, as well as ``commonly used'' ports, and the best one is: ftp://ftp.isi.edu/in-notes/iana/assignments/port-numbers. For those of you still reading RFC 1700 to find out what port number does what, STOP DOING IT. It is horribly out of date, and it won't be less so tomorrow.
Now, as for trusting this information: These lists do not, in any way, constitute any kind of holy bible on which ports do what.
Wait, let me rephrase that: THERE IS NO WAY OF RELIABLY DETERMINING WHAT PORT DOES WHAT SIMPLY BY LOOKING IN A LIST.

6.3 What are LISTENING ports?
Suppose you did ``netstat -a'' on your machine and ports 1025 and 1030 showed up as LISTENing. What do they do?
Right, let's take a look in the assigned port numbers list.

blackjack    1025/tcp    network blackjack
iad1         1030/tcp    BBN IAD
Wait, what's happening? Has my workstation stolen my VISA number and decided to go play blackjack with some rogue server on the Internet? And what's that software that BBN has installed?
This is NOT where you start panicking and send mail to the firewalls list. In fact, this question has been asked maybe a dozen times during the past six months, and every time it's been answered. Not that THAT keeps people from asking the same question again.
If you are asking this question, you are most likely using a Windows box. The ports you are seeing are (most likely) two listening ports that the RPC subsystem opens when it starts up.
This is an example of where dynamically assigned ports may be used by server processes. Applications using RPC will later on connect to port 135 (the netbios ``portmapper'') to query where to find some RPC service, and get an answer back saying that that particular service may be contacted on port 1025.
Now, how do we know this, since there's no ``list'' describing these ports? Simple: There's no substitute for experience. And using the mailing list search engines also helps a hell of a lot.

6.4 How do I determine what service the port is for?
Since it is impossible to learn what port does what by looking in a list, how do I do it?
The old hands-on way of doing it is by shutting down nearly every service/daemon running on your machine, doing netstat -a and taking note of what ports are open. There shouldn't be very many listening ones. Then you start turning all the services on, one by one, and take note of what new ports show up in your netstat output.
Another way, which needs more guesswork, is simply telnetting to the ports and seeing what comes out. If nothing comes out, try typing some gibberish and slamming Enter a few times, and see if something turns up. If you get binary garble, or nothing at all, this obviously won't help you. :-)
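The ``telnet to it and see what comes out'' probe can be sketched in a few lines; here it's demonstrated against a throwaway local listener rather than a real service:

```python
import socket
import threading

def grab_banner(host, port, timeout=2.0):
    """Connect and return whatever the service announces first, if anything."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        try:
            return s.recv(256).decode("ascii", "replace")
        except socket.timeout:
            return ""  # silent service: binary protocol, or one waiting for input

# Throwaway local listener that talks first, the way SMTP or FTP servers do.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"220 demo service ready\r\n")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()
banner = grab_banner("127.0.0.1", port)
t.join()
srv.close()
```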
However, this will only tell you what listening ports are used. It won't tell you about dynamically opened ports that may be opened later on by these applications.
There are a few applications that might help you track down the ports used.
On Unix systems, there's a nice utility called lsof that comes preinstalled on many systems. It will show you all open port numbers and the names of the applications that are using them. This means that it might show you a lot of locally opened files as well as TCP/IP sockets. Read the help text. :-)
On Windows systems, nothing comes preinstalled to assist you in this task. (What's new?) There's a program called ``Inzider'' which installs itself inside the Windows sockets layer and dynamically remembers which process opens which port. The drawback of this approach is that it can't tell you what ports were opened before Inzider started, but it's the best that you'll get on Windows (to my knowledge). http://ntsecurity.nu/toolbox/inzider/.

6.5 What ports are safe to pass through a firewall?
ALL.

No, wait, NONE.
No, wait, uuhhh... I've heard that all ports above 1024 are safe since they're only dynamic??
No. Really. You CANNOT tell what ports are safe simply by looking at their numbers, simply because that is really all a port is. A number. You can't mount an attack through a 16-bit number.
The security of a ``port'' depends on what application you'll reach through that port.
A common misconception is that ports 25 (SMTP) and 80 (HTTP) are safe to pass through a firewall. *meep* WRONG. Just because everyone is doing it doesn't mean that it is safe.
Again, the security of a port depends on what application you'll reach through that port.
If you're running a well-written web server, that is designed from the ground up to be secure, you can probably feel reasonably assured that it's safe to let outside people access it through port 80. Otherwise, you CAN'T.
The problem here is not in the network layer. It's in how the application processes the data that it receives. This data may be received through port 80, port 666, a serial line, floppy or through singing telegram. If the application is not safe, it does not matter how the data gets to it. The application data is where the real danger lies.
If you are interested in the security of your application, go subscribe to bugtraq or try searching its archives.
This is more of an application security issue rather than a firewall security issue. One could argue that a firewall should stop all possible attacks, but with the number of new network protocols, NOT designed with security in mind, and networked applications, neither designed with security in mind, it becomes impossible for a firewall to protect against all data-driven attacks.

6.6 The behavior of FTP
Or, ``Why do I have to open all ports above 1024 to my FTP server?''
FTP doesn't really look a whole lot like other applications from a networking perspective.
It keeps one listening port, port 21, which users connect to. All it does is let people log on, and establish ANOTHER connection to do actual data transfers. This second connection is usually on some port above 1024.
There are two modes, ``active'' (normal) and ``passive'' mode. These words describe the server's behaviour.
In active mode, the client connects to port 21 on the server and logs on. When file transfers are due, the client allocates a dynamic port above 1024, informs the server about which port it opened, and then the server opens a new connection to that port. This is the ``active'' role of the server: it actively establishes new connections to the client.
In passive mode, the connection to port 21 is the same. When file transfers are due, the server allocates a dynamic port above 1024, informs the client about which port it opened, and then the CLIENT opens a new connection to that port. This is the ``passive'' role of the server: it waits for the client to establish the second (data) connection.
If your firewall doesn't inspect the application data of the FTP command connection, it won't know that it needs to dynamically open new ports above 1024.
On a side note: The traditional behaviour of FTP servers in active mode is to establish the data session FROM port 20, and to the dynamic port on the client. FTP servers are steering away from this behaviour somewhat due to the need to run as ``root'' on unix systems in order to be able to allocate ports below 1024. Running as ``root'' is not good for security, since if there's a bug in the server, the attacker would be able to compromise the entire machine. The same goes for running as ``Administrator'' or ``SYSTEM'' (``LocalSystem'') on NT machines, although the low port problem does not apply on NT.
To sum it up, if your firewall understands FTP, it'll be able to handle the data connections by itself, and you won't have to worry about ports above 1024.
If it does NOT, there are four issues that you need to address:
Again, if your firewall understands FTP, none of the four points above apply to you. Let the firewall do the job for you.

6.7 What software uses what FTP mode?
It is up to the client to decide what mode to use; the default mode when a new connection is opened is ``active mode''.
Most FTP clients come preconfigured to use active mode, but provide an option to use ``passive'' (``PASV'') mode. An exception is the Windows command line FTP program, which only operates in active mode.
Web browsers generally use passive mode when connecting via FTP, with a weird exception: MSIE 5 will use active FTP when FTP:ing in ``File Explorer'' mode and passive FTP when FTP:ing in ``Web Page'' mode. There is no reason whatsoever for this behaviour; my guess is that someone in Redmond with no knowledge of FTP decided that ``Of course we'll use active mode when we're in file explorer mode, since that looks more like a file explorer than a web page''. Go figure.

6.8 Is my firewall trying to connect outside?
My firewall logs are telling me that my web server is trying to connect from port 80 to ports above 1024 on the outside. What is this?!
If you are seeing dropped packets from port 80 on your web server (or from port 25 on your mail server) to high ports on the outside, they usually DO NOT mean that your web server is trying to connect somewhere.
They are the result of the firewall timing out a connection, and seeing the server retransmitting old responses (or trying to close the connection) to the client.
TCP connections always involve packets traveling in BOTH directions in the connection.
If you are able to see the TCP flags in the dropped packets, you'll see that the ACK flag is set but not the SYN flag, meaning that this is actually not a new connection forming, but rather a response of a previously formed connection.
Read section 6.9 below for an in-depth explanation of what happens when TCP connections are formed (and closed).

6.9 The anatomy of a TCP connection
TCP is equipped with 6 ``flags'', which may be ON or OFF. These flags are:

FIN  ``Controlled'' connection close
SYN  Open new connection
RST  ``Immediate'' connection close
PSH  Instruct receiver host to push the data up to the application rather than just queue it
ACK  ``Acknowledge'' a previous packet
URG  ``Urgent'' data which needs to be processed immediately
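The flags occupy individual bits in the TCP header, so decoding them is plain bit-testing; a sketch (bit values as assigned in the TCP specification, RFC 793):

```python
# Bit values of the TCP header flags, per RFC 793.
TCP_FLAGS = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
             0x08: "PSH", 0x10: "ACK", 0x20: "URG"}

def decode_flags(flag_byte):
    """Return the names of the flags set in a TCP flags byte."""
    return [name for bit, name in sorted(TCP_FLAGS.items()) if flag_byte & bit]

# 0x12 is SYN+ACK, the second packet of the three-way handshake.
flags = decode_flags(0x12)
```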
In this example, your client is 10.0.0.1, and the port assigned to you dynamically is 1049. The server is 10.0.0.2, port 80.
You begin the connection attempt:
10.0.0.1:1049 -> 10.0.0.2:80 SYN=ON
The server receives this packet and understands that someone wants to form a new connection. A response is sent:
10.0.0.2:80 -> 10.0.0.1:1049 SYN=ON ACK=ON
The client receives the response, and acknowledges that it was received:
10.0.0.1:1049 -> 10.0.0.2:80 ACK=ON
Here, the connection is opened. This is called a three-way handshake. Its purpose is to verify to BOTH hosts that they have a working connection between them.
The internet being what it is, unreliable and flooded, there are provisions to compensate for packet loss.
If the client sends out the initial SYN without receiving a SYN+ACK within a few seconds, it'll resend the SYN.
If the server sends out the SYN+ACK without receiving an ACK in a few seconds, it'll resend the SYN+ACK packet.
The latter is actually the reason that SYN flooding works so well. If you send out SYN packets from lots of different ports, this will tie up a lot of resources on the server. If you also refuse to respond to the returned SYN+ACK packets, the server will KEEP these connections for a long time, resending the SYN+ACK packets. Some servers will not accept new connections while there are too many connections currently forming; this is why SYN flooding works.
All packets transmitted in either direction after the three-way handshake will have the ACK bit set. Stateless packet filters make use of this in the so called ``established'' filters: they will only let packets through that have the ACK bit set. This way, no packet may pass through in a certain direction that could form a new connection. Typically, you prevent outside hosts from opening new connections to inside hosts by requiring the ACK bit to be set on these packets.
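As a toy model, such a stateless ``established'' rule boils down to a single bit test on inbound packets:

```python
# TCP flag bits used by the toy rule below.
SYN = 0x02
ACK = 0x10

def established_filter(inbound, flags):
    """Toy stateless 'established' rule: inbound TCP must carry ACK.

    A bare SYN from outside could open a new connection, so it is
    dropped; replies to connections opened from inside carry ACK and
    pass. Outbound traffic is left alone.
    """
    if not inbound:
        return True
    return bool(flags & ACK)

# An outside host's fresh connection attempt (SYN only) is dropped,
# while the SYN+ACK reply to a connection we opened is let through.
blocked = established_filter(True, SYN)
allowed = established_filter(True, SYN | ACK)
```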
When the time has come to close the connection, there are two ways of doing it: using the FIN flag, or using the RST flag. With FIN, both implementations are required to send out FIN flags to indicate that they want to close the connection, and then to acknowledge each other's FINs, indicating that they understood that the other end wants to close the connection. When sending out RSTs, the connection is closed forcefully, and you don't really get an indication of whether the other end understood your reset order, or whether it has in fact received all the data that you sent to it.
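The FIN exchange can be sketched as a toy trace in the same style as the handshake above (no real packets are sent; each FIN segment also carries ACK, as all post-handshake segments do):

```python
# Toy trace of a graceful (FIN) close: each side sends a FIN and the
# other acknowledges it. Addresses are the ones from the earlier example.
A = "184.108.40.206:1049"
B = "220.127.116.11:80"

def graceful_close(initiator, peer):
    """Return the four segments that close a connection with FINs."""
    return [
        (initiator, peer, {"FIN", "ACK"}),  # initiator: "I'm done sending"
        (peer, initiator, {"ACK"}),         # peer acknowledges the FIN
        (peer, initiator, {"FIN", "ACK"}),  # peer: "I'm done too"
        (initiator, peer, {"ACK"}),         # initiator acknowledges; closed
    ]

for src, dst, flags in graceful_close(A, B):
    print(f"{src} -> {dst} {'+'.join(sorted(flags))}")
```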
The FIN way of closing the connection also exposes you to a denial-of-service situation, since the TCP stack needs to remember the closed connection for a fairly long time, in case the other end hasn't received one of the FIN packets.
If sufficiently many connections are opened and closed, you may end up having ``closed'' connections in all your connection slots. This way, you wouldn't be able to dynamically allocate more connections, seeing that they're all used. Different OSes handle this situation differently.
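One way an application can cope when its own listening address is stuck in such a lingering ``closed'' state is the standard SO_REUSEADDR socket option, which lets it rebind the address anyway; a minimal Python sketch:

```python
import socket

# SO_REUSEADDR lets a listener rebind a local address still occupied by
# a connection lingering in a "closed" (TIME_WAIT) state -- useful when
# restarting a server. Port 0 asks the OS to pick a free port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))
s.listen()
print("listening on port", s.getsockname()[1])
s.close()
```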
We feel this topic is too sensitive to address in a FAQ; however, an independently maintained list (no warranty or recommendations are implied) can be found online [9].

Glossary of Firewall-Related Terms

Abuse of Privilege: When a user performs an action that they should not have, according to organizational policy or law.

Access Control Lists: Rules for packet filters (typically routers) that define which packets to pass and which to block.

Access Router: A router that connects your network to the external Internet. Typically, this is your first line of defense against attackers from the outside Internet. By enabling access control lists on this router, you'll be able to provide a level of protection for all of the hosts ``behind'' that router, effectively making that network a DMZ instead of an unprotected external LAN.

Application-Layer Firewall: A firewall system in which service is provided by processes that maintain complete TCP connection state and sequencing. Application-layer firewalls often re-address traffic so that outgoing traffic appears to have originated from the firewall, rather than the internal host.

Authentication: The process of determining the identity of a user that is attempting to access a system.

Authentication Token: A portable device used for authenticating a user. Authentication tokens operate by challenge/response, time-based code sequences, or other techniques. This may include paper-based lists of one-time passwords.

Authorization: The process of determining what types of activities are permitted. Usually, authorization is in the context of authentication: once you have authenticated a user, they may be authorized different types of access or activity.

Bastion Host: A system that has been hardened to resist attack, and which is installed on a network in such a way that it is expected to potentially come under attack. Bastion hosts are often components of firewalls, or may be ``outside'' web servers or public access systems.
Generally, a bastion host is running some form of general-purpose operating system (e.g., Unix, VMS, NT, etc.) rather than a ROM-based or firmware operating system.

Challenge/Response: An authentication technique whereby a server sends an unpredictable challenge to the user, who computes a response using some form of authentication token.

Chroot: A technique under Unix whereby a process is permanently restricted to an isolated subset of the filesystem.

Cryptographic Checksum: A one-way function applied to a file to produce a unique ``fingerprint'' of the file for later reference. Checksum systems are a primary means of detecting filesystem tampering on Unix.

Data Driven Attack: A form of attack in which the attack is encoded in innocuous-seeming data which is executed by a user or other software to implement an attack. In the case of firewalls, a data driven attack is a concern since it may get through the firewall in data form and launch an attack against a system behind the firewall.

Defense in Depth: The security approach whereby each system on the network is secured to the greatest possible degree. May be used in conjunction with firewalls.

DNS Spoofing: Assuming the DNS name of another system by either corrupting the name service cache of a victim system, or by compromising a domain name server for a valid domain.

Dual Homed Gateway: A dual homed gateway is a system that has two or more network interfaces, each of which is connected to a different network. In firewall configurations, a dual homed gateway usually acts to block or filter some or all of the traffic trying to pass between the networks.

Encrypting Router: See Tunneling Router and Virtual Network Perimeter.

Firewall: A system or combination of systems that enforces a boundary between two or more networks.

Host-based Security: The technique of securing an individual system from attack. Host-based security is operating system and version dependent.
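The cryptographic-checksum technique mentioned above can be illustrated with a short sketch using SHA-256 (the file contents here are invented placeholders):

```python
import hashlib

# Sketch of a cryptographic checksum: a SHA-256 "fingerprint" of a
# file's contents. Any change to the contents changes the fingerprint,
# which is how checksum systems detect filesystem tampering.
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = fingerprint(b"/bin/login contents")
tampered = fingerprint(b"/bin/login contents with a trapdoor")
print(original != tampered)  # True: tampering is detectable
```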
Insider Attack: An attack originating from inside a protected network.

Intrusion Detection: Detection of break-ins or break-in attempts, either manually or via software expert systems that operate on logs or other information available on the network.

IP Spoofing: An attack whereby a system attempts to illicitly impersonate another system by using its IP network address.

IP Splicing / Hijacking: An attack whereby an active, established session is intercepted and co-opted by the attacker. IP splicing attacks may occur after an authentication has been made, permitting the attacker to assume the role of an already authorized user. Primary protections against IP splicing rely on encryption at the session or network layer.

Least Privilege: Designing operational aspects of a system to operate with a minimum amount of system privilege. This reduces the authorization level at which various actions are performed and decreases the chance that a process or user with high privileges may be caused to perform unauthorized activity resulting in a security breach.

Logging: The process of storing information about events that occurred on the firewall or network.

Log Retention: How long audit logs are retained and maintained.

Log Processing: How audit logs are processed, searched for key events, or summarized.

Network-Layer Firewall: A firewall in which traffic is examined at the network protocol packet layer.

Perimeter-based Security: The technique of securing a network by controlling access to all entry and exit points of the network.

Policy: Organization-level rules governing acceptable use of computing resources, security practices, and operational procedures.

Proxy: A software agent that acts on behalf of a user. Typical proxies accept a connection from a user, make a decision as to whether or not the user or client IP address is permitted to use the proxy, perhaps do additional authentication, and then complete a connection on behalf of the user to a remote destination.
Screened Host: A host on a network behind a screening router. The degree to which a screened host may be accessed depends on the screening rules in the router.

Screened Subnet: A subnet behind a screening router. The degree to which the subnet may be accessed depends on the screening rules in the router.

Screening Router: A router configured to permit or deny traffic based on a set of permission rules installed by the administrator.

Session Stealing: See IP Splicing.

Trojan Horse: A software entity that appears to do something normal but which, in fact, contains a trapdoor or attack program.

Tunneling Router: A router or system capable of routing traffic by encrypting it and encapsulating it for transmission across an untrusted network, for eventual de-encapsulation and decryption.

Social Engineering: An attack based on deceiving users or administrators at the target site. Social engineering attacks are typically carried out by telephoning users or operators and pretending to be an authorized user, to attempt to gain illicit access to systems.

Virtual Network Perimeter: A network that appears to be a single protected network behind firewalls, which actually encompasses encrypted virtual links over untrusted networks.

Virus: A replicating code segment that attaches itself to a program or data file. Viruses may or may not contain attack programs or trapdoors. Unfortunately, many have taken to calling any malicious code a ``virus''. If you mean ``trojan horse'' or ``worm'', say ``trojan horse'' or ``worm''.

Worm: A standalone program that, when run, copies itself from one host to another, and then runs itself on each newly infected host. The widely reported ``Internet Virus'' of 1988 was not a virus at all, but actually a worm.

Footnotes

[1] ... System: http://mail-abuse.org/
[2] ... Initiative: http://mail-abuse.org/tsi/
[3] ... Squid: http://squid.nlanr.net/
[4] ... Apache: http://www.apache.org/docs/mod/mod_proxy.html
[5] ... Proxy: http://Home.netscape.com/proxy/v3.5/index.html
[6] ... Netscape: http://developer.netscape.com/docs/manuals/security/sslin/contents.htm
[7] ... firewall: http://www.real.com/firewall/
[8] ... bugtraq: http://www.securityfocus.com
[9] ... online: http://www.thegild.com/firewall/

firstname.lastname@example.org