Sunday, January 2, 2011

Firewalls

The term firewall has many definitions in the industry. The definition depends on how and to what extent a firewall is used in a network. Generally, a firewall is a network device that, based on a defined network policy, implements access control for a network.

Apart from doing this basic job, firewalls are often used as network address translating devices, because they often tend to sit on the edge of a network and serve as entry points into the network. Figure 7-1 shows the basic philosophy of a firewall setup.


Some important characteristics distinguish a serious, industrial-strength firewall from other devices that go only halfway toward providing a true security solution:
  • Logging and notification ability
  • High-volume packet inspection
  • Ease of configuration
  • Device security and redundancy

Logging and Notification Ability


A firewall is not much good unless it has a good logging facility. Good logging not only allows network administrators to detect attacks being orchestrated against their networks, but it also lets them detect whether what is considered normal traffic originating from trusted users is being put to illegitimate purposes. Good logging allows network administrators to filter large amounts of information based on traffic tagging and get to the messages that really matter very quickly. Obviously, good logging is different from simply logging everything that happens.

"Good logging" also refers to notification ability. Not only do you want the firewall to log the message, but you also want it to notify the administrator when alarm conditions are detected. Notification is often done by software that sorts through the log messages generated by the firewall device. Based on the criticality of the messages, the software generates notifications in the form of pages, e-mails, or other such means to notify a network administrator. The purpose of the notification is to let the administrator make a timely modification to either the configuration or the software image of the firewall itself to decrease the threat and impact of an attack or potential attack.


High-Volume Packet Inspection

One test of a firewall is its ability to inspect a large volume of network traffic against a configured set of rules without significantly degrading network performance. How much a firewall should be able to handle varies from network to network, but with today's demanding networks, a firewall should not become a bottleneck for the network it is sitting on. This matters because of the firewall's placement: firewalls are generally placed at the periphery of a network and are the only entry point into it. Consequently, a slowdown at this critical place can slow down the entire network.

Various factors can affect the speed at which a firewall processes the data passing through it. Most of the limitations are in hardware processor speed and in the optimization of the software code that keeps track of the connections being established through the firewall. Another limiting factor is the availability of the various types of interface cards on the firewall. A firewall that supports Gigabit Ethernet is obviously more useful in a Gigabit Ethernet environment than one that can do only Fast Ethernet.

One thing that often helps a firewall process traffic quickly is to offload some of the work to other software. This work includes notifications, URL filter-based access control, processing of firewall logs for filtering important information, and other such functions. These often-resource-intensive functions can take up a lot of the firewall's capacity and can slow it down.


Ease of Configuration

Ease of configuration includes the ability to set up the firewall quickly and to easily see configuration errors. Ease of configuration is very important in a firewall. The reason is that many network breaches that occur in spite of a firewall's being in place are not due to a bug in the firewall software or the underlying OS on which the firewall sits. They are due to an error in the firewall's configuration! Some of the "credit" for this goes to the person who configures the firewall. However, an easy-to-configure firewall mitigates many errors that might be produced in setting it up.

It is important for a firewall to have a configuration utility that allows easy translation of the site security policy into the configuration. It is very useful to have a graphical representation of the network architecture as part of the configuration utility to avoid common configuration errors. Similarly, the terminology used in the configuration utility needs to be in sync with commonly accepted site security topology nomenclature, such as DMZs, high-security zones, and low-security zones. Use of ambiguous terminology in the configuration utility can cause human error to creep in.

Centralized administrative tools that allow for the simultaneous management of multiple security devices, including firewalls, are very useful for maintaining uniformly error-free configurations.


Device Security and Redundancy

The security of the firewall device itself is a critical component of the overall security that a firewall can provide to a network. A firewall that is insecure itself can easily allow intruders to break in and modify the configuration to allow further access into the network. There are two main areas where a firewall needs to have strength in order to avoid issues surrounding its own security:
  • The security of the underlying operating system— If the firewall software runs on a separate operating system, the vulnerabilities of that operating system have the potential to become the vulnerabilities of the firewall itself. It is important to install the firewall software on an operating system known to be robust against network security threats and to keep patching the system regularly to fill any gaps that become known.
  • Secure access to the firewall for administrative purposes— It is important for a firewall to have secure mechanisms available for allowing administrative access to it. Such methods can include encryption coupled with proper authentication mechanisms. Weakness in the implementation of such access mechanisms can allow the firewall to become an easy target for intrusions of various kinds.


An issue related to device security is the firewall's ability to have a redundant presence with another firewall in the network. Such redundancy allows the backup device to take over the operations of a faulty primary device. In the case of an attack that leaves the primary device nonoperational, redundancy also allows for continued operation of the network.


Types of Firewalls

In order to gain a thorough understanding of firewall technology, it is important to understand the various types of firewalls. These types of firewalls provide more or less the same functions outlined earlier; however, their methods of doing so differentiate them in terms of performance and the level of security offered.

The firewalls discussed in this section are divided into five categories based on the mechanism that each uses to provide firewall functionality:
  • Circuit-level firewalls
  • Proxy server firewalls
  • Nonstateful packet filters
  • Stateful packet filters
  • Personal firewalls

These various types of firewalls gather different types of information from the data flowing through them to keep track of legitimate and illegitimate traffic and to protect against unauthorized access. The type of information they use often also determines the level of security they provide.


Circuit-Level Firewalls

These firewalls act as relays for TCP connections. They intercept TCP connections being made to a host behind them and complete the handshake on behalf of that host. Only after the connection is established is traffic allowed to flow to the protected host. The firewall also makes sure that after the connection is established, only data packets belonging to that connection are allowed through.

Circuit-level firewalls do not validate the payload or any other information in the packet, so they are fairly fast. These firewalls essentially are interested only in making sure that the TCP handshake is properly completed before a connection is allowed. Consequently, these firewalls do not allow access restrictions to be placed on protocols other than TCP and do not allow the use of payload information in the higher-layer protocols to restrict access.
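The relay idea can be sketched with ordinary sockets: the firewall host's own TCP stack completes the three-way handshake with the outside client, and only afterward is a separate connection opened to the protected host, with payload bytes shuttled across uninspected. The addresses and single-connection structure are illustrative assumptions, not a production design:

```python
import socket
import threading

def pipe(src, dst):
    """Shuttle raw bytes one way; the payload is relayed, never inspected."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(listen_addr, inside_addr):
    """Accept one outside connection, then open a second, separate
    connection to the inside host -- two independent TCP handshakes."""
    lsock = socket.socket()
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(listen_addr)
    lsock.listen(1)
    client, _ = lsock.accept()                      # handshake completed on the host's behalf
    server = socket.create_connection(inside_addr)  # only now touch the inside host
    threading.Thread(target=pipe, args=(client, server), daemon=True).start()
    pipe(server, client)
```

Because the relay only ever moves opaque bytes after connection setup, it mirrors the circuit-level property described above: fast, but blind to payloads and to non-TCP protocols.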


Proxy Server Firewalls

Proxy server firewalls work by examining packets at the application layer. Essentially a proxy server intercepts the requests being made by the applications sitting behind it and performs the requested functions on behalf of the requesting application. It then forwards the results to the application. In this way it can provide a fairly high level of security to the applications, which do not have to interact directly with outside applications and servers.

Proxy servers are advantageous in the sense that they are aware of application-level protocols and they can restrict or allow access based on these protocols. They also can look into the data portions of the packets and use that information to restrict access. However, this very capability of processing the packets at a higher layer of the stack can contribute to the slowness of proxy servers. Also, because the inbound traffic has to be processed by the proxy server as well as the end-user application, further degradation in speed can occur. Proxy servers often are not transparent to end users who have to make modifications to their applications in order to use the proxy server. For each new application that must go through a proxy firewall, modifications need to be made to the firewall's protocol stack to handle that type of application.
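A sketch of the application-layer awareness this adds, assuming a simple HTTP request and a hypothetical blocked-host list; parsing the request line and filtering by URL is exactly what a circuit-level relay, which never parses payloads, cannot do:

```python
# Hypothetical policy: the blocked-host set and permitted methods are
# assumptions for illustration, not any product's defaults.
BLOCKED_HOSTS = {"malware.example.net"}

def proxy_decision(raw_request: bytes) -> str:
    """Allow or deny an HTTP request based on application-layer content."""
    request_line, _, _rest = raw_request.partition(b"\r\n")
    method, url, _version = request_line.split()
    # Extract the host from an absolute URL such as http://host/path.
    host = url.split(b"//", 1)[-1].split(b"/", 1)[0].decode()
    if host in BLOCKED_HOSTS:
        return "deny"
    if method not in (b"GET", b"POST", b"HEAD"):
        return "deny"          # policy can key on the HTTP verb, too
    return "allow"

print(proxy_decision(b"GET http://www.example.com/index.html HTTP/1.1\r\nHost: www.example.com\r\n"))
```

Note that reaching this decision required parsing the request itself, which is also why the proxy approach carries the performance cost described above.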


Nonstateful Packet Filters

Nonstateful packet filters are fairly simple devices that sit on the periphery of a network and, based on a set of rules, allow some packets through while blocking others. The decisions are made based on the addressing information contained in network layer protocols such as IP and, in some cases, information contained in transport layer protocols such as TCP or UDP headers as well.

Nonstateful packet filters are fairly simple devices, but to function properly they require a thorough understanding of the usage of services required by the network to be protected. Although these filters can be fast because they do not proxy any traffic but only inspect it as it passes through, they do not have any knowledge of the application-level protocols or the data elements in the packet. Consequently, their usefulness is limited. These filters also do not retain any knowledge of the sessions established through them. Instead, they just keep tabs on what is immediately passing through. Simple and extended access lists (without the established keyword) on routers are examples of such firewalls.
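A toy model of such a filter follows; exact-match addresses stand in for the prefix matching a real access list performs, and the rules themselves are illustrative assumptions. Note that no state is kept: each packet is judged in isolation, so return traffic must be permitted by its own explicit rule.

```python
RULES = [
    # (src, dst, proto, dst_port, action) -- first match wins.
    ("any",        "192.0.2.10", "tcp", 80,   "permit"),   # inbound web traffic
    ("192.0.2.10", "any",        "udp", 53,   "permit"),   # outbound DNS
    ("any",        "any",        "any", None, "deny"),     # implicit deny
]

def filter_packet(packet, rules):
    """Return 'permit' or 'deny' using only this packet's header fields."""
    for src, dst, proto, dport, action in rules:
        if src not in ("any", packet["src"]):
            continue
        if dst not in ("any", packet["dst"]):
            continue
        if proto not in ("any", packet["proto"]):
            continue
        if dport not in (None, packet["dport"]):
            continue
        return action
    return "deny"

web = {"src": "198.51.100.4", "dst": "192.0.2.10", "proto": "tcp", "dport": 80}
print(filter_packet(web, RULES))
```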


Stateful Packet Filters

Stateful packet filters are more intelligent than simple packet filters in that they can block essentially all incoming traffic while still allowing return traffic for connections generated by machines sitting behind them. They do so by keeping a record of the transport layer connections that are established through them by the hosts behind them.

Stateful packet filters are the mechanism for implementing firewalls in most modern networks. Stateful packet filters can keep track of a variety of information regarding the packets that are traversing them, including the following:
  • Source and destination TCP and UDP port numbers
  • TCP sequence numbering
  • TCP flags
  • TCP session state, based on the TCP state machine defined in RFC 793
  • UDP traffic tracking based on timers
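A toy model of this connection tracking, simplified to a 5-tuple set and ignoring the sequence numbers, flags, and timers a real stateful filter would also track:

```python
# Outbound traffic creates state; inbound traffic is allowed only if it
# matches recorded state with source and destination reversed. Addresses
# are illustrative assumptions.
connections = set()

def outbound(pkt):
    """Record state for a connection initiated from the inside."""
    connections.add((pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"]))

def inbound_allowed(pkt):
    """Permit an inbound packet only if it is return traffic for known state."""
    key = (pkt["dst"], pkt["dport"], pkt["src"], pkt["sport"], pkt["proto"])
    return key in connections

outbound({"src": "10.0.0.5", "sport": 3345, "dst": "192.0.2.7", "dport": 80, "proto": "tcp"})
print(inbound_allowed({"src": "192.0.2.7", "sport": 80, "dst": "10.0.0.5", "dport": 3345, "proto": "tcp"}))
```

The contrast with the nonstateful filter is that no explicit inbound rule exists here; permission flows entirely from the remembered outbound connection.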

Stateful firewalls often have built-in advanced IP layer handling features such as fragment reassembly and clearing or rejecting of IP options.

Many modern stateful packet filters are aware of application layer protocols such as FTP and HTTP and can perform access-control functions based on these protocols' specific needs.


Personal Firewalls

Personal firewalls are firewalls installed on personal computers to protect them against network attacks. These firewalls are generally aware of the applications running on the machine and allow only connections established by those applications to operate on the machine.

A personal firewall is a useful addition to any PC because it increases the level of security already offered by a network firewall. However, because many of the attacks on today's networks originate from inside the protected network, a PC firewall is an even more useful tool, because network firewalls cannot protect against these attacks. Personal firewalls come in a variety of flavors. Most are implemented to be aware of the applications running on the PC. However, they are designed to not require any changes from the user applications running on the PC, as is required in the case of proxy servers.

Wednesday, October 27, 2010

Network Address Translation and Security

Network address translation (NAT) is the mechanism by which a packet's IP addresses are modified to be something other than what they originally were. This is a requirement for networks that use the RFC 1918 addressing scheme. These IP addresses cannot be routed on the Internet and therefore need to be converted to routable IP addresses at the edge of the network before they are passed to a public network, such as the Internet. Because NAT can hide a network's IP addresses, this offers some amount of security to the network that has the NAT setup. However, you can't depend solely on NAT for security. This chapter discusses the security benefits of having NAT running on the network's periphery. It then discusses how depending solely on NAT for protection can be a dangerous choice.

While there are a few reasons for using NAT, the primary reason that networks use RFC 1918 addressing is to reduce IP address consumption. Routable IP addresses are expensive and limited in number. A specific form of NAT called Overload NAT provides a useful solution to this problem. Overload NAT, also known as Port Address Translation (PAT), works differently from normal one-to-one NAT, whereby each RFC 1918 address is converted to its own unique routable IP address. In Overload NAT, the RFC 1918 addresses are translated to a small number of routable IP addresses (often just one routable IP address, frequently that of the router's external interface).

The device doing PAT distinguishes between the traffic destined for the various RFC 1918 addresses by tracking the source TCP or UDP ports used when the connection is initiated. Figure 6-1 shows how PAT works.


NAT is sometimes confused with proxy servers. However, these are two completely different entities. NAT is a Layer 3 mechanism that uses Layer 4 information when doing PAT. Proxy servers, on the other hand, usually work at Layer 4 or higher of the OSI model. The most significant difference between the two mechanisms is transparency: NAT is completely transparent to the source and destination devices, in that neither machine needs to know about the device doing the NAT, whereas proxy servers require the source machine to be configured to make requests to the proxy server, which then facilitates the connection on its behalf.


Security Benefits of Network Address Translation

NAT used in PAT mode can be a source of security for the network that is using PAT to translate its private addresses.

To understand this, assume that the device doing the NAT is a router that is sitting on the edge of the network, with one interface connected to the RFC 1918 private network and another interface connected to the Internet. When a device sitting behind the router wants to go out to the Internet, it sends packets to the router. The router then translates the source address into a globally routable address and puts the source IP address and source TCP or UDP port number in its NAT tables.

When the reply packets are delivered to the router, destined for the globally routable IP address, the router looks at the destination port number (remember, the source and destination port numbers are flipped in the return packet). Based on a lookup of its NAT table, the router determines which RFC 1918 address to send the packet to. It changes the destination address to the private address found in its NAT table and forwards the packet appropriately.

The important point to realize in this operation is that the router can change back the IP address only if it has an entry for the destination port number in its NAT table. If for some reason that entry got cleared out and the router received a packet destined for the globally routable address, it would simply discard it, because it would not know where to send the packet. This property is at the crux of PAT's secure nature. Unless a NAT (PAT) entry containing the port number and the private-to-global address mapping exists in the router's NAT table, the router does not forward any packets to the RFC 1918 network. Therefore, any connections not initiated from the inside are not allowed through the PAT device. This is a significant measure of security. However, you will see in the next section why this is not the type of security you can really rely on.
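A toy model of the PAT table makes this concrete; the addresses, the single global address, and the sequential port-allocation scheme are illustrative assumptions:

```python
GLOBAL_IP = "203.0.113.1"   # assumed globally routable address of the PAT device

class PatTable:
    def __init__(self):
        self.table = {}          # global source port -> (private ip, private port)
        self.next_port = 20000   # simplified port allocation

    def translate_out(self, private_ip, private_port):
        """Outbound: map a private host/port onto the global IP and a unique port."""
        global_port = self.next_port
        self.next_port += 1
        self.table[global_port] = (private_ip, private_port)
        return GLOBAL_IP, global_port

    def translate_in(self, dst_port):
        """Inbound: forward only if a table entry exists; otherwise None (drop)."""
        return self.table.get(dst_port)

pat = PatTable()
print(pat.translate_out("10.1.1.5", 1042))
```

The security property lives in `translate_in`: an inbound packet whose destination port has no table entry maps to nothing, so the device has nowhere to send it and simply drops it.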


Disadvantages of Relying on NAT for Security

Although NAT provides some level of protection to the networks sitting behind it, it is important to understand that it is by no means a comprehensive security solution.

The following sections outline some of the most obvious reasons why NAT should not be considered a security mechanism, despite the illusion of security it provides.


No Tracking of Protocol Information Other Than the Port Number Information

The NAT table, created on the NAT device to track the outgoing and incoming connections, does not track any of the other information contained in the packets. Information such as packet sequence numbers, the TCP handshake state, and UDP progress timers are some of the pieces of information that most firewalls track in order to prevent attackers spoofing IP addresses from hijacking connections established through the firewall. NAT does not track any of this information, because it does not need to for the purposes of creating and maintaining NAT translations. Chapter 8, "PIX Firewall," contains a detailed discussion of the algorithm that PIX Firewall uses to provide security.


No Restriction on the Type of Content Flowing Based on PAT Tables

NAT also does not concern itself with protecting the hosts from malicious data being sent on the NAT connections established by the hosts themselves. You can only protect your network from such malicious content by having a firewall and an intrusion detection system in place.


Limited Control on Initial Connections

NAT does not have any real control over who can initiate connections from the inside network to the outside network. Although an access list can be configured to define which hosts can initiate NAT connections, this is a rudimentary measure at best.

By using route maps and extended access control lists, you can put further restraints on what traffic can be NATed. However, this is not the ideal way to restrict traffic. It is difficult to implement with the same granularity as standard access control mechanisms, and it can be resource-intensive for the router.

In light of this, NAT is a useful mechanism for increasing the available IP address space. It can also be a convenient tool for some other network design aspects. However, it should not be relied on to provide security. When used on a security device such as a firewall in conjunction with other security features, NAT provides definite enhancements to the security provided by the firewall. However, it should not be used in isolation as a security mechanism. It does provide some measure of security. However, this always needs to be enhanced with additional tools and products designed specifically with security in mind, such as firewalls and intrusion detection systems.

Wednesday, June 9, 2010

Secure LAN Switching

Port Authentication and Access Control Using the IEEE 802.1x Standard

802.1x is a standard developed by the IEEE to provide a mechanism for authenticating devices that connect to Layer 2 devices, such as switches, over IEEE 802 LAN infrastructures (such as Token Ring and Ethernet).

The primary idea behind the standard is that devices that need to access the LAN must be authenticated and authorized before they can connect to the physical or logical port of the switch that is responsible for creating the LAN environment. In the case of Ethernet and Token Ring, the ports are physical entities that a device plugs into. However, in setups such as IEEE 802.11b wireless, the ports are logical entities known as associations. In either case, the standard's primary goal is to allow controlled access to the LAN environment.


802.1x Entities

The 802.1x standard defines the following three main entities that take part in the access control method set up in this standard:
  • Supplicant— This device needs to access the LAN. An example is a laptop that needs to connect to a LAN.
  • Authenticator— This device is responsible for initiating the authentication process and then acting as a relay between the actual authentication server and the supplicant. This device is generally also the device that is responsible for the overall workings of the LAN. An example of this type of device is a Catalyst 6000 switch to which various supplicants can connect and be authenticated and authorized via the 802.1x standard before being allowed to use the ports on the switch for data traffic.
  • Authentication server— This device is responsible for doing the actual authentication and authorization on behalf of the authenticator. This device contains profile information for all the users of the network in a database format. It can use that information to authenticate and authorize users to connect to the ports on the authenticator. An example of an authentication server is the Cisco Secure ACS.


In addition to these three main entities, the 802.1x standard defines some other entities as well. One of these is the Port Access Entity (PAE). The PAE is essentially responsible for maintaining the functionality of the 802.1x standard on the authenticator, the supplicant, or both. It can be viewed as the daemon responsible for the functioning of the 802.1x standard. For our purposes, we will assume that this entity is transparent to the network administrator as we talk about various aspects of this standard.


802.1x Communications

In order for the 802.1x standard to function, communication needs to occur between the three entities just defined. The 802.1x protocol uses an IETF standard, the Extensible Authentication Protocol (EAP), to facilitate this communication. The authentication data between the three entities is exchanged using EAP packets that are carried either in EAPOL frames (between the supplicant and the authenticator, as discussed later) or in TACACS+, RADIUS, or some other such protocol's packets (between the authenticator and the authentication server). The following sections look at each of these pieces and discuss how they come together to form the 802.1x communication infrastructure.


EAP

EAP is a fairly flexible protocol. It was originally designed to carry only PPP authentication parameters, but it also can be used by other protocols such as 802.1x for their authentication needs.

EAP can carry authentication data between two entities that want to set up authenticated communications between themselves. It supports a variety of authentication mechanisms, including one-time password, MD5 hashed username and password, and transport layer security (discussed later). EAP, using the packets described in the next section, allows the authenticator, the supplicant, and the authentication server to exchange the information they need to exchange to authenticate the supplicant.

RFC 2284 defines the EAP packet format as shown in Figure 5-4.
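As a rough sketch of that layout, RFC 2284 frames every EAP packet as a 1-byte Code (1 = Request, 2 = Response, 3 = Success, 4 = Failure), a 1-byte Identifier used to match responses to requests, a 2-byte Length covering the whole packet, and type-specific Data. The packing below builds and parses such packets; the sample identity payload is an assumption:

```python
import struct

def build_eap(code, identifier, data=b""):
    """Assemble an EAP packet: Code (1), Identifier (1), Length (2), Data."""
    length = 4 + len(data)                 # Length covers the entire packet
    return struct.pack("!BBH", code, identifier, length) + data

def parse_eap(packet):
    """Split an EAP packet back into its header fields and data."""
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return {"code": code, "id": identifier, "length": length, "data": packet[4:length]}

# An EAP-Response/Identity: for Requests and Responses the first data byte
# is the Type (1 = Identity), followed here by an assumed username.
pkt = build_eap(2, 7, bytes([1]) + b"alice")
print(parse_eap(pkt))
```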


EAPOL

We have looked at EAP, which is the underlying 802.1x protocol. But we have not looked at how EAP messages are actually framed and transported from the supplicant to the authenticator. The 802.1x standard defines an encapsulation/framing mechanism, known as EAP over LANs (EAPOL), to allow communication between the supplicant and the authenticator to take place. EAPOL encapsulation is defined separately for the Token Ring and Ethernet environments. EAP messages are encapsulated in EAPOL frames for transport between the supplicant and the authenticator. As soon as these frames reach the authenticator, it strips off the EAPOL headers, puts the EAP packet in a RADIUS or TACACS+ (or some other similar protocol) packet, and sends it to the authentication server. Figure 5-6 shows the relationship between the supplicant and the authenticator using EAPOL.




802.1x Functionality

This section puts together all the pieces of the 802.1x protocol discussed in the preceding sections and summarizes how 802.1x provides port authentication to supplicants.

The 802.1x functionality is based on a series of exchanges between the supplicant, authenticator, and authentication server. The authenticator plays an important role in these exchanges because it not only acts as a go-between for the supplicant and the authenticating server but also is responsible for enabling the port to which the supplicant is trying to connect for normal data traffic if the authentication is indeed successful.

The authentication process starts with the supplicant trying to connect to one of the ports on the authenticator. At this point, the port is open only for EAPOL traffic. The authenticator sees the port's operational state change to enabled due to the supplicant's connecting to it and requests authentication from the supplicant by sending it an EAP-Request/Identity message encapsulated in an EAPOL frame. The supplicant responds with an EAP-Response/Identity frame containing information about its identity, such as a username/password, also encapsulated in an EAPOL frame.

The authenticator decapsulates the EAP message from the EAPOL frame, repackages it in a RADIUS or TACACS+ packet, and forwards it to the authentication server. The authentication server, upon receiving this packet, responds with an EAP-Request message based on the authentication method it wants to use for this particular supplicant, encapsulated in a TACACS+ or RADIUS packet. Upon receiving this message, the authenticator strips off the TACACS+/RADIUS header, encapsulates the EAP message in an EAPOL frame, and forwards it to the supplicant. This back-and-forth EAP exchange between the supplicant and the authentication server via the authenticator continues until the authentication either succeeds or fails, as indicated by an EAP-Success or EAP-Failure message sent by the authentication server to the supplicant.

Upon seeing an EAP-Success message, the authenticator enables the port on which the supplicant is connected for normal data traffic. In addition to enabling the port for this type of traffic, the authenticator can place the port in a specific VLAN based on the information it receives from the authentication server. Figure 5-9 shows the overall 802.1x architecture and flow using EAP over EAPOL and EAP over TACACS+/RADIUS.
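The exchange can be condensed into a small, admittedly simplified simulation; the user database, the plain credential comparison, and the choice of RADIUS are assumptions standing in for whatever EAP method the authentication server actually selects:

```python
USER_DB = {"alice": "s3cret"}   # assumed authentication-server database

def authenticate(username, password):
    """Return the (message, encapsulation) flow and the resulting port state."""
    flow = [
        ("authenticator -> supplicant",  "EAP-Request/Identity",  "EAPOL"),
        ("supplicant -> authenticator",  "EAP-Response/Identity", "EAPOL"),
        ("authenticator -> auth server", "EAP-Response/Identity", "RADIUS"),
    ]
    # Stand-in for the method-specific challenge/response rounds.
    success = USER_DB.get(username) == password
    verdict = "EAP-Success" if success else "EAP-Failure"
    flow.append(("auth server -> supplicant", verdict, "RADIUS then EAPOL"))
    # The authenticator opens the port for data traffic only on success.
    port_state = "authorized" if success else "unauthorized"
    return flow, port_state

flow, port_state = authenticate("alice", "s3cret")
print(port_state)
```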

Thursday, May 6, 2010

Secure LAN Switching


Protocol Filtering and Controlling LAN Floods

Attackers can cause broadcast floods to disrupt communications over the LAN. You saw an example of this in the section "MAC Address Floods and Port Security." Therefore, it is important to control flooding on the switches. There are two main ways to do this:
  • Set up threshold limits for broadcast/multicast traffic on ports
  • Use protocol filtering to limit broadcasts/multicasts for certain protocols

Catalyst switches allow thresholds for broadcast traffic to be set up on a per-port basis. These thresholds can be set up either in terms of bandwidth consumed by broadcasts on a port or in terms of the number of broadcast packets being sent across a port. It is best to use the first method in most cases, because it is done in hardware and also because variable-length packets can render the second method meaningless.

The following command sets the threshold for broadcast and multicast packets on ports 1 to 6 of module 2 at 75%. This implies that as soon as 75% bandwidth of the port on a per-second basis is consumed by broadcast/multicast traffic, all additional broadcast/multicast traffic for that 1-second period is dropped.

Console> (enable) set port broadcast 2/1-6 75%

Protocol filtering provides another very useful mechanism for isolating and controlling environments that are susceptible to flooding attacks. Using the protocol-filtering feature on Catalyst switches, you can define protocol groups. Each group has certain protocols associated with it. It also has a set of ports that belong to it. Only the broadcast or multicast traffic for the protocols associated with a group is allowed to be sent to the ports that belong to that group. You should realize that although VLANs also create broadcast domains for the ports associated with them, protocol-filtering groups allow these domains to be created based on various protocols as well. Using protocol filtering, ports that have hosts on them that do not need to participate in the broadcast traffic for a certain protocol can be made part of a group that does not allow broadcast traffic for that protocol.

With the Catalyst 5000 family of switches, packets are classified into the following protocol groups:
  • IP (ip)
  • IPX (ipx)
  • AppleTalk, DECnet, and Banyan VINES (group)
  • Packets not belonging to any of these protocols
A port can be configured to belong to one or more of these four groups and be in any one of the following states for that group:
  • On
  • Off
  • Auto
If the configuration is set to on, the port receives all the flood (broadcast/multicast traffic) traffic for that protocol. If the configuration is set to off, the port does not receive any flood traffic for that protocol. If the configuration is set to auto, a port becomes a member of the protocol group only after the device connected to the port transmits packets of the specific protocol group. The switch detects the traffic, adds the port to the protocol group, and begins forwarding flood traffic for that protocol group to that port. Autoconfigured ports are removed from the protocol group if the attached device does not transmit packets for that protocol within 60 minutes. Ports are also removed from the protocol group when the supervisor engine detects that the link on the port is down.
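The auto state's join-on-traffic and 60-minute age-out behavior might be modeled as follows; timestamps are injected explicitly to keep the sketch testable, and the port names are illustrative:

```python
AGEOUT_SECONDS = 60 * 60   # ports age out after an hour of silence

class ProtocolGroup:
    """Toy model of one protocol group's auto-configured membership."""

    def __init__(self):
        self.last_seen = {}   # port -> time of last packet for this protocol

    def packet_seen(self, port, now):
        # Auto behavior: membership follows observed traffic.
        self.last_seen[port] = now

    def members(self, now):
        # Drop ports that have been silent for more than the age-out window.
        self.last_seen = {p: t for p, t in self.last_seen.items()
                          if now - t <= AGEOUT_SECONDS}
        return set(self.last_seen)

group = ProtocolGroup()
group.packet_seen("2/3", now=0)
print(group.members(now=3599), group.members(now=3601))
```

A real switch would also remove a port when the supervisor engine detects its link going down, which this sketch omits.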


Private VLANs on the Catalyst 6000

The Catalyst 6000 product line has introduced some enhancements to the switching arena for security purposes. We will discuss some of these in this section and see how they can be a useful security element in Layer 2 design.

A normal VLAN does not allow devices connected to it to be segregated from each other on Layer 2. This means that if a device on a VLAN becomes compromised, other devices on the same VLAN can also be attacked from that compromised device.

Private VLANs allow restrictions to be placed on the Layer 2 traffic on a VLAN.

There are three types of private VLAN ports:
  • Promiscuous ports— Communicate with all other private VLAN ports. The promiscuous port is generally the port used to communicate with the router/gateway on a segment.
  • Isolated ports— Have complete Layer 2 isolation from all other ports within the same private VLAN, with the exception of the promiscuous port.
  • Community ports— Communicate among themselves and with their promiscuous port. These ports are isolated at Layer 2 from all ports in other communities and from isolated ports within their private VLAN.
In essence, isolating a port stops any other machine on the same logical or physical segment from sending any traffic to that port.

When a port is isolated, all machines connected to the network using this port are provided complete isolation from traffic in all other ports, except for the promiscuous port. This means that no machines located on any of the other ports on the switch can send any traffic to the machines located on the isolated VLAN port. It is similar to placing two ports in two separate VLANs. The isolated ports communicate with the rest of the world through the promiscuous VLAN port, which can send traffic to and receive traffic from the isolated VLAN ports. Figure 5-2 gives a graphical view of which ports can communicate with which other ports in a private VLAN setup.
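On the Catalyst 6000 running CatOS, a private VLAN setup follows roughly this pattern (the VLAN numbers and ports here are illustrative):

```
! Create the primary and secondary VLANs
set vlan 100 pvlan-type primary
set vlan 101 pvlan-type isolated
set vlan 102 pvlan-type community
! Bind the secondary VLANs to the primary and assign host ports
set pvlan 100 101 3/1-8
set pvlan 100 102 3/9-16
! Map the promiscuous port (the router/gateway port) to the secondary VLANs
set pvlan mapping 100 101 3/24
set pvlan mapping 100 102 3/24
```

With this configuration, hosts on ports 3/1 through 3/8 can reach only the promiscuous port 3/24, while hosts on 3/9 through 3/16 can also reach each other.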


ARP Spoofing, Sticky ARP, and Private VLANs

A security problem that private VLANs resolve is that of ARP spoofing. Network devices often send out what is known as a gratuitous ARP or courtesy ARP to let other machines on their broadcast domain know their IP address and the corresponding MAC address. This generally happens at bootup, but it can also occur at regular intervals after that. An attacker who has gained access to a compromised machine on the LAN can force the compromised machine to send out gratuitous ARPs for IP addresses that do not belong to it. This results in the rest of the machines sending their frames intended for those IP addresses to the compromised machine. This type of attack can have two consequences:
  • It can result in a DoS attack if the attacker spoofs the IP address/MAC address of the network's default gateway in its gratuitous ARPs. This causes all the machines on the broadcast domain to send the traffic destined for the default gateway to the compromised host, which in turn can simply drop this traffic, resulting in a DoS.
  • The attacker can analyze the traffic being sent to it and use the information found therein for various malicious activities.

Private VLANs offer protection from this type of attack by providing isolation between the various ports on a VLAN. This stops an attacker who controls a compromised machine on one port from receiving the traffic sent by machines on the other ports of the switch.

Another feature, known as sticky ARP, which is available in conjunction with private VLANs, can also help mitigate these types of attacks. The sticky ARP feature makes sure that the ARP entries learned on the private VLANs do not age out and cannot be changed. Suppose an attacker somehow compromises and takes control of a machine on a private VLAN and tries ARP spoofing by sending out gratuitous ARPs, announcing the machine as the owner of a MAC address/IP address mapping that it does not own. The switch ignores these ARPs and does not update its ARP table to reflect these mappings. If there is a genuine need to change a port's MAC address, the administrator must make the change manually.

Monday, March 15, 2010

Secure LAN Switching

In order to provide comprehensive security on a network, it is important to take the concept of security to the last step and ensure that the Layer 2 devices, such as the switches that manage the LANs, are also operating in a secure manner.

This chapter focuses on the Cisco Catalyst 5000/5500 series switches. We will discuss private VLANs in the context of the 6000 series switches. Generally, similar concepts can be implemented in other types of switches (such as the 1900, 2900, 3000, and 4000 series switches) as well.

Security on the LAN is important because some security threats can be initiated at Layer 2 rather than at Layer 3 and above. One example is an attack in which a compromised server on a DMZ LAN is used to connect to another server on the same segment, despite access control lists on the firewall connected to the DMZ. Because the connection occurs at Layer 2, without suitable measures to restrict traffic at this layer, this type of access attempt cannot be blocked.


General Switch and Layer 2 Security

Some of the basic rules to keep in mind when setting up a secure Layer 2 switching environment are as follows:

  • VLANs should be set up in ways that clearly separate the network's various logical components from each other. VLANs lend themselves to providing segregation between logical workgroups. This is a first step toward segregating portions of the network needing more security from portions needing lesser security. It is important to have a good understanding of what VLANs are. VLANs are a logical grouping of devices that might or might not be physically located close to each other.
  • If some ports are not being used, it is prudent to turn them off as well as place them in a special VLAN used to collect unused ports. This VLAN should have no Layer 3 access.
  • Although devices on a particular VLAN cannot access devices on another VLAN unless specific mechanisms for doing so (such as trunking or a device routing between the VLANs) are set up, VLANs should not be used as the sole mechanism for providing security to a particular group of devices on a VLAN. VLAN protocols are not constructed with security as the primary motivator behind them. The protocols that are used to establish VLANs can be compromised rather easily from a security perspective and allow loopholes into the network. As such, other mechanisms such as those discussed next should be used to secure them.
  • Because VLANs are not a security feature, devices at different security levels should be isolated on separate Layer 2 devices. For example, having the same switch chassis on both the inside and outside of a firewall is not recommended. Two separate switches should be used for the secure and insecure sides of the firewall.
  • Unless it is critical, Layer 3 connectivity such as Telnets and HTTP connections to a Layer 2 switch should be restricted and very limited.
  • It is important to make sure that trunking does not become a security risk in the switching environment. Trunks should not use VLAN IDs that belong to a VLAN in use anywhere on the switched network, because this can erroneously allow packets from the trunk port to reach other ports located in the same VLAN. Ports that do not require trunking should have trunking disabled. An attacker can use trunking to hop from one VLAN to another by pretending to be another switch, using ISL or 802.1q signaling along with Dynamic Trunking Protocol (DTP). This allows the attacker's machine to become part of all the VLANs on the switch being attacked. It is generally a good idea to set DTP to off on all ports not being used for trunking, and to use dedicated VLAN IDs for all trunks rather than VLAN IDs that are also being used for nontrunking ports. Otherwise, an attacker can rather easily make itself part of a trunking VLAN and then use trunking to hop onto other VLANs as well.

Generally, it is difficult to protect against attacks launched from hosts sitting on a LAN. These hosts are often considered trusted entities. As such, if one of these hosts is used to launch an attack, it becomes difficult to stop it. Therefore, it is important to make sure that access to the LAN is secured and is provided only to trusted people.

Some of the features we will discuss in the upcoming sections show you ways to further secure the switching environment.

The discussion in this chapter revolves around the use of Catalyst 5xxx and 6xxx switches. The same principles can be applied to setting up security on other types of switches.


Port Security

Port security is a mechanism available on the Catalyst switches to restrict the MAC addresses that can connect via a particular port of the switch. This feature allows a specific MAC address or a range of MAC addresses to be defined and specified for a particular port. A port set up for port security only allows machines with a MAC address belonging to the range configured on it to connect to the LAN. The port compares the MAC address of any frame arriving on it with the MAC addresses configured in its allowed list. If the address matches, it allows the packet to go through, assuming that all other requirements are met. However, if the MAC address does not belong to the configured list, the port can either simply drop the packet (restrictive mode) or shut itself down for a configurable amount of time. This feature also lets you specify the number of MAC addresses that can connect to a certain port.
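On CatOS, a port security configuration along these lines restricts a port to a single known MAC address (the port number and address here are illustrative):

```
! Enable port security and bind an allowed MAC address to port 3/1
set port security 3/1 enable
set port security 3/1 mac-address 00-90-2b-03-34-08
! Permit at most one MAC address on the port
set port security 3/1 maximum 1
! Drop offending frames (restrictive mode) rather than shutting the port down
set port security 3/1 violation restrict
```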


MAC Address Floods and Port Security

Port security is especially useful in the face of MAC address flooding attacks. In these attacks, an attacker tries to fill up a switch's CAM tables by sending a large number of frames to it with source MAC addresses that the switch is unaware of at that time. The switch learns about these MAC addresses and puts them in its CAM table, thinking that these MAC addresses actually exist on the port on which it is receiving them. In reality, this port is under the attacker's control and a machine connected to this port is being used to send frames with spoofed MAC addresses to the switch. If the attacker keeps sending these frames in a large-enough quantity, and the switch continues to learn of them, eventually the switch's CAM table becomes filled with entries for these bogus MAC addresses mapped to the compromised port.

Under normal operations, when a machine receiving a frame responds to it, the switch learns that the MAC address associated with that machine sits on the port on which it has received the response frame. It puts this mapping in its CAM table, allowing it to send any future frames destined for this MAC address directly to this port rather than flood all the ports on the VLAN. However, in a situation where the CAM table is filled up, the switch is unable to create this CAM entry. At this point, when the switch receives a legitimate frame for which it does not know which port to forward the frame to, the switch floods all the connected ports belonging to the VLAN on which it has received the frame. The switch continues to flood the frames with destination addresses that do not have an entry in the CAM tables to all the ports on the VLAN associated with the port it is receiving the frame on. This causes two main problems:
  • Network traffic increases significantly due to the flooding done by the switch. This can result in a denial of service (DoS) for legitimate users of the switched network.
  • The attacker can receive frames that are being flooded by the switch and use the information contained in them for various types of attacks.

Figure 5-1 shows how MAC address flooding can cause CAM overflow and subsequent DoS and traffic analysis attacks.


Figure 5-1 shows the series of steps that take place to orchestrate a MAC address flooding attack. These steps are as follows:

Step 1. A compromised machine is attached to port 4. Frames sourced from fictitious MAC addresses (denoted by G, H, E, F, and so on) are sent on port 4. The actual MAC address of the compromised machine is denoted by D.


Step 2. Due to the flooding of frames on port 4, the CAM table of the switch fills up, and it is unable to learn any more MAC-address-to-port mappings.


Step 3. A host on port 1 with a MAC address denoted by A sends a frame sourced from MAC address A to MAC address B. The switch is unable to learn and associate port 1 with MAC address A, because its CAM table is full.


Step 4. A host on port 3 with a MAC address denoted by C sends a frame to MAC address A. Because the switch does not have an entry in its CAM table for A, it floods the frame to all its ports in that VLAN. The resulting flooding causes a DoS and also gives the attacker, who receives the flooded frames on port 4, an opportunity for traffic analysis.


IP Permit Lists

IP permit lists are used to restrict Telnet, SSH, HTTP, and SNMP traffic from entering the switch. This feature allows IP addresses to be specified that are allowed to send these kinds of traffic to the switch.

The configuration shown in Example 5-3 on a switch enables the ip permit list feature and then restricts Telnet access to the switch from the 172.16.0.0/16 subnet and SNMP access from 172.20.52.2 only. The host, 172.20.52.3, is allowed to have both types of access to the switch.
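The exact listing from Example 5-3 is not reproduced here, but a CatOS configuration matching that description would look roughly like this:

```
set ip permit enable
! Telnet access from the 172.16.0.0/16 subnet
set ip permit 172.16.0.0 255.255.0.0 telnet
! SNMP access from a single management station
set ip permit 172.20.52.2 snmp
! Both Telnet and SNMP access for this host
set ip permit 172.20.52.3
```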


IP permit lists are an essential feature to configure on a switch in situations where Layer 3 access to the switch is needed. As stated earlier, Layer 3 access to a switch should remain fairly limited and controlled.

Wednesday, February 24, 2010

Secure Routing

Unicast Reverse Path Forwarding

Unicast Reverse Path Forwarding (URPF) is a tool implemented on routers to thwart attempts to send packets with spoofed source IP addresses. A spoofed source IP address makes tracking the real source of an attack very difficult. For example, if site A is getting attacked with ICMP floods coming from a source IP address in the range 150.1.1.0/24, the only place for that site to look to stop this kind of attack is the network that contains the 150.1.1.0/24 subnet (site B). However, more than likely, the packets are actually coming from some other network (site C), often compromised too, that does not contain the 150.1.1.0/24 subnet. However, other than tracking the source of the packets one hop at a time, the attacked entity has no way of determining this. In this situation, it would be great if site C's network administrators (and, ideally, the administrators of all the other sites on the Internet) had some sort of mechanism in place on their routers that does not allow packets with source IP addresses not in the range belonging to their respective sites to go out.

URPF works by looking up, in the router's routing table, the source IP address of any packet arriving inbound on an interface. Logically, if the source IP address belongs to the network behind the router and is not a spoofed address, the routing table contains an entry showing the router a way to get to that address via the interface on which the packet arrived. However, if the address is spoofed, there probably isn't such an entry, because the address does not lie behind the router but is stolen from some other network on the Internet (site B in our example). If the router's lookup does not find a route to the source IP address through the receiving interface, it drops the packet.

One thing to note here is that URPF needs to have Cisco Express Forwarding (CEF) enabled on the router. URPF looks at the Forwarding Information Base (FIB) that is generated by CEF rather than looking directly at the routing table. This is a more efficient way of doing the lookup. Figure 4-2 demonstrates how URPF works.


Figure 4-2 shows two scenarios. In Scenario 1, a packet is allowed to pass through the router after it successfully passes the URPF check. In Scenario 2, a packet is dropped because it fails the URPF check. Let's look at each scenario separately, and in sequence:

Scenario 1:

1. The packet arrives on S0 with a source IP address of 90.1.1.15.

2. URPF does a reverse route lookup on the source IP address and finds that it can be routed back through S0.

3. URPF allows the packet to pass through.


Scenario 2:

1. The packet arrives on S1 with a source IP address of 90.1.1.19.

2. URPF does a reverse route lookup on the source IP address and finds that it can be routed back through S0, not S1.

3. Because the interface on which the packet arrived is not the same one through which it can be routed back, URPF causes the packet to be dropped.


Configuring URPF is fairly simple. However, you should be careful when choosing the right place to configure it. It should not be set up on routers that might have asymmetric routes.
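A minimal IOS sketch (the interface name is illustrative):

```
! URPF requires CEF, which builds the FIB used for the reverse lookup
ip cef
!
interface Serial0
 ip verify unicast reverse-path
```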

Asymmetric routing is said to occur when the interface through which the router sends return traffic for a packet is not the interface on which the original packet was received. For example, if the original packet is received on interface X, the return traffic for it is sent out via interface Y. Although this might be a perfectly legitimate arrangement for a network, this situation is incompatible with URPF. The reason is that URPF assumes that all routing occurring on a router is symmetric. It drops any traffic received on the router for which the return path is not through the same interface as the one on which the traffic is being received.

Generally, the best place to apply URPF is on the edge of a network. The reason is that this allows URPF's antispoofing capabilities to be available to the entire network rather than just a component of it.


Path Integrity

After routing protocols have been set up in a secure fashion, it is important to ensure that all traffic is routed based on the paths calculated as optimum by the routing protocols. However, some features in IP can let changes be made to the routing decisions that routers would make if they were left alone to rely on the routing protocols themselves. Two of the most important features in this regard are ICMP redirects and IP source routing.


ICMP Redirects

ICMP redirects are a way for a router to let another router or host (let's call it A) on its local segment know that the next hop on the same local segment it is using to reach another host (B) is not optimal. In other words, the path should not go through it. Instead, host A should send the traffic directly to the next hop in the optimal path to host B. Although the router forwards the first packet to the optimal next hop, it expects the sending host A to install a route in its routing table to ensure that next time it wants to send a packet to B, it sends it to the optimal next hop. If the router receives a similar packet again, it simply drops it.

Cisco routers send ICMP redirects when all the following conditions are met:
  • The interface on which the packet comes into the router is the same interface on which the packet gets routed out.
  • The subnet/network of the source IP address is the same as the subnet/network of the routed packet's next-hop IP address.
  • The datagram is not source-routed.
  • The router kernel is configured to send redirects.

Although redirects are a useful feature to have, a properly set-up network should not have much use for them. And it is possible for attackers to use redirects to change routing in ways that suit their purposes. So it is generally desirable to turn off ICMP redirects. By default, Cisco routers send ICMP redirects. You can use the interface subcommand no ip redirects to disable ICMP redirects.
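Disabling redirects is a per-interface, one-line change (the interface name is illustrative):

```
interface Ethernet0
 ! Do not send ICMP redirects out this interface
 no ip redirects
```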


IP Source Routing

IP source routing is an IP feature that allows a user to set a field in the IP packet specifying the path he or she wants the packet to take. Source routing can be used to subvert the workings of normal routing protocols, giving attackers the upper hand. Although there are a few ways of using source routing, by far the most well-known is loose source record route (LSRR), in which the sender defines one or more hops that the packet must go through to reach a destination.
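On IOS, source-routed packets can be dropped with a single global command:

```
! Discard packets carrying IP source-route options
no ip source-route
```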

Tuesday, February 9, 2010

Secure Routing

Building Security into Routing Design

In order to have a secure network, it is essential that you build security into how traffic flows in the network. Because routing protocols determine how traffic flows in the network, it is essential to make sure that the routing protocols are chosen and implemented in a manner that is in line with the security requirements of the network. Needless to say, a network with a secure routing architecture is less vulnerable to attacks and oversights than a network with a poorly designed routing structure. A properly designed routing infrastructure can also help reduce the downtime a network suffers during a network attack.


Route Filtering

Proper route filtering is important to any well-implemented network. It is especially important in a private network with routing links to the outside world. It is important in these networks to ensure that route filtering is used to filter out any bogus or undesired routes coming into the private network as well as make sure that only the routes actually contained on the internal network are allowed to be advertised. It is also important to make sure that the only advertised networks are those for which access from outside the private network is desired.

On any private network connected through an ISP to the Internet or a larger public network, the following routes should be filtered from entering the network in most situations (this filtering can also be carried out on the ISP routers):
  • 0.0.0.0/0 and 0.0.0.0/8— Default and network 0 (unique and now historical properties)
  • 127.0.0.0/8— Host loopback
  • 192.0.2.0/24— TEST-NET, generally used for examples in vendor documentation
  • 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16— RFC 1918 private addresses
  • 169.254.0.0/16— Link-local addresses, used by end nodes for autoconfiguration when DHCP is unavailable
  • 224.0.0.0/3— Multicast (class D) and reserved (class E) addresses
The addresses belonging to the address space reserved by IANA can also be blocked. See the following URL for IANA address space allocations: www.iana.org/assignments/ipv4-address-space.
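One way to express these filters on an IOS border router is a prefix list applied inbound on the BGP session toward the ISP. This is a sketch; the list name, AS number, and neighbor address are illustrative:

```
ip prefix-list BOGONS deny 0.0.0.0/0
ip prefix-list BOGONS deny 0.0.0.0/8 le 32
ip prefix-list BOGONS deny 127.0.0.0/8 le 32
ip prefix-list BOGONS deny 192.0.2.0/24 le 32
ip prefix-list BOGONS deny 10.0.0.0/8 le 32
ip prefix-list BOGONS deny 172.16.0.0/12 le 32
ip prefix-list BOGONS deny 192.168.0.0/16 le 32
ip prefix-list BOGONS deny 169.254.0.0/16 le 32
ip prefix-list BOGONS deny 224.0.0.0/3 le 32
ip prefix-list BOGONS permit 0.0.0.0/0 le 32
!
router bgp 65001
 neighbor 192.0.2.1 prefix-list BOGONS in
```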

Filters can also be set up to ensure that IP address blocks belonging to a private network are not allowed to be advertised back into the network from outside. This is a necessary precaution to protect the traffic intended for some of the hosts on the inside network from being routed somewhere unintended.

Route filtering can also be used to hide one piece of a network from another. This can be important in organizations that need varying amounts of security in different parts of the network. In addition to having firewalls and other authentication mechanisms in place, route filtering can also rule out the ability of machines in a less-secure network area to reach a more-secure area if they don't have a route to that portion of the network. However, route filtering should not be used as the sole network security measure.

Applying filtering correctly is important as well. It is a good practice to filter both incoming and outgoing routes. Ingress filtering ensures that routes not intended for a network are not flooded into it during an erroneous or malicious activity on another network. Also, if something goes wrong in one part of the network, egress filtering can stop that problem from spreading to the rest of the network.

ISPs can also consider using what is called a net police filter, whereby no routes with prefixes more specific than /20 (or perhaps up to /24) are allowed to come in. This is often done to make sure that an attack cannot be staged on a large ISP's router by increasing the size of its routing tables. Routes more specific than /20 are often not needed by large ISPs.

Therefore, an ISP can filter out these routes to keep its routing table from getting out of control in terms of size. How specific a prefix a router should accept should be determined by a network administrator who understands what is necessary for the router to perform its functions properly.
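A net police filter can likewise be sketched as a prefix list that accepts only prefixes of /20 or shorter; anything more specific falls through to the implicit deny (the list name, AS number, and neighbor are illustrative):

```
! Accept only aggregates no more specific than /20
ip prefix-list NET-POLICE permit 0.0.0.0/0 le 20
!
router bgp 65000
 neighbor 192.0.2.2 prefix-list NET-POLICE in
```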


Convergence

Fast convergence is important for having a secure routing infrastructure. A network that is slow to converge takes longer to recover from network-disrupting attacks and thus aggravates problems. On an Internet-wide basis, slow convergence of BGP for interdomain routing can mean a considerable loss of revenue for a large number of people. However, even for a small network, slow convergence can mean loss of productivity for a significant number of people.

A slow-converging network is also liable to be more susceptible to a denial of service (DoS) attack. The loss of one or two nodes at a time, making the network take a long time to converge, could mean that a DoS attack confined to just one node actually spreads to the whole network.

At various points in this chapter, we will touch on convergence and how it can be improved. In general, a network's convergence speed can depend on a lot of factors, including the complexity of the network architecture, the presence of redundancy in the network, the parameters set up for route calculation engines on the various routers, and the presence of loops in the network. The best way to improve convergence speed is for the network administrator to thoroughly understand the workings of the network and then to improve its convergence speed by designing the network around the aspect of faster convergence.


Static Routes

Static routes are a useful means of ensuring security in some circumstances. Static routes might not scale to all situations, but where they do, they can be used to hard code information in the routing tables such that this information is unaffected by a network attack or occurrences on other parts of the network. Static routes are also a useful way to define default route information.
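For example, a default route can be hard-coded so that it is unaffected by routing-protocol manipulation elsewhere in the network (the next-hop address is illustrative):

```
! Hard-coded default route toward the upstream gateway
ip route 0.0.0.0 0.0.0.0 10.1.1.1
```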


Router and Route Authentication

The reason for having router and route authentication and route integrity arises from the risk of an attacker who configures his or her machine or router to share incorrect routing information with another router that is part of the network being attacked. The attacked router can be tricked into not only sending data to the incorrect destination, but through clever maneuvering can be completely put out of commission as well. Routing changes can also be induced simply to redirect the traffic to a convenient place in the network for the attacker to analyze it. This can result in the attacker's being able to identify patterns of traffic and obtain information not intended for him or her.

An example of such an attack occurs in RIP environments, where bogus RIP route advertisements can be sent out on a segment. Routers running RIP readily accept these updates into their routing tables unless an authentication mechanism is in place to verify the routes' source.

Another issue that prompts router authentication, especially in BGP, is the fear of an attack wherein a rogue router acting as a BGP speaker and neighbor advertises a lot of specific routes into a core router's routing table, causing the router to stop functioning properly due to the greatly increased size of its routing table.

There are two main ways in which Cisco routers provide security in terms of exchanging routing information between routers:
  • Authenticating routers that are sharing routing information among themselves so that they share information only if they can verify, based on a password, that they are talking to a trusted source
  • Authenticating the veracity of the information being exchanged and making sure it has not been tampered with in transit
Most major routing protocols support these measures. There are two ways that routers are authenticated to each other when sharing route information:
  • By using a clear-text key (password) that is sent along with the route being sent to another router. The receiving router compares this key to its own configured key. If they are the same, it accepts the route. However, this is not a very good method of ensuring security, because the password information is sent in the clear. It is more a method to avoid misconfigurations than anything else. A skilled hacker can easily get around it.
  • By using MD5-HMAC. With this method, the key is not sent over the wire in plain text. Instead, a keyed hash is calculated using the configured key. The routing update is used as the input text, along with the key, to the hashing function. This hash is sent along with the route update to the receiving router. The receiving router compares the received hash with a hash it generates on the route update using the preshared key configured on it. If the two hashes are the same, the route is assumed to be from a trusted source. This is a more secure method of authentication than the clear-text password, because the preshared secret is never sent over the wire.
The second method of authentication using MD5-HMAC also allows for checking route integrity. If the route information is tampered with during transit, the receiving router upon calculating the hash on the route information finds the hash to be different from the hash sent by the original router. Even if an attacker intercepts the route information and injects a new hash after changing the route information, the attempt fails, because the attacker does not know the correct key to calculate the hash. That key is known only to the sending and receiving routers. Figure 4-1 shows how route authentication occurs on Cisco routers.
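As a concrete sketch, MD5 authentication for RIPv2 on IOS is configured through a key chain (the names, network, and key string here are placeholders):

```
key chain TRUSTED-KEYS
 key 1
  key-string n0tS0s3cr3t
!
interface Ethernet0
 ip rip authentication mode md5
 ip rip authentication key-chain TRUSTED-KEYS
!
router rip
 version 2
 network 10.0.0.0
```

Both neighbors on the segment must be configured with the same key string, or updates are rejected.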

Wednesday, January 20, 2010

PIX Firewall Security

The PIX, being a security-specific device, is fairly robust from a security perspective. This section talks about some of the important techniques you can use to make the firewall even more secure from a device perspective. The earlier section "Router Security" talks about the reasons for having most of these safeguards, so I will not repeat them here but rather will concentrate on the actual implementations.


Configuration Management

Managing a configuration away from the PIX box in case of an attack is important. PIX allows configurations to be saved on a TFTP server via the write net command. The write net command writes the PIX configuration to a TFTP server specified by the tftp-server command.

The configuration should be saved regularly and after all changes are made to the PIX setup. It is prudent to save the PIX images to a server as well.
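A sketch of the commands involved (the server address and path are illustrative):

```
! Point the PIX at a TFTP server reachable via the inside interface
tftp-server inside 10.1.1.50 /configs/pix-fw1.cfg
! Save the running configuration to that server
write net
```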

Care needs to be taken with where the TFTP server resides, because the PIX as of version 6.2.1 does not have the concept of a source interface. Therefore, it is possible to misconfigure the PIX and send management-related traffic through a lower-security interface and possibly over an untrusted network.


Controlling Access to the PIX

The PIX Firewall can be accessed in two primary ways:
  • vty port
  • TTY console

vty access via Telnet ports is the most common way to access a PIX Firewall for administrative purposes. PIX can be accessed from the inside network via plain-text Telnet. However, to access it from the outside interface, an IPsec client needs to be set up to initiate a VPN connection to the PIX using IPsec.

Telnet access needs to be restricted to certain addresses to ensure security. Example 3-28 shows how restricted Telnet access can be set up on a PIX Firewall.
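The exact listing from Example 3-28 is not reproduced here, but restricted Telnet access on the PIX takes roughly this form (the addresses are illustrative):

```
! Allow Telnet only from one management subnet on the inside interface
telnet 10.1.1.0 255.255.255.0 inside
! Idle timeout for Telnet sessions, in minutes
telnet timeout 5
```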


Switch Security

For the purpose of our discussion here, I will concentrate on the Catalyst 5500 switches. Similar mechanisms can be used to set up security on other types of switches. Switches perform most of their functions at Layer 2 of the OSI model. They often do not participate in Layer 3 and above operations actively. Consequently, access to switches through various Layer 3 and above functions such as Telnet and rsh is very limited. This provides for switch security as well. This section looks at some of the mechanisms you can put into place to further strengthen switch security.


Summary

Ensuring that the devices that are responsible for regulating traffic in a network are themselves secure is critical to ensuring the security of the overall network infrastructure. This chapter looked at some of the basic physical and logical measures you can take to ensure the security of network devices. Special consideration was given to three main components of a secure network: routers, switches, and PIX Firewalls. Specific features available to protect routers, switches, and firewalls were discussed. The use and abuse of various features available on these devices were also described. Having discussed the features that protect these devices from attacks, this chapter built the foundation for discussing the various security features available on these devices to protect the network of which they are a component.

Monday, January 4, 2010

Password Management

The best place for passwords is on an authentication server. But some passwords still might need to be configured on the router itself. It is important to ensure that these passwords are properly encrypted to be secure from prying eyes (people looking over the network administrator's shoulder as he or she works on a configuration). It is important to configure an enable secret on the router (rather than a plain password known simply as the enable password) to get administrative access to the box. The enable secret uses MD5 to encrypt the password, and the hash is extremely difficult to reverse. Example 3-12 shows its usage.
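Example 3-12 is not reproduced here, but the configuration itself is a single command; the password shown is illustrative:

  enable secret n0tTh3R3alPw

In the running configuration, the password then appears only as an MD5 hash (a type 5 password), never as clear text.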

Cisco IOS version 12.2(8)T introduced the enhanced password security feature, which allows MD5 encryption to be configured for username passwords. Before this feature was introduced, two types of passwords could be associated with usernames: type 0, a clear-text password visible to any user who has access to privileged mode on the router, and type 7, a password protected with a weak form of encryption. Type 7 passwords can be recovered from the encrypted text by using publicly available tools. Example 3-12 shows how this new feature can be implemented.
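A sketch of the enhanced password security feature follows; the username and password are illustrative:

  username admin secret n0tTh3R3alPw

The secret keyword stores the password as an MD5 hash, in contrast to the older username admin password form, which produces a type 0 or type 7 password.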


It is also important to ensure that the rest of the passwords on the box, such as CHAP passwords, are also encrypted so that a casual view of the configuration does not reveal them. You can do this using the service password-encryption command, as shown in Example 3-12. The catch with this command is that it uses type 7 encryption rather than the MD5 hash used by enable secret commands. This type of encryption is weaker and easier to crack than MD5 encryption. Password encryption set up using this command is applied to all passwords, including username passwords, authentication key passwords, the privileged command password, console and virtual terminal line access passwords, and Border Gateway Protocol (BGP) neighbor passwords. However, note that with the introduction of the new feature discussed in the preceding section, usernames and their corresponding passwords can now be hidden using MD5 hashing.
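The command itself takes no arguments and applies to the configuration as a whole:

  service password-encryption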


Using Loopback Interfaces

Loopback interfaces can play an important part in securing a device against attacks. Generally, any router depends on a series of services for which it must access other routers and servers. It is important to make sure that the servers the router contacts for this information accept connections only from a very small block of trusted IP addresses; treating the entire private addressing scheme as trusted can be dangerous. Loopbacks can play a vital role in making this happen. A block of IP addresses can be set aside for loopback interfaces, all routers can be forced to use these loopback addresses as source addresses when accessing the servers, and the servers can then be locked down to allow access only from this block.

Some examples of servers to which access can be restricted in this manner are SNMP, TFTP, TACACS, RADIUS, Telnet, and syslog servers. Example 3-14 lists the commands required to force the router to use the IP address on the loopback0 interface as the source address when sending packets to the respective servers.
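Example 3-14 is not reproduced here; the following sketch shows the general pattern, assuming a loopback0 interface numbered out of the trusted management block (the address is illustrative):

  interface Loopback0
   ip address 192.168.254.1 255.255.255.255
  !
  ip tftp source-interface Loopback0
  ip tacacs source-interface Loopback0
  ip radius source-interface Loopback0
  ip telnet source-interface Loopback0
  logging source-interface Loopback0
  snmp-server trap-source Loopback0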


Controlling SNMP as a Management Protocol

Device and network management protocols are important to maintain any network. However, these services can be used as back doors to gain access to routers and/or get information about the devices. The attacker can then use this information to stage an attack.

SNMP is the most commonly used network management protocol. However, it is important to restrict SNMP access to the routers on which it is enabled. On routers on which it is not being used, you should turn it off using the command shown in Example 3-15.

Example 3-15. Disabling SNMP on a Router

no snmp-server


SNMP v3

SNMP v3 as defined in RFCs 2271 through 2275 provides guidelines for secure implementation of the SNMP protocol. RFC 2271 defines the following as the four major threats against SNMP that SNMP v3 attempts to provide some level of protection against:

  • Modification of information— The modification threat is the danger that some unauthorized entity might alter in-transit SNMP messages generated on behalf of an authorized user in such a way as to effect unauthorized management operations, including falsifying an object's value.
  • Masquerade— The masquerade threat is the danger that management operations not authorized for a certain user might be attempted by assuming the identity of a user who has the appropriate authorization.
  • Disclosure— The disclosure threat is the danger of eavesdropping on exchanges between managed agents and a management station. Protecting against this threat might be required as a matter of local policy.
  • Message stream modification— The SNMP protocol is typically based on a connectionless transport service that may operate over any subnetwork service. The reordering, delay, or replay of messages can and does occur through the natural operation of many such subnetwork services. The message stream modification threat is the danger that messages might be maliciously reordered, delayed, or replayed to a greater extent than can occur through the natural operation of a subnetwork service to effect unauthorized management operations.

Protection Against Attacks

SNMP v3 aims to protect against these types of attacks by providing the following security elements:

  • Message integrity— Ensuring that a packet has not been tampered with in transit.
  • Authentication— Determining that the message is from a valid source.
  • Encryption— Scrambling a packet's contents to prevent it from being seen by an unauthorized source.
Table 3-2 compares the levels of security provided by each of the SNMP protocols available on networks today.
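As an illustration of the strongest (auth/priv) level, the following is a sketch of an SNMP v3 group and user configuration on a router; the group name, username, and passwords are placeholders:

  snmp-server group ADMINGRP v3 priv
  snmp-server user admin ADMINGRP v3 auth md5 authPassw0rd priv des56 privPassw0rd

Here MD5 provides the message integrity and authentication elements, and DES provides the encryption element.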



Login Banners

A login banner is a useful place to put information that can help make the system more secure. Here are some do's and don'ts of what to put in a login banner:

  • A login banner should advertise the fact that unauthorized access to the system is prohibited. You can discuss the specific wording with legal counsel.
  • A login banner can also advertise the fact that access to the device will be tracked and monitored. This is a legal requirement in certain places. Again, legal counsel can help.
  • It is advisable not to include the word "welcome" in a banner.
  • It is inappropriate to include information that reveals anything about the operating system, hardware, or logical configuration of the device. Reveal as little as possible about the system's ownership and identity.
  • Other notices to ward off criminal activity may also be included.

When a user connects to the router, the MOTD banner appears before the login prompt. After the user logs in to the router, the EXEC banner or incoming banner is displayed, depending on the type of connection. For a reverse Telnet login, the incoming banner is displayed. For all other connections, the router displays the EXEC banner.
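Following these guidelines, an MOTD banner might read as in the following sketch (the exact wording should be cleared with legal counsel):

  banner motd ^C
  UNAUTHORIZED ACCESS TO THIS DEVICE IS PROHIBITED.
  All access attempts are logged and monitored.
  ^C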

Wednesday, December 23, 2009

Router Security

This section discusses how security can be improved on routers so that any attempts to disable the router, gain unauthorized access, or otherwise impair the functioning of the box can be stopped. It is important to note that these measures in most cases only secure the device itself and do not secure the whole network to which the device is connected. However, a device's security is critical to the network's security. A compromised device can cause the network to be compromised on a larger scale.

The following sections discuss some of the steps you can take to secure a router against attacks aimed at compromising the router itself.


Configuration Management

It is critical to keep copies of the router's configurations in a location other than the router's NVRAM. This is important in the event of an attack in which a router loses its configuration or has its configuration corrupted or changed in some manner and needs to be restored to its original state. A backed-up configuration allows the network to be brought back up quickly, functioning the way it was meant to. This can be achieved by copying the router configurations to an FTP server at regular intervals or whenever the configuration is changed. Cron jobs can also be set up to pull configurations from the routers at regular intervals. Many freeware tool sets are available for this functionality, as well as a number of robust commercial packages, such as CiscoWorks2000.

You can use the commands described next to copy a router's configuration to an FTP server. Although TFTP can be used as well, FTP is a more secure means of transporting this information.

The copy command, shown in Example 3-4, not only defines the IP address of the FTP server to move the file to but also specifies the username (user) and the password (password) to use to log in to the FTP server.


The ip ftp username and ip ftp password commands can also be used to set up the username and password on the router for FTP.
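Example 3-4 is not reproduced here, but the procedure amounts to the following sketch; the server address, filename, and credentials are illustrative:

  ip ftp username backupuser
  ip ftp password backuppass
  copy running-config ftp://10.1.1.20/router1-confg

The username and password can also be embedded in the URL itself (ftp://user:password@server/file), though that leaves them visible in the command history.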

It is also useful to have a backup of the software images running on a router in case of a network attack that removes the software from the router or corrupts it.


Controlling Access to the Router

It is important to control the accessibility to a router. There are two main mechanisms to gain access to a router for administrative purposes:
  • vty ports
  • TTY console and auxiliary ports
vty ports are generally used to gain remote interactive access to the router. The most commonly used methods of vty access are Telnet, SSH, and rlogin.

TTY lines in the form of console and auxiliary ports are generally used to gain access when a physical connection to the router is available, in the form of a terminal connected to the router or a modem hooked to it. The console port is used to log in to the router by physically connecting a terminal to it. The aux port can be used to attach an RS-232 device, such as a CSU/DSU, a protocol analyzer, or a modem, to the router.

vty access to a router using Telnet is by far the most common router administration tool. Console access and access through the aux port using a modem are out-of-band methods often used as a last resort on most networks. However, using a mechanism known as reverse Telnet, it might be possible for remote users to gain access to a router through the auxiliary or console ports. This needs to be protected against as well, as described next.


Controlling vty Access

At a minimum, you can follow these steps to control vty access into a router:

Step 1. Restrict access only via the protocols that will be used by the network administrators.

The commands shown in Example 3-5 set up vty lines 0 through 4 for Telnet and SSH access only. In Cisco IOS Release 11.1, the none keyword was added and became the default. Before Cisco IOS Release 11.1, the default keyword was all, allowing all types of protocols to connect via the vty lines by default.
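Example 3-5 is not reproduced here; the configuration it describes is a sketch along these lines:

  line vty 0 4
   transport input telnet ssh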

It is important to realize that although Telnet is by far the most popular way of accessing a router for administrative purposes, it is also the most insecure. SSH provides an encrypted mechanism for accessing a router. It is advisable to set up SSH on a router and then disable Telnet access to it.


Step 2. Configure access lists to allow vty access only from a restricted set of addresses.

In Example 3-6, for the vty lines 0 to 3, access list 5 is used. This access list allows access from a restricted set of IP addresses only. However, for the last vty line, line 4, the more-restrictive access list 6 is used. This helps prevent DoS attacks aimed at stopping Telnet access to the router for administrative purposes. Only one session to a vty port can occur at any given time. So an attacker can leave all the ports dangling at the login prompt, denying legitimate use. The restrictive access list on line 4 is an effort to keep at least the last vty line available in such an eventuality. Note that the command service tcp-keepalives-in can also be used to track such left-to-idle TCP sessions to the router. This command basically turns on a TCP keepalive mechanism for the router to use for all its TCP connections.
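Example 3-6 is not reproduced here; a sketch of the arrangement it describes follows. The address ranges are illustrative, and the log keyword causes matches against the access lists to be logged:

  service tcp-keepalives-in
  !
  access-list 5 permit 10.1.1.0 0.0.0.255 log
  access-list 6 permit host 10.1.1.5 log
  !
  line vty 0 3
   access-class 5 in
  line vty 4
   access-class 6 in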

It is also a good idea to set up logging for the access lists used to allow Telnet access.


Step 3. Set up short timeouts.

This is an important precaution needed to protect against Telnet DoS attacks, hijacking attacks, and Telnet sessions left unattended, consuming unnecessary resources. The command shown in Example 3-7 sets the timeout value to 5 minutes and 30 seconds. The default is 10 minutes.
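Example 3-7 is not reproduced here, but the configuration is simply:

  line vty 0 4
   exec-timeout 5 30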


Step 4. Set up authentication for vty access.

It is critical to have user authentication enabled for vty access. This can be done using local or RADIUS/TACACS authentication. Example 3-8 shows local authentication, but RADIUS/TACACS is a more scalable method of setting this up. See Chapters 16 and 19 for more examples of how to use the AAA commands to achieve scalable security.
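Example 3-8 is not reproduced here; a sketch of local authentication follows (the username and password are illustrative):

  username admin secret n0tTh3R3alPw
  line vty 0 4
   login local

On images that do not support the secret keyword for usernames, the password keyword can be used instead, at the cost of weaker protection.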


Controlling TTY Access

A lot of effort spent controlling access through the vty lines can go to waste if access through the TTY lines is not controlled as well. The TTY lines are harder to use to gain access, because they generally require some form of physical connectivity. However, dialing in to a modem hooked to a router's aux port, or using reverse Telnet to reach the console port of a router connected to a terminal server, are both methods still used to gain illegitimate access to routers without physical proximity.

Some of the methods that can be used on the vty ports to control access, such as using access lists, cannot be used on TTY lines. However, some other techniques, such as user authentication and disabling protocol access using the transport command, are still valid and can be set up in a fashion similar to how vty configurations are done.

If appropriate, the use of TTY lines remotely via reverse Telnet should be disabled. You can do this using the command shown in Example 3-9.
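Example 3-9 is not reproduced here; a sketch of locking down the aux line in this way follows:

  line aux 0
   transport input none
   no exec
   exec-timeout 0 1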

Starting in Cisco IOS version 12.2(2)T, you can access a router's console port using the SSH protocol. This is an important feature, because it provides much greater security. Example 3-10 shows how this is set up. Note that a separate rotary group needs to be defined for each line that will be accessed via SSH. See the next section for the rest of the commands needed to allow a router to act as an SSH server and accept connections.
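Example 3-10 is not reproduced here; the following is a sketch of the configuration on the terminal server whose async line connects to the router's console port. The SSH port number and line number are illustrative:

  ip ssh port 2001 rotary 1
  !
  line 1
   no exec
   rotary 1
   transport input ssh

An SSH connection to port 2001 of the terminal server then lands on line 1 and, through it, on the attached console.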