Network address translation (NAT) is the mechanism by which a packet's IP addresses are modified to be something other than what they originally were. NAT is a requirement for networks that use the RFC 1918 addressing scheme. Because these IP addresses cannot be routed on the Internet, they must be converted to routable IP addresses at the edge of the network before packets are passed to a public network such as the Internet. Because NAT can hide a network's IP addresses, it offers some amount of security to the network on which it is set up. However, you cannot depend solely on NAT for security. This chapter discusses the security benefits of running NAT on the network's periphery and then explains why depending solely on NAT for protection can be a dangerous choice.
While there are a few reasons for using NAT, the primary reason networks use RFC 1918 addressing is to reduce IP address consumption: routable IP addresses are expensive and limited in number. A specific form of NAT called Overload NAT, also known as Port Address Translation (PAT), provides a useful solution to this problem. Overload NAT works differently from normal one-to-one NAT, in which each RFC 1918 address is converted to its own unique routable IP address. With Overload NAT, the RFC 1918 addresses are translated to a small number of routable IP addresses (often just one, frequently the address of the router's external interface).
The device doing PAT distinguishes between the traffic destined for the various RFC 1918 addresses by tracking the source TCP or UDP ports used when the connection is initiated. Figure 6-1 shows how PAT works.
NAT is sometimes confused with proxy servers, but these are two completely different entities. NAT is a Layer 3 mechanism that uses Layer 4 information when performing PAT, whereas proxy servers usually operate at Layer 4 or higher of the OSI model. The most significant difference between the two is transparency. NAT is completely transparent to the source and destination devices, while proxy servers require the source machine to be configured to make requests to the proxy server, which then facilitates the connection on the source machine's behalf. With NAT, neither the source nor the destination machine needs to know about the device doing the translation.
Security Benefits of Network Address Translation
NAT used in PAT mode provides a measure of security for the network whose private addresses it translates.
To understand this, assume that the device doing the NAT is a router that is sitting on the edge of the network, with one interface connected to the RFC 1918 private network and another interface connected to the Internet. When a device sitting behind the router wants to go out to the Internet, it sends packets to the router. The router then translates the source address into a globally routable address and puts the source IP address and source TCP or UDP port number in its NAT tables.
When the reply packets arrive at the router, destined for the globally routable IP address, the router looks at the destination port number (remember, the source and destination port numbers are flipped in the return packet). Based on a lookup of its NAT table, the router determines which RFC 1918 address to send the packet to. It changes the destination address to the private address found in its NAT table and forwards the packet appropriately.

The important point to realize in this operation is that the router can translate the address back only if it has an entry for the destination port number in its NAT table. If for some reason that entry were cleared out and the router received a packet destined for the globally routable address, it would simply discard the packet, because it would not know where to send it. This property is at the crux of PAT's secure nature. Unless a NAT (PAT) entry containing the port number and the private-to-global address mapping exists in the router's NAT table, the router does not forward any packets to the RFC 1918 network. Therefore, any connections not initiated from the inside are not allowed through the PAT device. This is a significant measure of security. However, the next section explains why this is not the type of security you can really rely on.
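The behavior just described can be sketched as a toy model in Python. This is purely illustrative (the table structure, addresses, and starting port number are invented, not a real router implementation): outbound packets create a translation entry keyed by the allocated global port, and return packets are forwarded only if a matching entry exists.

```python
# Minimal illustrative model of PAT (Overload NAT). All values are invented.

GLOBAL_IP = "203.0.113.1"          # the router's single routable address

class PatTable:
    def __init__(self):
        self.out = {}              # (private_ip, private_port) -> global_port
        self.back = {}             # global_port -> (private_ip, private_port)
        self.next_port = 20000     # next global source port to allocate

    def translate_outbound(self, src_ip, src_port):
        """Create (or reuse) a translation for an inside host's connection."""
        key = (src_ip, src_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return GLOBAL_IP, self.out[key]

    def translate_inbound(self, dst_port):
        """Return the private address for a reply, or None (packet dropped)."""
        return self.back.get(dst_port)

pat = PatTable()
g_ip, g_port = pat.translate_outbound("10.1.1.5", 51000)   # inside host connects out
reply_dest = pat.translate_inbound(g_port)                 # reply matches the entry
unsolicited = pat.translate_inbound(4242)                  # no entry: dropped
```

The `None` result for the unsolicited packet is the whole security story: with no table entry, the device has nowhere to forward the packet and silently drops it.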
Disadvantages of Relying on NAT for Security
Although NAT provides some level of protection to the networks sitting behind it, it is important to understand that it is by no means a comprehensive security solution.
The following sections outline some of the most obvious reasons why NAT should not be considered a security mechanism, despite the illusion of security it provides.
No Tracking of Protocol Information Other Than the Port Number Information
The NAT table, which the NAT device creates to track outgoing and incoming connections, does not track any of the other information contained in the packets. Information such as packet sequence numbers, the state of the TCP handshake, and UDP progress-based timers is tracked by most firewalls to prevent attackers who spoof IP addresses from using connections established through the firewall. NAT tracks none of this information, because it does not need it to create and maintain NAT translations. Chapter 8, "PIX Firewall," contains a detailed discussion of the algorithm that PIX Firewall uses to provide security.
No Restriction on the Type of Content Flowing Based on PAT Tables
NAT also does not concern itself with protecting the hosts from malicious data being sent on the NAT connections established by the hosts themselves. You can only protect your network from such malicious content by having a firewall and an intrusion detection system in place.
Limited Control on Initial Connections
NAT does not have any real control over who can initiate connections from the inside network to the outside network. Although an access list can be configured to define which hosts can initiate NAT connections, this is a rudimentary measure at best.
By using route maps and extended access control lists, you can put further restraints on what traffic can be NATed. However, this is not the ideal way to restrict traffic. It is difficult to implement with the same granularity as standard access control mechanisms, and it can be resource-intensive for the router.
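As an illustration of how shallow this control is, the fragment below sketches a typical IOS overload NAT configuration in which a standard access list decides which inside sources may be translated. The interface names and addresses are invented examples, not a verified configuration:

```
! Mark the inside and outside interfaces (names are examples)
interface FastEthernet0/0
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface Serial0/0
 ip nat outside
!
! Only hosts matching access list 10 are eligible for translation
access-list 10 permit 10.1.1.0 0.0.0.255
ip nat inside source list 10 interface Serial0/0 overload
```

The access list gates only who may create a translation; it says nothing about what flows over the resulting session, which is why it cannot substitute for a firewall policy.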
In light of this, NAT is a useful mechanism for increasing the available IP address space, and it can be a convenient tool in other aspects of network design, but it should not be relied on to provide security. When used on a security device such as a firewall in conjunction with other security features, NAT definitely enhances the security the firewall provides. Used in isolation, however, it offers only a limited measure of security, which must always be supplemented with tools and products designed specifically with security in mind, such as firewalls and intrusion detection systems.
Secure LAN Switching
Port Authentication and Access Control Using the IEEE 802.1x Standard
802.1x is the IEEE standard that provides a mechanism for authenticating devices that connect to Layer 2 devices, such as switches, over IEEE 802 LAN infrastructures (such as Token Ring and Ethernet).
The primary idea behind the standard is that devices needing access to the LAN must be authenticated and authorized before they can connect to the physical or logical port of the switch that creates the LAN environment. In the case of Ethernet and Token Ring, the ports are physical entities that a device plugs into. In setups such as IEEE 802.11b wireless, however, the ports are logical entities known as associations. In either case, the standard's primary goal is to allow controlled access to the LAN environment.
802.1x Entities
The 802.1x standard defines the following three main entities that take part in the access control method set up in this standard:
- Supplicant— The device that needs to access the LAN. An example is a laptop that needs to connect to a LAN.
- Authenticator— The device responsible for initiating the authentication process and then acting as a relay between the actual authentication server and the supplicant. This device is generally also responsible for the overall workings of the LAN. An example of this type of device is a Catalyst 6000 switch to which various supplicants can connect and be authenticated and authorized via the 802.1x standard before being allowed to use the ports on the switch for data traffic.
- Authentication server— The device responsible for doing the actual authentication and authorization on behalf of the authenticator. It contains profile information for all the network's users in a database format and can use that information to authenticate and authorize users to connect to the ports on the authenticator. An example of an authentication server is the Cisco Secure ACS.

In addition to these three main entities, the 802.1x standard defines some other entities as well. One of these is the Port Access Entity (PAE). The PAE is responsible for maintaining the functionality of the 802.1x standard on the authenticator, the supplicant, or both. It can be viewed as the daemon responsible for the functioning of the 802.1x standard. For our purposes, we will assume that this entity is transparent to the network administrator as we discuss various aspects of this standard.
802.1x Communications
In order for the 802.1x standard to function, communication needs to occur between the three entities just defined. The 802.1x protocol uses an IETF standard known as the Extensible Authentication Protocol (EAP) to facilitate this communication. The authentication data exchanged between the three entities travels in EAP packets, which are carried either in EAPOL frames (between the supplicant and the authenticator, as discussed later) or in TACACS+, RADIUS, or similar protocol packets (between the authenticator and the authentication server). The following sections look at each of these pieces and discuss how they come together to form the 802.1x communication infrastructure.
EAP
EAP is a fairly flexible protocol. It was originally designed to carry only PPP authentication parameters, but it also can be used by other protocols such as 802.1x for their authentication needs.
EAP can carry authentication data between two entities that want to set up authenticated communications with each other. It supports a variety of authentication mechanisms, including one-time passwords, MD5-hashed usernames and passwords, and Transport Layer Security (discussed later). Using the packets described in the next section, EAP allows the authenticator, the supplicant, and the authentication server to exchange the information needed to authenticate the supplicant.
RFC 2284 defines the EAP packet format as shown in Figure 5-4.
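As a rough sketch of that format, the header defined in RFC 2284 is just four bytes: a Code (1 = Request, 2 = Response, 3 = Success, 4 = Failure), an Identifier used to match responses to requests, and a two-byte Length covering the entire packet, followed by the data. Parsing it takes only a few lines; the Python below is illustrative and not tied to any real implementation:

```python
import struct

EAP_CODES = {1: "Request", 2: "Response", 3: "Success", 4: "Failure"}

def parse_eap(packet: bytes):
    """Parse an EAP packet per RFC 2284: Code, Identifier, Length, Data."""
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    if length > len(packet):
        raise ValueError("EAP length field exceeds packet size")
    return {
        "code": EAP_CODES.get(code, "Unknown"),
        "identifier": identifier,
        "data": packet[4:length],   # the Length field covers the 4-byte header too
    }

# An EAP-Request/Identity packet: code 1, id 1, length 5, type byte 1 (Identity)
pkt = struct.pack("!BBHB", 1, 1, 5, 1)
parsed = parse_eap(pkt)
```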
EAPOL
We have looked at EAP, the protocol underlying 802.1x, but not at how EAP messages are actually framed and transported between the supplicant and the authenticator. The 802.1x standard defines an encapsulation/framing mechanism, known as EAP over LANs (EAPOL), to allow this communication to take place. EAPOL encapsulation is defined separately for the Token Ring and Ethernet environments. EAP messages travel between the supplicant and the authenticator inside EAPOL frames. As soon as these frames reach the authenticator, it strips off the EAPOL header, places the EAP packet in a RADIUS or TACACS+ (or similar) packet, and sends it to the authentication server. Figure 5-6 shows the relationship between the supplicant and the authenticator using EAPOL.
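The encapsulation itself is thin. An EAPOL frame (Ethernet type 0x888E) carries a one-byte protocol version, a one-byte packet type (type 0 means the body is an ordinary EAP packet; types 1 and 2 are EAPOL-Start and EAPOL-Logoff), and a two-byte body length, followed by the EAP packet. A hedged sketch of the wrap and unwrap steps the authenticator performs:

```python
import struct

EAPOL_ETHERTYPE = 0x888E    # Ethernet type assigned to EAPOL
EAPOL_TYPE_EAP_PACKET = 0   # packet type 0: body carries an EAP packet

def eapol_encapsulate(eap_packet: bytes, version: int = 1) -> bytes:
    """Wrap an EAP packet in an EAPOL header (version, type, body length)."""
    header = struct.pack("!BBH", version, EAPOL_TYPE_EAP_PACKET, len(eap_packet))
    return header + eap_packet

def eapol_decapsulate(frame_body: bytes) -> bytes:
    """Strip the EAPOL header, returning the EAP packet for relaying onward."""
    version, ptype, length = struct.unpack("!BBH", frame_body[:4])
    if ptype != EAPOL_TYPE_EAP_PACKET:
        raise ValueError("not an EAP-carrying EAPOL frame")
    return frame_body[4:4 + length]

# A fabricated EAP-Response (code 2, id 7, length 9) carrying "alice"
eap = bytes([2, 7, 0, 9]) + b"alice"
round_trip = eapol_decapsulate(eapol_encapsulate(eap))
```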
802.1x Functionality
This section puts together all the pieces of the 802.1x protocol discussed in the preceding sections and summarizes how 802.1x provides port authentication to supplicants.
The 802.1x functionality is based on a series of exchanges between the supplicant, authenticator, and authentication server. The authenticator plays an important role in these exchanges because it not only acts as a go-between for the supplicant and the authenticating server but also is responsible for enabling the port to which the supplicant is trying to connect for normal data traffic if the authentication is indeed successful.
The authentication process starts with the supplicant trying to connect to one of the ports on the authenticator. At this point, the port is open only for EAPOL traffic. The authenticator sees the port's operational state change to enabled when the supplicant connects and requests authentication from the supplicant by sending it an EAP-Request/Identity message, encapsulated in an EAPOL frame. The supplicant responds with an EAP-Response/Identity frame containing information about its identity, such as its username. This message is also sent encapsulated in an EAPOL frame.

The authenticator decapsulates the EAP message from the EAPOL frame, repackages it in a RADIUS or TACACS+ packet, and forwards it to the authentication server. Upon receiving this packet, the authentication server responds with an EAP message based on the authentication method it wants to use for this particular supplicant, encapsulated in a TACACS+ or RADIUS packet. Upon receiving this message, the authenticator strips off the TACACS+/RADIUS header, encapsulates the EAP message in an EAPOL frame, and forwards it to the supplicant. This back-and-forth EAP exchange between the supplicant and the authentication server via the authenticator continues until the authentication either succeeds or fails, as indicated by an EAP-Success or EAP-Failure message sent by the authentication server to the supplicant.

Upon seeing an EAP-Success message, the authenticator enables the port to which the supplicant is connected for normal data traffic. In addition, the authenticator can place the port in a specific VLAN based on information it receives from the authentication server. Figure 5-9 shows the overall 802.1x architecture and flow using EAP over EAPOL and EAP over TACACS+/RADIUS.
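The authenticator's relay role in this exchange can be summarized with a toy simulation. Everything below is invented for illustration (a real authenticator speaks EAPOL on one side and RADIUS or TACACS+ on the other); the point is simply that the port stays closed to data traffic until the server's verdict arrives:

```python
# Toy model of the 802.1x message flow. All names and checks are invented.

class AuthServer:
    """Accepts a set of known identities; replies with Success or Failure."""
    def __init__(self, allowed):
        self.allowed = allowed

    def check(self, identity):
        return "EAP-Success" if identity in self.allowed else "EAP-Failure"

class Authenticator:
    """Relays between supplicant and server; opens the port only on success."""
    def __init__(self, server):
        self.server = server
        self.port_state = "eapol-only"   # only EAPOL passes before auth

    def authenticate(self, identity):
        # 1. EAP-Request/Identity is sent to the supplicant (implicit here).
        # 2. The supplicant's EAP-Response/Identity is relayed to the server.
        verdict = self.server.check(identity)
        # 3. On EAP-Success, the port is enabled for normal data traffic.
        if verdict == "EAP-Success":
            self.port_state = "authorized"
        return verdict

switch = Authenticator(AuthServer(allowed={"alice"}))
result = switch.authenticate("alice")
```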
Secure LAN Switching
Protocol Filtering and Controlling LAN Floods
Attackers can cause broadcast floods to disrupt communications over the LAN. You saw an example of this in the section "MAC Address Floods and Port Security." Therefore, it is important to control flooding on the switches. There are two main ways to do this:
- Set up threshold limits for broadcast/multicast traffic on ports
- Use protocol filtering to limit broadcasts/multicasts for certain protocols
Catalyst switches allow thresholds for broadcast traffic to be set up on a per-port basis. These thresholds can be set up either in terms of bandwidth consumed by broadcasts on a port or in terms of the number of broadcast packets being sent across a port. It is best to use the first method in most cases, because it is done in hardware and also because variable-length packets can render the second method meaningless.
The following command sets the threshold for broadcast and multicast packets on ports 1 to 6 of module 2 to 75 percent. As soon as broadcast/multicast traffic consumes 75 percent of a port's bandwidth within a given 1-second period, all additional broadcast/multicast traffic on that port is dropped for the remainder of that period.
Console> (enable) set port broadcast 2/1-6 75%
Protocol filtering provides another very useful mechanism for isolating and controlling environments that are susceptible to flooding attacks. Using the protocol-filtering feature on Catalyst switches, you can define protocol groups. Each group has certain protocols associated with it. It also has a set of ports that belong to it. Only the broadcast or multicast traffic for the protocols associated with a group is allowed to be sent to the ports that belong to that group. You should realize that although VLANs also create broadcast domains for the ports associated with them, protocol-filtering groups allow these domains to be created based on various protocols as well. Using protocol filtering, ports that have hosts on them that do not need to participate in the broadcast traffic for a certain protocol can be made part of a group that does not allow broadcast traffic for that protocol.
With the Catalyst 5000 family of switches, packets are classified into the following protocol groups:
- IP (ip)
- IPX (ipx)
- AppleTalk, DECnet, and Banyan VINES (group)
- Packets not belonging to any of these protocols
Each port's membership in a protocol group can be configured in one of the following three modes:
- On— The port receives flood traffic for the group's protocols.
- Off— The port does not receive flood traffic for the group's protocols.
- Auto— The port joins the group only after it has transmitted packets of that protocol.
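On CatOS, protocol filtering is enabled globally and then applied per port. The fragment below is a sketch of the command syntax (the module/port numbers are invented examples); it keeps IPX flood traffic off ports 2/1-4 while still delivering IP floods to them:

```
Console> (enable) set protocolfilter enable
Console> (enable) set port protocol 2/1-4 ip on
Console> (enable) set port protocol 2/1-4 ipx off
```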
Private VLANs on the Catalyst 6000
The Catalyst 6000 product line has introduced some enhancements to the switching arena for security purposes. We will discuss some of these in this section and see how they can be a useful security element in Layer 2 design.
A normal VLAN does not allow the devices connected to it to be segregated from each other at Layer 2. This means that if a device on a VLAN is compromised, other devices on the same VLAN can be attacked from that compromised device.
Private VLANs allow restrictions to be placed on the Layer 2 traffic on a VLAN.
There are three types of private VLAN ports:
- Promiscuous ports— Communicate with all other private VLAN ports. The promiscuous port is generally the one used to communicate with the router or gateway on a segment.
- Isolated ports— Have complete Layer 2 isolation from all other ports within the same private VLAN, with the exception of the promiscuous port.
- Community ports— Communicate among themselves and with their promiscuous ports. These ports are isolated at Layer 2 from all ports in other communities and from isolated ports within their private VLAN.
When a port is isolated, all machines connected to the network using this port are provided complete isolation from traffic in all other ports, except for the promiscuous port. This means that no machines located on any of the other ports on the switch can send any traffic to the machines located on the isolated VLAN port. It is similar to placing two ports in two separate VLANs. The isolated ports communicate with the rest of the world through the promiscuous VLAN port, which can send traffic to and receive traffic from the isolated VLAN ports. Figure 5-2 gives a graphical view of which ports can communicate with which other ports in a private VLAN setup.
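A setup along these lines might be configured on a Catalyst 6000 running CatOS roughly as follows. Treat this as a sketch of the command set rather than a verified configuration: the VLAN numbers and ports are invented, VLAN 41 is the primary VLAN, VLAN 42 is an isolated VLAN bound to it, ports 3/1-8 are isolated, and port 3/24 is the promiscuous port facing the gateway:

```
Console> (enable) set vlan 41 pvlan-type primary
Console> (enable) set vlan 42 pvlan-type isolated
Console> (enable) set pvlan 41 42 3/1-8
Console> (enable) set pvlan mapping 41 42 3/24
```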
ARP Spoofing, Sticky ARP, and Private VLANs
A security problem that private VLANs resolve is ARP spoofing. Network devices often send out what is known as a gratuitous ARP (or courtesy ARP) to let other machines on their broadcast domain know their IP address and the corresponding MAC address. This generally happens at bootup, but it can also occur at regular intervals after that. An attacker who has gained access to a compromised machine on the LAN can force that machine to send out gratuitous ARPs for IP addresses that do not belong to it. This causes the rest of the machines to send frames intended for those IP addresses to the compromised machine. This type of attack can have two consequences:
- It can result in a DoS attack if the attacker spoofs the IP address/MAC address of the network's default gateway in its gratuitous ARPs. This causes all the machines on the broadcast domain to send the traffic destined for the default gateway to the compromised host, which in turn can simply drop this traffic, resulting in a DoS.
- The attacker can analyze the traffic being sent to it and use the information found therein for various malicious activities.
Private VLANs offer protection from this type of attack by isolating the various ports on a VLAN from one another. An attacker who controls a compromised machine on one port can no longer receive traffic sent by the machines on the switch's other ports.
Another feature, known as sticky ARP, which is available in conjunction with private VLANs, can also help mitigate these types of attacks. Sticky ARP ensures that the ARP entries the switch learns on the private VLANs do not age out and cannot be changed. Suppose an attacker compromises and takes control of a machine on a private VLAN and attempts ARP spoofing, sending out gratuitous ARPs that announce the machine as the owner of a MAC address/IP address mapping it does not own. The switch ignores these ARPs and does not update its ARP entries to reflect the new mappings. If there is a genuine need to change a mapping, the administrator must do so manually.
Secure LAN Switching
In order to provide comprehensive security on a network, it is important to take the concept of security to the last step and ensure that the Layer 2 devices, such as the switches that manage the LANs, are also operating in a secure manner.
This chapter focuses on the Cisco Catalyst 5000/5500 series switches. We will discuss private VLANs in the context of the 6000 series switches. Generally, similar concepts can be implemented in other types of switches (such as the 1900, 2900, 3000, and 4000 series switches) as well.
Security on the LAN is important because some security threats can be initiated at Layer 2 rather than at Layer 3 and above. One example is an attack in which a compromised server on a DMZ LAN is used to connect to another server on the same segment despite the access control lists on the firewall connected to the DMZ. Because the connection occurs at Layer 2, this type of access attempt cannot be blocked without suitable measures to restrict traffic at that layer.
General Switch and Layer 2 Security
Several basic rules should be kept in mind when setting up a secure Layer 2 switching environment.
Generally, it is difficult to protect against attacks launched from hosts sitting on a LAN. These hosts are often considered trusted entities. As such, if one of these hosts is used to launch an attack, it becomes difficult to stop it. Therefore, it is important to make sure that access to the LAN is secured and is provided only to trusted people.
Some of the features we will discuss in the upcoming sections show you ways to further secure the switching environment.
The discussion in this chapter revolves around the use of Catalyst 5xxx and 6xxx switches. The same principles can be applied to setting up security on other types of switches.
Port Security
Port security is a mechanism available on the Catalyst switches to restrict which MAC addresses can connect through a particular port of the switch. The feature lets you define a specific MAC address, or a range of MAC addresses, for a particular port. A port set up for port security allows only machines whose MAC address belongs to the range configured on it to connect to the LAN. The port compares the source MAC address of any frame arriving on it with the addresses in its allowed list. If the address matches, the port lets the frame through, assuming that all other requirements are met. If the MAC address does not belong to the configured list, the port can either simply drop the frame (restrictive mode) or shut itself down for a configurable amount of time. The feature also lets you specify the number of MAC addresses that can connect to a certain port.
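On CatOS, port security is enabled per port. The fragment below is a hedged sketch of the command syntax (the port number and MAC address are invented examples); it allows at most one known MAC address on port 2/1 and drops offending frames rather than shutting the port down:

```
Console> (enable) set port security 2/1 enable
Console> (enable) set port security 2/1 mac-address 00-90-2b-03-34-08
Console> (enable) set port security 2/1 maximum 1
Console> (enable) set port security 2/1 violation restrict
```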
MAC Address Floods and Port Security
Port security is especially useful in the face of MAC address flooding attacks. In these attacks, an attacker tries to fill up a switch's CAM tables by sending a large number of frames to it with source MAC addresses that the switch is unaware of at that time. The switch learns about these MAC addresses and puts them in its CAM table, thinking that these MAC addresses actually exist on the port on which it is receiving them. In reality, this port is under the attacker's control and a machine connected to this port is being used to send frames with spoofed MAC addresses to the switch. If the attacker keeps sending these frames in a large-enough quantity, and the switch continues to learn of them, eventually the switch's CAM table becomes filled with entries for these bogus MAC addresses mapped to the compromised port.
Under normal operations, when a machine receiving a frame responds to it, the switch learns that the MAC address associated with that machine sits on the port on which it received the response frame. It puts this mapping in its CAM table, allowing it to send future frames destined for this MAC address directly to that port rather than flood all the ports on the VLAN. However, when the CAM table is full, the switch cannot create this CAM entry. From that point on, whenever the switch receives a legitimate frame whose destination address has no CAM entry, it floods the frame to all the connected ports belonging to the VLAN on which the frame arrived. This causes two main problems: the constant flooding consumes bandwidth and switch resources, resulting in a denial of service, and the attacker receives a copy of the flooded traffic, which he can analyze.
Figure 5-1 shows how MAC address flooding can cause CAM overflow and subsequent DoS and traffic analysis attacks.
Figure 5-1 shows a series of steps that take place to orchestrate a MAC address flooding attack. Given below is the list of steps that takes place as shown in the Figure 5-1:
Step 1. A compromised machine is attached to port 4. Frames sourced from fictitious MAC address denoted by G, H, E and F etc. are sent on the port 4. The actual MAC address of the compromised machine is denoted by D.
Step 2. Due to the flooding of frames on port 4, the CAM table of the switch fills up and it is unable to 'learn' any more MAC address and port mappings.
Step 3. A host situated on port 1 with a MAC address denoted by A, sends a frame sourced from the MAC address A to MAC address B. The switch is unable to learn and associate port 1 with the MAC address A since its CAM table is full.
Step 4. Host on port 3 with a MAC address denoted by C sends a frame to MAC address A. Since the switch does not have an entry in its CAM table for A, it floods the frame to all its ports in that VLAN. This results in flooding causing DOS as well as an opportunity for traffic analysis by the attacker who receives the flooded frames on port 4 as well.
IP Permit Lists
IP permit lists are used to restrict Telnet, SSH, HTTP, and SNMP traffic from entering the switch. This feature allows IP addresses to be specified that are allowed to send these kinds of traffic to the switch.
The configuration shown in Example 5-3 on a switch enables the ip permit list feature and then restricts Telnet access to the switch from the 172.16.0.0/16 subnet and SNMP access from 172.20.52.2 only. The host, 172.20.52.3, is allowed to have both types of access to the switch.
IP permit lists are an essential feature to configure on a switch in situations where Layer 3 access to the switch is needed. As stated earlier, Layer 3 access to a switch should remain fairly limited and controlled.
This chapter focuses on the Cisco Catalyst 5000/5500 series switches. We will discuss private VLANs in the context of the 6000 series switches. Generally, similar concepts can be implemented in other types of switches (such as the 1900, 2900, 3000, and 4000 series switches) as well.
Security on the LAN is important because some security threats can be initiated at Layer 2 rather than at Layer 3 and above. One example is an attack in which a compromised server on a DMZ LAN is used to connect to another server on the same segment, despite access control lists on the firewall connected to the DMZ. Because the connection occurs at Layer 2, this type of access attempt cannot be blocked without suitable measures to restrict traffic at this layer.
General Switch and Layer 2 Security
Some of the basic rules to keep in mind when setting up a secure Layer 2 switching environment are as follows:
- VLANs should be set up in ways that clearly separate the network's various logical components from each other. VLANs lend themselves to providing segregation between logical workgroups. This is a first step toward segregating portions of the network needing more security from portions needing less security. It is important to have a good understanding of what VLANs are: a VLAN is a logical grouping of devices that might or might not be physically located close to each other.
- If some ports are not being used, it is prudent to turn them off as well as place them in a special VLAN used to collect unused ports. This VLAN should have no Layer 3 access.
- Although devices on a particular VLAN cannot access devices on another VLAN unless specific mechanisms for doing so (such as trunking or a device routing between the VLANs) are set up, VLANs should not be used as the sole mechanism for providing security to a particular group of devices on a VLAN. VLAN protocols are not constructed with security as the primary motivator behind them. The protocols that are used to establish VLANs can be compromised rather easily from a security perspective and allow loopholes into the network. As such, other mechanisms such as those discussed next should be used to secure them.
- Because VLANs are not a security feature, devices at different security levels should be isolated on separate Layer 2 devices. For example, having the same switch chassis on both the inside and outside of a firewall is not recommended. Two separate switches should be used for the secure and insecure sides of the firewall.
- Unless it is critical, Layer 3 access such as Telnet and HTTP connections to a Layer 2 switch should be tightly restricted.
- It is important to make sure that trunking does not become a security risk in the switching environment. Ports that do not require trunking should have trunking, as well as Dynamic Trunking Protocol (DTP) negotiation, turned off. An attacker can use trunking to hop from one VLAN to another by pretending to be another switch speaking ISL or 802.1q signaling along with DTP; this allows the attacker's machine to become a part of all the VLANs on the switch being attacked. It is also a good idea to use dedicated VLAN IDs for all trunks rather than VLAN IDs that are also in use on nontrunking ports. Otherwise, frames from the trunk port can erroneously reach other ports located in the same VLAN, and an attacker can rather easily make itself part of a trunking VLAN and then use trunking to hop onto other VLANs as well.
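The trunking and unused-port precautions above can be sketched in IOS-style interface configuration. The interface names and the parking VLAN ID are illustrative assumptions, and exact syntax varies by platform and software release:

```
! Port that should never become a trunk: force access mode and disable DTP
interface FastEthernet0/10
 switchport mode access
 switchport nonegotiate
!
! Unused port: place it in a dedicated parking VLAN that has no
! Layer 3 access, then shut it down
interface FastEthernet0/11
 switchport access vlan 999
 shutdown
```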
Generally, it is difficult to protect against attacks launched from hosts sitting on a LAN. These hosts are often considered trusted entities. As such, if one of these hosts is used to launch an attack, it becomes difficult to stop it. Therefore, it is important to make sure that access to the LAN is secured and is provided only to trusted people.
Some of the features we will discuss in the upcoming sections show you ways to further secure the switching environment.
The discussion in this chapter revolves around the use of Catalyst 5xxx and 6xxx switches. The same principles can be applied to setting up security on other types of switches.
Port Security
Port security is a mechanism available on the Catalyst switches to restrict the MAC addresses that can connect via a particular port of the switch. This feature allows a specific MAC address or a range of MAC addresses to be defined for a particular port. A port set up for port security allows only machines with a MAC address belonging to the range configured on it to connect to the LAN. The port compares the source MAC address of any frame arriving on it with the MAC addresses in its allowed list. If the address matches, the port allows the frame to go through, assuming that all other requirements are met. However, if the MAC address does not belong to the configured list, the port can either simply drop the frame (restrictive mode) or shut itself down for a configurable amount of time. This feature also lets you specify the number of MAC addresses that can connect to a certain port.
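As a rough CatOS-style sketch of the mechanism just described, port security might be enabled as follows. The port number, MAC address count, and violation action are illustrative, and the exact command syntax varies by platform and software release:

```
! Enable port security on port 3/1, allow at most one learned MAC address,
! and drop (rather than shut the port down on) frames from any other source MAC
set port security 3/1 enable
set port security 3/1 maximum 1
set port security 3/1 violation restrict
```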
MAC Address Floods and Port Security
Port security is especially useful in the face of MAC address flooding attacks. In these attacks, an attacker tries to fill up a switch's CAM tables by sending a large number of frames to it with source MAC addresses that the switch is unaware of at that time. The switch learns about these MAC addresses and puts them in its CAM table, thinking that these MAC addresses actually exist on the port on which it is receiving them. In reality, this port is under the attacker's control and a machine connected to this port is being used to send frames with spoofed MAC addresses to the switch. If the attacker keeps sending these frames in a large-enough quantity, and the switch continues to learn of them, eventually the switch's CAM table becomes filled with entries for these bogus MAC addresses mapped to the compromised port.
Under normal operations, when a machine receiving a frame responds to it, the switch learns that the MAC address associated with that machine sits on the port on which it received the response frame. It puts this mapping in its CAM table, allowing it to send any future frames destined for this MAC address directly to this port rather than flooding all the ports on the VLAN. However, when the CAM table is full, the switch is unable to create this entry. From then on, whenever the switch receives a legitimate frame whose destination MAC address has no CAM entry, it floods the frame to all the connected ports belonging to the VLAN on which the frame arrived. This causes two main problems:
- Network traffic increases significantly due to the flooding done by the switch. This can result in a denial of service (DoS) for legitimate users of the switched network.
- The attacker can receive frames that are being flooded by the switch and use the information contained in them for various types of attacks.
Figure 5-1 shows how MAC address flooding can cause CAM overflow and subsequent DoS and traffic analysis attacks.
Figure 5-1 illustrates the series of steps that orchestrate a MAC address flooding attack:
Step 1. A compromised machine is attached to port 4. Frames sourced from fictitious MAC addresses, denoted G, H, E, F, and so on, are sent on port 4. The compromised machine's actual MAC address is denoted by D.
Step 2. Because of the flood of frames on port 4, the switch's CAM table fills up, and the switch is unable to learn any more MAC-address-to-port mappings.
Step 3. A host on port 1 with MAC address A sends a frame to MAC address B. The switch is unable to associate port 1 with MAC address A, because its CAM table is full.
Step 4. A host on port 3 with MAC address C sends a frame to MAC address A. Because the switch has no CAM entry for A, it floods the frame to all its ports in that VLAN. This flooding causes a DoS and also gives the attacker an opportunity for traffic analysis, because the flooded frames arrive on port 4 as well.
IP Permit Lists
IP permit lists restrict the Telnet, SSH, HTTP, and SNMP traffic that is allowed to enter the switch. This feature lets you specify the IP addresses that are allowed to send these kinds of traffic to the switch.
The configuration shown in Example 5-3 enables the IP permit list feature on a switch and then restricts Telnet access to the switch to the 172.16.0.0/16 subnet and SNMP access to 172.20.52.2 only. The host 172.20.52.3 is allowed both types of access to the switch.
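Example 5-3 itself is not reproduced here, but a CatOS configuration matching this description would look roughly as follows. Treat the exact keywords as a sketch to be checked against your CatOS release:

```
set ip permit enable
set ip permit 172.16.0.0 255.255.0.0 telnet
set ip permit 172.20.52.2 snmp
set ip permit 172.20.52.3 all
```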
IP permit lists are an essential feature to configure on a switch in situations where Layer 3 access to the switch is needed. As stated earlier, Layer 3 access to a switch should remain fairly limited and controlled.
Wednesday, February 24, 2010
Secure Routing
Unicast Reverse Path Forwarding
Unicast Reverse Path Forwarding (URPF) is a tool implemented on routers to thwart attempts to send packets with spoofed source IP addresses. A spoofed source IP address makes tracking the real source of an attack very difficult. For example, if site A is getting attacked with ICMP floods coming from a source IP address in the range 150.1.1.0/24, the only place for that site to look to stop this kind of attack is the network that contains the 150.1.1.0/24 subnet (site B). However, more than likely, the packets are actually coming from some other network (site C), often compromised too, that does not contain the 150.1.1.0/24 subnet. Other than tracking the source of the packets one hop at a time, the attacked entity has no way of determining this. In this situation, it would be great if site C's network administrators (and, ideally, the administrators of all other sites on the Internet) had a mechanism in place on their routers that prevented packets with source IP addresses outside the range belonging to their respective sites from going out.
URPF works by looking up the source IP address of any packet arriving inbound on a router interface in the router's routing table. Logically, if the source IP address belongs to the network behind the router and is not a spoofed address, the routing table contains an entry showing the router a way to reach that address via the interface on which the packet arrived. However, if the address is spoofed, there probably isn't such an entry, because the address does not lie behind the router but is stolen from some other network on the Internet (site B in our example). If the router does not find a matching route through the receiving interface when it does the lookup, it drops the packet.
One thing to note here is that URPF needs to have Cisco Express Forwarding (CEF) enabled on the router. URPF looks at the Forwarding Information Base (FIB) that is generated by CEF rather than looking directly at the routing table. This is a more efficient way of doing the lookup. Figure 4-2 demonstrates how URPF works.
Figure 4-2 shows two scenarios. In Scenario 1, a packet is allowed to pass through the router after it successfully passes the URPF check. In Scenario 2, a packet is dropped because it fails the URPF check. Let's look at each scenario separately, and in sequence:
Scenario 1:
1. The packet arrives on S0 with a source IP address of 90.1.1.15.
2. URPF does a reverse route lookup on the source IP address and finds that it can be routed back through S0.
3. URPF allows the packet to pass through.
Scenario 2:
1. The packet arrives on S1 with a source IP address of 90.1.1.19.
2. URPF does a reverse route lookup on the source IP address and finds that it can be routed back through S0, not S1.
3. Because the interface on which the packet arrived is not the same one through which it can be routed back, URPF causes the packet to be dropped.
Configuring URPF is fairly simple. However, you should be careful when choosing the right place to configure it. It should not be set up on routers that might have asymmetric routes.
Asymmetric routing is said to occur when the interface through which the router sends return traffic for a packet is not the interface on which the original packet was received. For example, if the original packet is received on interface X, the return traffic for it is sent out via interface Y. Although this might be a perfectly legitimate arrangement for a network, this situation is incompatible with URPF. The reason is that URPF assumes that all routing occurring on a router is symmetric. It drops any traffic received on the router for which the return path is not through the same interface as the one on which the traffic is being received.
Generally, the best place to apply URPF is on the edge of a network. The reason is that this allows URPF's antispoofing capabilities to be available to the entire network rather than just a component of it.
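A minimal sketch of enabling URPF on an edge interface follows. The interface name is an assumption; note that CEF must be turned on first, and that older IOS releases use the ip verify unicast reverse-path form shown here:

```
ip cef
!
interface Serial0
 ! Strict URPF: drop packets whose source address is not reachable
 ! back through this same interface
 ip verify unicast reverse-path
```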
Path Integrity
After routing protocols have been set up in a secure fashion, it is important to ensure that all traffic is routed along the paths that the routing protocols calculate as optimal. However, some IP features allow these routing decisions to be overridden. Two of the most important such features are ICMP redirects and IP source routing.
ICMP Redirects
ICMP redirects are a way for a router to let another router or host (let's call it A) on its local segment know that the next hop A is using to reach another host (B) is not optimal; in other words, the path should not go through this router. Instead, host A should send the traffic directly to the next hop on the optimal path to host B. Although the router forwards the first packet to the optimal next hop, it expects the sending host A to install a route in its routing table so that the next time A wants to send a packet to B, it sends it to the optimal next hop. If the router receives a similar packet again, it simply drops it.
Cisco routers send ICMP redirects when all the following conditions are met:
- The interface on which the packet comes into the router is the same interface on which the packet gets routed out.
- The subnet/network of the source IP address is the same subnet/network of the routed packet's next-hop IP address.
- The datagram is not source-routed.
- The router kernel is configured to send redirects.
Although redirects are a useful feature to have, a properly set-up network should not have much use for them. And it is possible for attackers to use redirects to change routing in ways that suit their purposes. So it is generally desirable to turn off ICMP redirects. By default, Cisco routers send ICMP redirects. You can use the interface subcommand no ip redirects to disable ICMP redirects.
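Applying this on a per-interface basis is straightforward (the interface name is illustrative):

```
interface FastEthernet0/0
 ! Stop this interface from generating ICMP redirect messages
 no ip redirects
```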
IP Source Routing
IP source routing is an IP feature that allows a user to set a field in the IP packet specifying the path he or she wants the packet to take. Source routing can be used to subvert the workings of normal routing protocols, giving attackers the upper hand. Although there are a few ways of using source routing, by far the most well-known is loose source and record route (LSRR), in which the sender defines one or more hops that the packet must go through to reach a destination.
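On Cisco routers, honoring of source-routed packets can typically be disabled with a single global command (a sketch; verify the behavior against your IOS release):

```
! Discard IP packets that carry source-routing options such as LSRR
no ip source-route
```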
Tuesday, February 9, 2010
Secure Routing
Building Security into Routing Design
In order to have a secure network, it is essential that you build security into how traffic flows in the network. Because routing protocols determine how traffic flows in the network, it is essential to make sure that the routing protocols are chosen and implemented in a manner that is in line with the security requirements of the network. Needless to say, a network with a secure routing architecture is less vulnerable to attacks and oversights than a network with a poorly designed routing structure. A properly designed routing infrastructure can also help reduce the downtime a network suffers during a network attack.
Route Filtering
Proper route filtering is important to any well-implemented network, and especially to a private network with routing links to the outside world. In these networks, route filtering should be used to filter out any bogus or undesired routes coming into the private network and to ensure that only the routes actually contained on the internal network are advertised. It is also important that the only networks advertised are those for which access from outside the private network is desired.
On any private network connected through an ISP to the Internet or a larger public network, the following routes should be filtered from entering the network in most situations (this filtering can also be carried out on the ISP routers):
- 0.0.0.0/0 and 0.0.0.0/8— Default and network 0 (unique and now historical properties)
- 127.0.0.0/8— Host loopback
- 192.0.2.0/24— TEST-NET, generally used for examples in vendor documentation
- 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16— RFC 1918 private addresses
- 169.254.0.0/16— End node auto-config for DHCP
- 224.0.0.0/3— Multicast addresses
Filters can also be set up to ensure that IP address blocks belonging to a private network are not allowed to be advertised back into the network from outside. This is a necessary precaution to protect the traffic intended for some of the hosts on the inside network from being routed somewhere unintended.
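The kind of ingress filtering of special-use routes described above can be sketched with an IOS prefix list applied to a BGP neighbor. The list is abridged, and the AS number and neighbor address are illustrative assumptions:

```
! Deny well-known special-use prefixes and anything more specific than them;
! permit everything else up to /24
ip prefix-list BOGONS seq 5  deny   0.0.0.0/0
ip prefix-list BOGONS seq 10 deny   0.0.0.0/8 le 32
ip prefix-list BOGONS seq 15 deny   127.0.0.0/8 le 32
ip prefix-list BOGONS seq 20 deny   10.0.0.0/8 le 32
ip prefix-list BOGONS seq 25 deny   172.16.0.0/12 le 32
ip prefix-list BOGONS seq 30 deny   192.168.0.0/16 le 32
ip prefix-list BOGONS seq 35 deny   169.254.0.0/16 le 32
ip prefix-list BOGONS seq 40 permit 0.0.0.0/0 le 24
!
router bgp 65001
 neighbor 192.0.2.1 prefix-list BOGONS in
```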
Route filtering can also be used to hide one piece of a network from another. This can be important in organizations that need varying amounts of security in different parts of the network. In addition to having firewalls and other authentication mechanisms in place, route filtering can also rule out the ability of machines in a less-secure network area to reach a more-secure area if they don't have a route to that portion of the network. However, route filtering should not be used as the sole network security measure.
Applying filtering correctly is important as well. It is a good practice to filter both incoming and outgoing routes. Ingress filtering ensures that routes not intended for a network are not flooded into it during erroneous or malicious activity on another network. Also, if something goes wrong in one part of the network, egress filtering can stop that problem from spreading to the rest of the network.
ISPs can also consider using what is called a net police filter, whereby no routes with prefixes more specific than /20 (or perhaps up to /24) are allowed to come in. This is often done to make sure that an attack cannot be staged on a large ISP's router by increasing the size of its routing tables. Routes more specific than /20 are often not needed by large ISPs.
Therefore, an ISP can filter out these routes to keep its routing table from getting out of control in terms of size. How specific a prefix a router should accept should be determined by a network administrator who understands what is necessary for the router to perform its functions properly.
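A net police filter of this sort might be sketched as a prefix list that accepts only routes of length /20 or shorter; the implicit deny at the end of the list discards anything more specific (AS number and neighbor address are illustrative assumptions):

```
ip prefix-list NET-POLICE seq 10 permit 0.0.0.0/0 le 20
!
router bgp 65001
 neighbor 192.0.2.1 prefix-list NET-POLICE in
```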
Convergence
Fast convergence is important for having a secure routing infrastructure. A network that is slow to converge takes longer to recover from network-disrupting attacks and thus aggravates problems. On an Internet-wide basis, slow convergence of BGP for interdomain routing can mean a considerable loss of revenue for a large number of people. However, even for a small network, slow convergence can mean loss of productivity for a significant number of people.
A slow-converging network is also liable to be more susceptible to a denial of service (DoS) attack. The loss of one or two nodes at a time, making the network take a long time to converge, could mean that a DoS attack confined to just one node actually spreads to the whole network.
At various points in this chapter, we will touch on convergence and how it can be improved. In general, a network's convergence speed can depend on a lot of factors, including the complexity of the network architecture, the presence of redundancy in the network, the parameters set up for route calculation engines on the various routers, and the presence of loops in the network. The best way to improve convergence speed is for the network administrator to thoroughly understand the workings of the network and then to improve its convergence speed by designing the network around the aspect of faster convergence.
Static Routes
Static routes are a useful means of ensuring security in some circumstances. Static routes might not scale to all situations, but where they do, they can be used to hard code information in the routing tables such that this information is unaffected by a network attack or occurrences on other parts of the network. Static routes are also a useful way to define default route information.
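For example, a static route pinning a sensitive subnet to a known next hop, plus a hard-coded default route, might look like this (all addresses are illustrative assumptions):

```
! Traffic for 192.168.10.0/24 is always sent via 10.1.1.2, regardless of
! what any routing protocol advertises
ip route 192.168.10.0 255.255.255.0 10.1.1.2
! Hard-coded default route toward the upstream provider
ip route 0.0.0.0 0.0.0.0 203.0.113.1
```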
Router and Route Authentication
The need for router and route authentication and route integrity arises from the risk of an attacker configuring his or her machine or router to share incorrect routing information with a router that is part of the network being attacked. The attacked router can be tricked not only into sending data to the wrong destination but, through clever maneuvering, can be put out of commission entirely. Routing changes can also be induced simply to redirect traffic to a place in the network that is convenient for the attacker to analyze it. This can allow the attacker to identify patterns of traffic and obtain information not intended for him or her.
An example of such an attack occurs in RIP environments, where bogus RIP route advertisements can be sent out on a segment. Routers running RIP readily accept these updates into their routing tables unless an authentication mechanism is in place to verify the source of the routes.
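As a sketch of the countermeasure for this RIP scenario, MD5 authentication can be enabled for RIPv2 using a key chain. The key chain name, key string, and interface are illustrative assumptions:

```
key chain RIP-KEYS
 key 1
  key-string example-secret
!
interface FastEthernet0/0
 ! Authenticate RIPv2 updates on this interface with an MD5 hash
 ip rip authentication key-chain RIP-KEYS
 ip rip authentication mode md5
```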
Another issue that prompts router authentication, especially in BGP, is the fear of an attack wherein a rogue router acting as a BGP speaker and neighbor advertises a lot of specific routes into a core router's routing table, causing the router to stop functioning properly due to the greatly increased size of its routing table.
There are two main ways in which Cisco routers provide security in terms of exchanging routing information between routers: plain-text password authentication and MD5-based cryptographic authentication of routing updates.
In order to have a secure network, it is essential that you build security into how traffic flows in the network. Because routing protocols determine how traffic flows in the network, it is essential to make sure that the routing protocols are chosen and implemented in a manner that is in line with the security requirements of the network. Needless to say, a network with a secure routing architecture is less vulnerable to attacks and oversights than a network with a poorly designed routing structure. A properly designed routing infrastructure can also help reduce the downtime a network suffers during a network attack.
Route Filtering
Proper route filtering is important to any well-implemented network. It is especially important in a private network with routing links to the outside world. It is important in these networks to ensure that route filtering is used to filter out any bogus or undesired routes coming into the private network as well as make sure that only the routes actually contained on the internal network are allowed to be advertised. It is also important to make sure that the only advertised networks are those for which access from outside the private network is desired.
On any private network connected through an ISP to the Internet or a larger public network, the following routes should be filtered from entering the network in most situations (this filtering can also be carried out on the ISP routers):
- 0.0.0.0/0 and 0.0.0.0/8— Default and network 0 (unique and now historical properties)
- 127.0.0.0/8— Host loopback
- 192.0.2.0/24— TEST-NET, generally used for examples in vendor documentation
- 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16— RFC 1918 private addresses
- 169.254.0.0/16— Link-local addresses that end nodes assign themselves when DHCP is unavailable
- 224.0.0.0/3— Multicast and reserved class E addresses
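To make this concrete, a prefix list along the following lines could drop these routes at a BGP edge. This is only a sketch; the list name, autonomous system number, and neighbor address are hypothetical:

ip prefix-list BOGONS seq 5 deny 0.0.0.0/0
ip prefix-list BOGONS seq 10 deny 0.0.0.0/8 le 32
ip prefix-list BOGONS seq 15 deny 10.0.0.0/8 le 32
ip prefix-list BOGONS seq 20 deny 127.0.0.0/8 le 32
ip prefix-list BOGONS seq 25 deny 169.254.0.0/16 le 32
ip prefix-list BOGONS seq 30 deny 172.16.0.0/12 le 32
ip prefix-list BOGONS seq 35 deny 192.0.2.0/24 le 32
ip prefix-list BOGONS seq 40 deny 192.168.0.0/16 le 32
ip prefix-list BOGONS seq 45 deny 224.0.0.0/3 le 32
ip prefix-list BOGONS seq 50 permit 0.0.0.0/0 le 32
!
router bgp 65000
 neighbor 198.51.100.1 prefix-list BOGONS in

The final permit entry allows all remaining routes through; without it, the prefix list's implicit deny would block everything.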
Filters can also be set up to ensure that IP address blocks belonging to a private network are not allowed to be advertised back into the network from outside. This is a necessary precaution to protect the traffic intended for some of the hosts on the inside network from being routed somewhere unintended.
Route filtering can also be used to hide one piece of a network from another. This can be important in organizations that need varying amounts of security in different parts of the network. In addition to having firewalls and other authentication mechanisms in place, route filtering can also rule out the ability of machines in a less-secure network area to reach a more-secure area if they don't have a route to that portion of the network. However, route filtering should not be used as the sole network security measure.
Applying filtering correctly is important as well. It is good practice to filter both incoming and outgoing routes. Ingress filtering ensures that routes not intended for a network are not flooded into it by erroneous or malicious activity on another network; likewise, if something goes wrong in one part of the network, egress filtering can keep the problem from spreading to the rest of it.
ISPs can also consider using what is called a net police filter, whereby no routes with prefixes more specific than /20 (or perhaps up to /24) are allowed to come in. This is often done to make sure that an attack cannot be staged on a large ISP's router by increasing the size of its routing tables. Routes more specific than /20 are often not needed by large ISPs.
Therefore, an ISP can filter out these routes to keep its routing table from getting out of control in terms of size. How specific a prefix a router should accept should be determined by a network administrator who understands what is necessary for the router to perform its functions properly.
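As an illustrative sketch, such a net police filter could be expressed with a prefix list that accepts only aggregates between /8 and /20 (plus the default route, if the ISP wants it). The list name, AS number, and neighbor address are hypothetical:

ip prefix-list NET-POLICE seq 5 permit 0.0.0.0/0
ip prefix-list NET-POLICE seq 10 permit 0.0.0.0/0 ge 8 le 20
!
router bgp 65000
 neighbor 198.51.100.1 prefix-list NET-POLICE in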
Convergence
Fast convergence is important for having a secure routing infrastructure. A network that is slow to converge takes longer to recover from network-disrupting attacks and thus aggravates problems. On an Internet-wide basis, slow convergence of BGP for interdomain routing can mean a considerable loss of revenue for a large number of people. However, even for a small network, slow convergence can mean loss of productivity for a significant number of people.
A slow-converging network is also liable to be more susceptible to a denial of service (DoS) attack. If the loss of just one or two nodes forces the network to take a long time to converge, a DoS attack confined to a single node can effectively spread its impact to the whole network.
At various points in this chapter, we will touch on convergence and how it can be improved. In general, a network's convergence speed can depend on many factors, including the complexity of the network architecture, the presence of redundancy in the network, the parameters set up for the route calculation engines on the various routers, and the presence of loops in the network. The best way to improve convergence speed is for the network administrator to thoroughly understand the workings of the network and then design it with fast convergence as an explicit goal.
Static Routes
Static routes are a useful means of ensuring security in some circumstances. Static routes might not scale to all situations, but where they do, they can be used to hard code information in the routing tables such that this information is unaffected by a network attack or occurrences on other parts of the network. Static routes are also a useful way to define default route information.
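For example, a hard-coded default route and a static route to an internal subnet might be configured as follows; the addresses and next hops are purely illustrative:

ip route 0.0.0.0 0.0.0.0 198.51.100.1
ip route 10.20.0.0 255.255.0.0 10.1.1.2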
Router and Route Authentication
The need for router and route authentication and route integrity arises from the risk of an attacker configuring his or her machine or router to share incorrect routing information with a router that is part of the network being attacked. The attacked router can be tricked not only into sending data to the wrong destination but, through clever maneuvering, can be put out of commission entirely. Routing changes can also be induced simply to redirect traffic to a place in the network where it is convenient for the attacker to analyze it, allowing the attacker to identify traffic patterns and obtain information not intended for him or her.
An example of such attacks occurs in RIP environments where bogus RIP route advertisements can be sent out on a segment. These updates are conveniently accepted into their routing tables by the routers running RIP unless an authentication mechanism is in place to verify the routes' source.
Another issue that prompts router authentication, especially in BGP, is the fear of an attack wherein a rogue router acting as a BGP speaker and neighbor advertises a lot of specific routes into a core router's routing table, causing the router to stop functioning properly due to the greatly increased size of its routing table.
There are two main ways in which Cisco routers provide security in terms of exchanging routing information between routers:
- Authenticating routers that are sharing routing information among themselves so that they share information only if they can verify, based on a password, that they are talking to a trusted source
- Authenticating the veracity of the information being exchanged and making sure it has not been tampered with in transit
Authentication between routers can be carried out in one of two ways:
- By using a clear-text key (password) that is sent along with the route being sent to another router. The receiving router compares this key to its own configured key and accepts the route if the two match. However, this is not a very good method of ensuring security, because the password is sent in the clear; it is more a means of avoiding misconfigurations than anything else, and a skilled attacker can easily get around it.
- By using MD5-HMAC, the key is not sent over the wire in plain text. Instead, a keyed hash is calculated using the configured key. The routing update is used as the input text along with the key into the hashing function. This hash is sent along with the route update to the receiving router. The receiving router compares the received hash with a hash it generates on the route update using the preshared key configured on it. If the two hashes are the same, the route is assumed to be from a trusted source. This is a more secure method of authentication than the clear-text password, because the preshared secret is never shared over the wire.
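As a sketch of the MD5-HMAC approach, RIP version 2 neighbor authentication could be configured along the following lines; the key chain name, key string, interface, and network are hypothetical:

key chain TRUSTED-RTRS
 key 1
  key-string s3cr3tk3y
!
interface Ethernet0
 ip rip authentication mode md5
 ip rip authentication key-chain TRUSTED-RTRS
!
router rip
 version 2
 network 10.0.0.0

Every router on the segment must be configured with the same key string, or routing updates will be rejected.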
Wednesday, January 20, 2010
PIX Firewall Security
The PIX, being a security-specific device, is fairly robust from a security perspective. This section talks about some of the important techniques you can use to make the firewall even more secure from a device perspective. The earlier section "Router Security" talks about the reasons for having most of these safeguards, so I will not repeat them here but rather will concentrate on the actual implementations.
Configuration Management
Keeping a copy of the configuration away from the PIX itself is important in case of an attack. The PIX allows configurations to be saved on a TFTP server via the write net command, which writes the PIX configuration to the TFTP server specified by the tftp-server command.
The configuration should be saved regularly and after all changes are made to the PIX setup. It is prudent to save the PIX images to a server as well.
Care needs to be taken with where the TFTP server resides, because the PIX as of version 6.2.1 does not have the concept of a source interface. Therefore, it is possible to misconfigure the PIX and send management-related traffic through a lower-security interface and possibly over an untrusted network.
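Put together, a hypothetical setup might look like the following; the server address and file path are illustrative, and the interface name in the tftp-server command determines where the management traffic exits:

tftp-server inside 10.1.1.50 /configs/pix1.cfg
write net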
Controlling Access to the PIX
The PIX Firewall can be accessed in two primary ways:
- vty port
- TTY console
vty access via Telnet ports is the most common way to access a PIX Firewall for administrative purposes. The PIX can be accessed from the inside network via plain-text Telnet. However, to access it from the outside interface, an IPsec client must be set up to initiate a VPN connection to the PIX.
Telnet access needs to be restricted to certain addresses to ensure security. Example 3-28 shows how restricted Telnet access can be set up on a PIX Firewall.
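Such a restriction might be sketched as follows, limiting Telnet to a hypothetical management subnet on the inside interface and shortening the idle timeout:

telnet 10.1.1.0 255.255.255.0 inside
telnet timeout 5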
Switch Security
For the purpose of our discussion here, I will concentrate on the Catalyst 5500 switches. Similar mechanisms can be used to set up security on other types of switches. Switches perform most of their functions at Layer 2 of the OSI model. They often do not participate in Layer 3 and above operations actively. Consequently, access to switches through various Layer 3 and above functions such as Telnet and rsh is very limited. This provides for switch security as well. This section looks at some of the mechanisms you can put into place to further strengthen switch security.
Summary
Ensuring that the devices that are responsible for regulating traffic in a network are themselves secure is critical to ensuring the security of the overall network infrastructure. This chapter looked at some of the basic physical and logical measures you can take to ensure the security of network devices. Special consideration was given to three main components of a secure network: routers, switches, and PIX Firewalls. Specific features available to protect routers, switches, and firewalls were discussed. The use and abuse of various features available on these devices were also described. Having discussed the features that protect these devices from attacks, this chapter built the foundation for discussing the various security features available on these devices to protect the network of which they are a component.
Monday, January 4, 2010
Password Management
The best place for passwords is on an authentication server. But some passwords still might need to be configured on the router itself. It is important to ensure that these passwords are properly encrypted to be secure from prying eyes (people looking over the network administrator's shoulder as he or she works on a configuration). It is important to configure an enable secret on the router (rather than a plain password known simply as the enable password) to get administrative access to the box. The enable secret uses MD5 to encrypt the password, and the hash is extremely difficult to reverse. Example 3-12 shows its usage.
Cisco IOS version 12.2(8)T introduced the enhanced password security feature, which allows MD5 encryption to be configured for username passwords. Before this feature was introduced, two types of passwords were associated with usernames: type 0, a clear-text password visible to any user who has access to privileged mode on the router, and type 7, a password with a weak type of encryption that can be recovered from the encrypted text using publicly available tools. Example 3-12 shows how this new feature can be implemented.
It is also important to ensure that the rest of the passwords on the box, such as CHAP passwords, are also encrypted so that a casual view of the configuration does not reveal them. You can do this using the service password-encryption command, as shown in Example 3-12. The catch with this command is that it uses type 7 encryption rather than the MD5 hash used by enable secret commands. This type of encryption is weaker and easier to crack than MD5 encryption. Password encryption set up using this command is applied to all passwords, including username passwords, authentication key passwords, the privileged command password, console and virtual terminal line access passwords, and Border Gateway Protocol (BGP) neighbor passwords. However, note that with the introduction of the new feature discussed in the preceding section, usernames and their corresponding passwords can now be hidden using MD5 hashing.
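Taken together, these safeguards (an enable secret, an MD5-hashed username password, and type 7 encryption for the remaining passwords) can be sketched as follows; the username and passwords are placeholders:

enable secret 5tr0ng-S3cret
username admin secret An0ther-S3cret
service password-encryption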
Using Loopback Interfaces
Loopback interfaces can play an important part in securing a device against attacks. Generally, any router depends on a series of services for which it must access other routers and servers. It is important to ensure that the servers from which the router obtains this information accept connections only from a small block of trusted IP addresses; treating the entire private addressing scheme as trusted can be dangerous. Loopbacks play a vital role here. A block of IP addresses can be set aside for loopback interfaces, and all routers can then be forced to use these loopback addresses as source addresses when accessing the servers. The servers, in turn, can be locked down to allow access only from this block of addresses.
Some examples of servers to which access can be restricted in this manner are SNMP, TFTP, TACACS, RADIUS, Telnet, and syslog servers. Example 3-14 lists the commands required to force the router to use the IP address on the loopback0 interface as the source address when sending packets to the respective servers.
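The following hypothetical configuration illustrates the idea for a few of these services; the loopback address is a placeholder, and only the services actually in use need to be configured:

interface Loopback0
 ip address 10.255.255.1 255.255.255.255
!
ip tftp source-interface Loopback0
ip telnet source-interface Loopback0
ip tacacs source-interface Loopback0
ip radius source-interface Loopback0
logging source-interface Loopback0
snmp-server trap-source Loopback0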
Controlling SNMP as a Management Protocol
Device and network management protocols are important to maintain any network. However, these services can be used as back doors to gain access to routers and/or get information about the devices. The attacker can then use this information to stage an attack.
SNMP is the most commonly used network management protocol. However, it is important to restrict SNMP access to the routers on which it is enabled. On routers on which it is not being used, you should turn it off using the command shown in Example 3-15.
Example 3-15. Disabling SNMP on a Router
no snmp-server
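Where SNMP must remain enabled, access can be limited to a management subnet with a standard access list, as in this sketch; the subnet and community string are placeholders, and the community string should never be left at a well-known default such as public:

access-list 20 permit 10.1.1.0 0.0.0.255
snmp-server community n0tpublic ro 20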
SNMP v3
SNMP v3 as defined in RFCs 2271 through 2275 provides guidelines for secure implementation of the SNMP protocol. RFC 2271 defines the following as the four major threats against SNMP that SNMP v3 attempts to provide some level of protection against:
- Modification of information— The modification threat is the danger that some unauthorized entity might alter in-transit SNMP messages generated on behalf of an authorized user in such a way as to effect unauthorized management operations, including falsifying an object's value.
- Masquerade— The masquerade threat is the danger that management operations not authorized for a certain user might be attempted by assuming the identity of a user who has the appropriate authorization.
- Disclosure— The disclosure threat is the danger of eavesdropping on exchanges between managed agents and a management station. Protecting against this threat might be required as a matter of local policy.
- Message stream modification— The SNMP protocol is typically based on a connectionless transport service that may operate over any subnetwork service. The reordering, delay, or replay of messages can and does occur through the natural operation of many such subnetwork services. The message stream modification threat is the danger that messages might be maliciously reordered, delayed, or replayed to a greater extent than can occur through the natural operation of a subnetwork service to effect unauthorized management operations.
Protection Against Attacks
SNMP v3 aims to protect against these types of attacks by providing the following security elements:
- Message integrity— Ensuring that a packet has not been tampered with in transit.
- Authentication— Determining that the message is from a valid source.
- Encryption— Scrambling a packet's contents to prevent it from being seen by an unauthorized source.
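A hypothetical SNMP v3 configuration that uses both authentication and encryption might look like the following; the group name, username, and passwords are placeholders, and the exact privacy keywords vary by IOS release:

snmp-server group ADMINS v3 priv
snmp-server user opsuser ADMINS v3 auth md5 authPassw0rd priv des56 privPassw0rd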
Login Banners
A login banner is a useful place to put information that can help make the system more secure. Here are some do's and don'ts of what to put in a login banner:
- A login banner should advertise the fact that unauthorized access to the system is prohibited. You can discuss the specific wording with legal counsel.
- A login banner can also advertise the fact that access to the device will be tracked and monitored. This is a legal requirement in certain places. Again, legal counsel can help.
- It is advisable not to include the word "welcome" in a banner.
- It is inappropriate to include any information about the operating system, hardware, or logical configuration of the device. Reveal as little as possible about the system's ownership and identity.
- Other notices to ward off criminal activity may also be included.
When a user connects to the router, the MOTD banner appears before the login prompt. After the user logs in to the router, the EXEC banner or incoming banner is displayed, depending on the type of connection. For a reverse Telnet login, the incoming banner is displayed. For all other connections, the router displays the EXEC banner.
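Following these guidelines, hypothetical MOTD and EXEC banners could be configured as shown next; the delimiter character and wording are illustrative, and the exact wording should be reviewed with legal counsel:

banner motd ^
NOTICE: This is a private system. Unauthorized access is prohibited.
All access to and use of this system is monitored and logged.
^
banner exec ^
Authorized administrators only. Disconnect immediately if you are not an authorized user.
^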