Wednesday, December 23, 2009

Router Security

This section discusses how security can be improved on routers so that any attempts to disable the router, gain unauthorized access, or otherwise impair the functioning of the box can be stopped. It is important to note that these measures in most cases only secure the device itself and do not secure the whole network to which the device is connected. However, a device's security is critical to the network's security. A compromised device can cause the network to be compromised on a larger scale.

The following sections discuss some of the steps you can take to secure a router against attacks aimed at compromising the router itself.


Configuration Management

It is critical to keep copies of the router's configurations in a location other than the router's NVRAM. This is important in the event of an attack that leads to the configuration's being corrupted or changed in some manner. A backed-up configuration allows the network to be brought back up quickly, functioning the way it was intended to. This can be achieved by copying the router configurations to an FTP server at regular intervals or whenever the configuration is changed. Cron jobs can also be set up to pull configurations from the routers at regular intervals. Many freeware tool sets are available for this functionality, as well as a number of robust commercial packages, such as CiscoWorks2000.

You can use the commands described next to copy a router's configuration to an FTP server. Although TFTP can be used as well, FTP is a more secure means of transporting this information.

The copy command, shown in Example 3-4, not only defines the IP address of the FTP server to move the file to but also specifies the username (user) and the password (password) to use to log in to the FTP server.
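
As a rough sketch of the kind of copy operation being described (the server address 172.16.1.100, the credentials, and the filename are placeholders, not values taken from the original example):

  Router# copy running-config ftp://user:password@172.16.1.100/router1-confg

The username and password can be embedded in the URL as shown, or configured globally as described next.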


The ip ftp username and ip ftp password commands can also be used to set up the username and password on the router for FTP.
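
A minimal sketch of that approach, again with placeholder values:

  Router(config)# ip ftp username backupuser
  Router(config)# ip ftp password backuppass
  Router(config)# end
  Router# copy running-config ftp://172.16.1.100/router1-confg

With the credentials set globally, the copy command needs only the server address and the destination filename.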

It is also useful to have a backup of the software images running on a router in case of a network attack that removes the software from the router or corrupts it.


Controlling Access to the Router

It is important to control the accessibility to a router. There are two main mechanisms to gain access to a router for administrative purposes:
  • vty ports
  • TTY console and auxiliary ports
vty ports are generally used to gain remote interactive access to the router. The most commonly used methods of vty access are Telnet, SSH, and rlogin.

TTY lines in the form of console and auxiliary ports are generally used to gain access when a physical connection to the router is available, in the form of a terminal connected to the router or a modem hooked to it. The console port is used to log in to the router by physically connecting a terminal to the router's console port. The aux port can be used to attach an RS-232 device, such as the port of a CSU/DSU, a protocol analyzer, or a modem, to the router.

vty access to a router using Telnet is by far the most common method of router administration. Console access and access through the aux port using a modem are out-of-band methods often used as a last resort on most networks. However, using a mechanism known as reverse Telnet, it might be possible for remote users to gain access to a router through the auxiliary or console ports. This needs to be protected against as well, as described next.


Controlling vty Access

At a minimum, you can follow these steps to control vty access into a router:

Step 1. Restrict access only via the protocols that will be used by the network administrators.

The commands shown in Example 3-5 set up vty lines 0 through 4 for Telnet and SSH access only. In Cisco IOS Release 11.1, the none keyword was added and became the default. Before Cisco IOS Release 11.1, the default keyword was all, allowing all types of protocols to connect via the vty lines by default.
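
A sketch along the lines of what the text describes (line range and protocol list as discussed above):

  Router(config)# line vty 0 4
  Router(config-line)# transport input telnet ssh

The transport input command limits the listed lines to the named protocols; transport input none would block all incoming interactive protocols on those lines.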

It is important to realize that although Telnet is by far the most popular way of accessing a router for administrative purposes, it is also the most insecure. SSH provides an encrypted mechanism for accessing a router. It is advisable to set up SSH on a router and then disable Telnet access to it.


Step 2. Configure access lists to allow vty access only from a restricted set of addresses.

In Example 3-6, for the vty lines 0 to 3, access list 5 is used. This access list allows access from a restricted set of IP addresses only. However, for the last vty line, line 4, the more-restrictive access list 6 is used. This helps prevent DoS attacks aimed at stopping Telnet access to the router for administrative purposes. Only one session to a vty port can occur at any given time. So an attacker can leave all the ports dangling at the login prompt, denying legitimate use. The restrictive access list on line 4 is an effort to keep at least the last vty line available in such an eventuality. Note that the command service tcp-keepalives-in can also be used to track such left-to-idle TCP sessions to the router. This command basically turns on a TCP keepalive mechanism for the router to use for all its TCP connections.
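
A hedged sketch of this arrangement (the access list numbers follow the text; the management subnet 10.1.1.0/24 and the single management host 10.1.1.5 are illustrative placeholders):

  Router(config)# access-list 5 permit 10.1.1.0 0.0.0.255
  Router(config)# access-list 6 permit 10.1.1.5
  Router(config)# line vty 0 3
  Router(config-line)# access-class 5 in
  Router(config-line)# exit
  Router(config)# line vty 4
  Router(config-line)# access-class 6 in
  Router(config-line)# exit
  Router(config)# service tcp-keepalives-in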

It is also a good idea to set up logging for the access lists used to allow Telnet access.
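
For numbered access lists such as the ones sketched above, this is a matter of adding the log keyword to the relevant entries, for example:

  Router(config)# access-list 5 permit 10.1.1.0 0.0.0.255 log

Matches against the entry are then reported through the router's logging facility.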


Step 3. Set up short timeouts.

This is an important precaution needed to protect against Telnet DoS attacks, hijacking attacks, and Telnet sessions left unattended, consuming unnecessary resources. The command shown in Example 3-7 sets the timeout value to 5 minutes and 30 seconds. The default is 10 minutes.
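
The timeout described here translates into line configuration along these lines:

  Router(config)# line vty 0 4
  Router(config-line)# exec-timeout 5 30

The two arguments to exec-timeout are minutes and seconds, so this example gives the 5-minute, 30-second value mentioned in the text.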


Step 4. Set up authentication for vty access.

It is critical to have user authentication enabled for vty access. This can be done using local or RADIUS/TACACS authentication. Example 3-8 shows local authentication, but RADIUS/TACACS is a more scalable method of setting this up. See Chapters 16 and 19 for more examples of how to use the AAA commands to achieve scalable security.
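
A minimal sketch of local authentication on the vty lines (the username and password are placeholders; a RADIUS/TACACS+ deployment would use the aaa commands instead):

  Router(config)# username admin secret MyS3cret
  Router(config)# line vty 0 4
  Router(config-line)# login local

On images that support it, username ... secret stores a hashed password; older images use username ... password instead.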


Controlling TTY Access

A lot of the effort spent controlling access through the vty lines can go to waste if access through the TTY lines is not controlled as well. The TTY lines are harder to exploit, because they generally require some sort of physical access. However, dialing in to a modem hooked to a router's aux port, or using reverse Telnet to get into the console port of a router hooked up to a terminal server, are both methods still used to gain easy illegitimate access to routers without physical proximity.

Some of the methods that can be used on the vty ports to control access, such as using access lists, cannot be used on TTY lines. However, some other techniques, such as user authentication and disabling protocol access using the transport command, are still valid and can be set up in a fashion similar to how vty configurations are done.

If appropriate, the use of TTY lines remotely via reverse Telnet should be disabled. You can do this using the command shown in Example 3-9.
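
One common way of doing this, sketched here for the auxiliary line (a console line attached to a terminal server would be handled the same way):

  Router(config)# line aux 0
  Router(config-line)# transport input none
  Router(config-line)# no exec

The transport input none statement is what blocks reverse Telnet into the line; no exec is optional additional hardening that prevents an EXEC session from being started on the line.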

Starting in Cisco IOS Release 12.2(2)T, you can access a router's console port using the SSH protocol. This is an important feature, because it gives users much more security. Example 3-10 shows how this is set up. Note that a separate rotary group needs to be defined for each line that will be accessed via SSH. See the next section for the rest of the commands needed to allow a router to act as an SSH server and accept connections.
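
As a hedged sketch of how such a setup might look (the line number, rotary group, and TCP port are illustrative, and the exact commands of the SSH terminal-line access feature can vary by release):

  Router(config)# ip ssh port 2001 rotary 1
  Router(config)# line 1
  Router(config-line)# rotary 1
  Router(config-line)# transport input ssh
  Router(config-line)# login local

An SSH client would then connect to the router's address on TCP port 2001 to reach the line placed in rotary group 1.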

Wednesday, December 2, 2009

Device Security

Device Redundancy

Redundancy is an important component of any secure system. Although securing a system can eliminate much of its vulnerability to attack, in reality no number of measures can totally protect a device against all known and yet-to-be-discovered attacks and vulnerabilities. Therefore, it becomes important to have a suitable redundancy mechanism in place. A redundancy mechanism allows a backup device to take over the functionality of a device that has stopped performing its responsibilities due to an attack. Although the backup device might be susceptible to a similar type of attack, it can buy the network administrator valuable time to set up mechanisms to protect against the attack.

There are two primary means of achieving redundancy for a network device:
  • Use routing to ensure that an alternative path is chosen in case one or more of the devices on a particular path becomes unavailable.
  • Use a redundancy protocol such as Hot Standby Router Protocol (HSRP), Virtual Router Redundancy Protocol (VRRP), or failover between any two devices. This ensures that if one of the two devices goes down, the other device takes over the functionality of the first device. These protocols are especially useful for providing redundancy on the LAN, where the end hosts do not participate in routing protocols. Running a dynamic routing protocol on every end host might be infeasible for a number of reasons, including administrative overhead, processing overhead, security issues, or lack of a routing protocol implementation for some platforms.

The following sections look at the various types of redundancy methods and protocols deployed in networks to ensure security through redundancy.


Routing-Enabled Redundancy

Routing protocols can be set up to allow redundancy between devices. The main philosophy behind this approach is to arrange routing in such a way that the routing protocols converge on one set of routes when everything is functioning normally and on a different set of routes when some of the devices are out of order.

There are many different ways to achieve routing-based redundancy. We will discuss only two:
  • Statically— You use static routes with varying weights (administrative distances); a sketch follows this list.
  • Dynamically— You build a network in a manner that allows a suboptimal path to become an optimal path when a device outage occurs.
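
As a minimal sketch of the static approach (the addresses are placeholders), a floating static route with a higher administrative distance sits behind the preferred route and is installed only if the primary route is withdrawn:

  Router(config)# ip route 0.0.0.0 0.0.0.0 10.1.1.1
  Router(config)# ip route 0.0.0.0 0.0.0.0 10.1.2.1 200

The first route, through 10.1.1.1, has the default administrative distance of 1 and is preferred. The second, through the backup device at 10.1.2.1, takes over only when the primary path goes away, for example when its outgoing interface goes down.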

Building Network Redundancy Dynamically

Perhaps the most effective way of guarding against device failures is to design the network in such a way that the routing protocols can find an alternative path to connect any two given parts of the network in case a device fails anywhere on the network.

An example of such a network is a fully meshed network. Variations of the fully meshed network to provide redundancy in the most critical portions of the network can be a suitable alternative to having a completely meshed topology. The idea is for the routing protocol to converge on a different set of available routes when the original set of routes is no longer available due to a device or path failure. Figure 3-3 shows how this works using the RIP routing protocol.


HSRP

HSRP, defined in RFC 2281, is a protocol that is implemented on Cisco routers to allow a failed device to be taken over by another device on a LAN. HSRP allows hosts to view a single router as their default gateway with multiple routers available to take over the functionality of that router in case it fails, without any indication of such a failure to the end hosts. The hosts use a single IP address and MAC address to communicate with their default gateway. However, multiple routers, if they have been set up with HSRP, have the ability to respond to frames sent to this MAC address or to packets destined for this IP address, in case of the failure of what is known as the active router. At any given time, a router known as the active router is the one that assumes ownership of this IP address and MAC address. All other routers participating in HSRP are said to be in standby mode until the active router fails. At that point, a standby router assumes the ownership of the IP address and the MAC address the hosts consider their default gateway. This allows the hosts to continue sending traffic to their default gateway without any disruption. The IP address and MAC address are often said to belong to a virtual router because in effect they do not belong to any physical router but are still used by hosts to communicate with the default gateway.

Process of Determining the Active Router

The active router is the one that assumes the identity of the virtual router, meaning that it takes responsibility for forwarding the packets that hosts send to it. All routers that can become the active router are said to form an HSRP group, or standby group. When a router is configured for HSRP, it is configured with the virtual router's IP address. The virtual router's MAC address is 0x00 0x00 0x0C 0x07 0xAC XX, where XX represents the HSRP group number. This MAC address does not need to be configured by the router's administrator; it is built into the router's software.

The router then goes into a state known as speak state, in which it sends out HSRP messages called hellos containing its priority. All the routers in the HSRP group that are configured with a virtual address send out HSRP messages containing this information. The packets are sent to multicast address 224.0.0.2, which all the routers set up to be part of the HSRP group listen to. When a router does not see a hello message with a priority higher than the one it is set up with, it assumes the role of the active router and enters the corresponding active state. The router with the second-highest priority becomes the group's standby router. At any given time, an HSRP group cannot have more than one active and one standby router.

As soon as a router assumes the responsibility of being the active router, it starts sending out hello messages indicating that it is the active router. The standby router starts sending out corresponding messages. These hellos are sent out periodically. To minimize network traffic, only the active and standby routers send periodic HSRP messages when the protocol has completed the election process. If at some point the standby router receives a message from the active router with a priority lower than its own, it can take over the role of the active router by sending out a hello packet with its own priority and parameters indicating that it wants to become the active router. This is known as a coup hello message.


Detecting a Failure

A failure is detected through the exchange of periodic hello messages between the active and standby routers. (Because these messages are sent to a multicast address, other routers in the HSRP group also listen to them.) Each hello message from the active router contains a holdtime, or a holdtime can be configured on each router in the HSRP group. Upon receiving a hello, the standby router starts its active timer, which expires after an amount of time equal to the holdtime has passed. If the standby router does not receive another hello from the active router before this timer expires, the active router is considered to have failed. The standby router then goes into speak state again and starts to announce its priority to all the HSRP routers belonging to the multicast group. If another router in the group also has a virtual IP address configured, it participates in the election process by sending out hello messages with its own priority. The router with the highest priority takes over as the active router, and the router with the next-highest priority becomes the standby router.

Similarly, if the standby router fails to send a periodic hello message to the active router before the standby timer on the active router expires, the active router goes into speak state, and the HSRP group goes through an election process to determine the active and standby routers.


HSRP Packet Format

HSRP uses User Datagram Protocol (UDP) port 1985 to send its hello messages. These messages are sent to the multicast address 224.0.0.2 with a TTL of 1, both while the active and standby routers are being elected and in the periodic hellos that follow. The source address is always the router's actual IP address rather than the virtual IP address.

The packet format as given in RFC 2281 is shown in Figure 3-4.

HSRP Security

HSRP does not provide strong mechanisms for protecting against attacks that use the protocol as a tool. For example, an attacker who has gained access to the internal network can force the routers to choose a nonexistent router as the active router, creating a black hole and causing a resultant DoS attack. The authentication field in the HSRP message is more useful for protecting against misconfigurations than against attacks: it contains a password that is sent in clear text in the HSRP messages sent across the network. You will see how VRRP provides a better way of ensuring security in the implementation of a functionality very similar to HSRP.

HSRP Implementations

A typical example of the use of HSRP is illustrated in the following scenario (as documented on Cisco.com). Figure 3-5 shows the network topology for this scenario.


If Router A fails, Router B takes over the functioning of Router A and allows Pat to continue communicating with the Paris network. It is interesting to note that even if the routing converges so that all traffic is routed over a link that is up, the end hosts still might not be able to use the new routes, because they do not participate in any routing. Most hosts use a default gateway configured to point to a router to figure out where to send packets for machines not on the local LAN. Therefore, if Router A goes down, even if the routing protocols figure out another way to get to Router C from the Tokyo network, Pat's machine still sends all packets destined for Marceau's machine to Router A.

Some end hosts use ARP to figure out where to send their packets. If Pat wants to send a packet to Marceau's machine, Pat's machine ARPs for the IP address of Marceau's machine. Router A replies with its own MAC address to Pat's machine, telling it that it needs to send the packets to Router A. This is known as proxy ARP. In case the link between Router A and Router C goes down, routing might be able to figure out that the way to get the packets from Pat to Marceau is from Router B to Router C. However, Pat's machine does not know about this and continues sending packets to Router A's MAC address.


How HSRP Provides Redundancy

HSRP gets around the default gateway situation by defining a virtual IP address that is used as the default gateway for all machines instead of the actual address of the primary router, Router A. So when Router B takes over the responsibilities of Router A, the default gateway in all the end hosts does not need to be changed. Rather, Router B simply takes over the responsibility of taking care of packets sent to the virtual IP address.

Similarly, with HSRP configured, the end hosts using proxy ARP send packets to a virtual MAC address rather than the MAC address of the interface of Router A connected toward them. So when Router B takes over from Router A, it assumes the responsibility of taking care of the packets sent to this virtual MAC address.
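
As a hedged sketch of how the two routers in such a topology might be configured (the interface names, addresses, group number, priorities, and authentication string are all placeholders rather than values from the case study):

  RouterA(config)# interface ethernet 0
  RouterA(config-if)# ip address 10.1.1.2 255.255.255.0
  RouterA(config-if)# standby 1 ip 10.1.1.1
  RouterA(config-if)# standby 1 priority 110
  RouterA(config-if)# standby 1 preempt
  RouterA(config-if)# standby 1 authentication cisco123

  RouterB(config)# interface ethernet 0
  RouterB(config-if)# ip address 10.1.1.3 255.255.255.0
  RouterB(config-if)# standby 1 ip 10.1.1.1
  RouterB(config-if)# standby 1 priority 100
  RouterB(config-if)# standby 1 preempt
  RouterB(config-if)# standby 1 authentication cisco123

The hosts point their default gateway at the virtual address 10.1.1.1. Router A, with the higher priority, becomes the active router; Router B monitors the hellos and takes over the virtual IP and MAC addresses if Router A fails. The standby timers command can be added to tune the hello interval and holdtime discussed earlier.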

Friday, November 13, 2009

Device Security

Device security has two main aspects:
  • Physical security
  • Logical security

Physical Security

Physical security involves figuring out the potential physical threats to devices and then devising ways to prevent them from affecting network operations. Although it is difficult to provide a comprehensive list of measures to take to ensure this kind of security, the following sections address some important issues to consider when locating a network.


Redundant Locations

Although this might be overkill for some networks, for networks with rigorous security measures, it is often necessary to have a backup or redundant network in a physical location that is completely separate from the primary network. This can also take the shape of splitting up the load on the primary system and routing some of the services to a secondary system that is geographically far away from the primary system. In the case of an outage of the primary system, the secondary system can take over the functioning of the primary system, and vice versa.

Ideally, the physical locations should be separated sufficiently from each other to ensure that natural calamities such as earthquakes and floods affect only one of them at a time rather than hitting both of them at once. However, because distance can also add a certain element of uncertainty in the connection between the two sites, such geographically distant systems need to be extensively tested before deployment and periodically tested afterward to ensure efficient switchover during a failure event.


Network Topological Design

A network's topological design can mean a lot to its survival in case of a physical attack on it. It is desirable to have a star topology with a redundant core to minimize the effect of an attack carried out on a link between two components of the network. If all the network's components are connected in series to each other, disrupting service between any two means disrupting it between two potentially large segments of the network. Perhaps the most resilient design is that of a fully meshed network in which every network node is directly connected to every other node. However, this type of network can be expensive to build. When set up in this way, a network node can still have connectivity to the rest of the network even if one or more of its direct links goes down. The redundancy built into the network topology ensures a great deal of stability and consequent security. Figure 3-1 shows three main types of network topological designs seen from the perspective of network resilience.


Secure Location of the Network

There are two main aspects to consider when choosing a secure location to put the main components of a network:
  • Finding a location that is sufficiently segregated from the rest of the office infrastructure to make physical intrusions obvious
  • Finding a location that is contained within a larger facility so that the security measures of the larger facility can also be used
These two guidelines seem to be at a tangent to each other. However, a good secure location often is a compromise between complete segregation (expensive) and complete integration (security risks).

To secure a location, you can follow these guidelines, among others:
  • Restrict access to all networking equipment. Use locks and digital access authorization mechanisms to authenticate people before entering. Log access.
  • Use monitoring cameras at entrances as well as in wiring closets of data centers.
  • Conduct regular physical security audits to ensure that security breaches are not being risked. Trivial habits such as propping open a door instead of letting it lock can be a substantial security risk. It is important to realize that although a closed door might not be the only means to stop access to devices, it is an important line of defense.

Choosing Secure Media

Perhaps the days are gone when attackers needed physical access to attack a network. Presently, attackers find it much easier to compromise a trusted system and then use that system to eavesdrop on a network. However, physical eavesdropping on a cable can still be used to listen in on privileged communication or as a means to get further access. Among the current cabling mechanisms in place, perhaps the most difficult to eavesdrop on is the optical fiber. Coaxial cables and twisted pairs are easier to wiretap and also radiate energy that can be used to eavesdrop. Any type of cable can be made more secure by enclosing it in a secure medium and wiring it such that it is not possible to damage or access the cabling easily.


Power Supply

Although data is the lifeblood of a network, it can flow only if there is power to run the machines through which it passes. It is important to do the following:
  • Properly design the network locations' power supply so that all equipment gets adequate power without overburdening any power systems.
  • Have a backup power supply source not only to manage an outage for the whole facility but also to have redundant power supplies for individual devices.

Environmental Factors

It is important to secure a network facility against environmental factors. Attackers can exploit these factors to cause significant disruption to a network. Here are some of the environmental factors you should keep in mind while scrutinizing a network facility for security vulnerabilities:
  • Fire
  • Earthquakes, storms, and other such natural calamities
Although some of these factors, such as fire, can be guarded against to some extent, the only real solution for protecting the network functionality and data is to have a redundant solution in place, ready to take over the network's form and function if one of these calamities strikes.

Thursday, November 5, 2009

Creating Zones Using the PIX Firewall

The PIX Firewall allows up to ten interfaces with varying security levels to be configured (PIX 535 running 6.X can support up to ten interfaces. PIX 525 running 5.3 and above can support up to eight interfaces). One interface needs to be connected to the inside or private network, and one needs to be connected to the public network. The rest of the interfaces can be connected to other networks, each with its own level of security. Thus, the PIX allows up to ten (eight in the case of PIX 525) distinct security zones to be supported on one firewall.

On the PIX Firewall, each interface is configured to have a security level. Essentially, a machine sitting on a low-security interface cannot access a device sitting on a high-security interface unless configuration is specifically done to allow this to occur. However, a device sitting on a high-security interface can access a low-security interface device as long as certain other requirements are met, such as the presence of Network Address Translation for the higher-security network devices. This leads to the obvious conclusion that on the PIX Firewall the DMZ interfaces should be kept at a security level lower than the inside/private zone interface's security level. This allows the machines on the inside network to access the servers on the DMZ interface. However, the machines on the DMZ interface by default cannot access the hosts on the inside interface.

It should be noted that it is indeed possible to configure the PIX to allow the machines on the DMZ interface to access the inside interface machines, but this requires specific configuration to be done on the PIX, including opening a "hole" in the PIX to allow such traffic through.
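
As an illustrative sketch only (PIX 6.x syntax; the interface names, addresses, and port are placeholders and are not taken from the case study), such a hole might be opened with a static translation and an access list bound to the DMZ interface:

  pixfirewall(config)# static (inside,dmz) 10.1.1.10 10.1.1.10 netmask 255.255.255.255
  pixfirewall(config)# access-list dmz_in permit tcp host 172.16.1.5 host 10.1.1.10 eq 1521
  pixfirewall(config)# access-group dmz_in in interface dmz

Here a single DMZ host is allowed to reach one inside server on one port; everything else from the DMZ toward the inside remains blocked by the interface security levels.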

PIX Firewall uses a numbering scheme to denote the security level of each interface and its associated zone. The numbering scheme goes from 0 to 100. By default, the inside interface has the number 100 associated with it, which means it has the highest level of security. The outside interface has the number 0 associated with it, which is the lowest level of security. The rest of the interfaces have numbers ranging from 1 to 99. Ideally, all interfaces should have unique security levels. Devices sitting on interfaces that have the same security levels cannot communicate across the PIX even if configured to do so.

The commands described next are used to specify the security levels of the interfaces on the PIX. In this example, Ethernet 0 on the PIX is the outside or public interface, and Ethernet 1 is the inside interface. Ethernet 2 is the DMZ interface. Figure 2-5 shows how a PIX is set up with DMZ and other interfaces in the case study.
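
A sketch of how these assignments are typically expressed in PIX 6.x software (the dmz interface name and its security level of 50 are illustrative choices, not values from the case study):

  pixfirewall(config)# nameif ethernet0 outside security0
  pixfirewall(config)# nameif ethernet1 inside security100
  pixfirewall(config)# nameif ethernet2 dmz security50

With these settings, traffic can flow from inside (security 100) to dmz (security 50) and to outside (security 0) by default, subject to address translation, while connections in the opposite direction require explicit configuration.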

Saturday, October 17, 2009

Building Security into the Network

An Introduction to Security Zones

Although the security features available in the various networking devices play an important part in thwarting network attacks, in reality one of the best defenses against network attacks is the network's secure topological design. A network topology designed with security in mind goes a long way in forestalling network attacks and allowing the security features of the various devices to be most effective in their use.

One of the most critical ideas used in modern secure network design is using zones to segregate various areas of the network from each other. Devices placed in the various zones have varying security needs, and the zones provide protection based on these needs. Also, the roles that some devices play (for example, Web servers) leave them especially vulnerable to network attacks and make them more difficult to secure. Therefore, segregating these devices in zones of lesser security dislocated from zones containing more-sensitive and less-attackable devices plays a critical role in the overall network security scheme.

Zoning also allows networks to scale better and consequently leads to more stable networks. Stability is one of the cornerstones of security. A network that is more stable than others is likely also more secure during a stressful attack on its bandwidth resources.

The basic strategy behind setting up zones is as follows:
  • The devices with the greatest security needs (the private network) are within the network's most-secure zone. This is generally the zone where little to no access from the public or other networks is allowed. Access is generally controlled using a firewall or other security functions, such as secure remote access (SRA). Strict control of authentication and authorization is often desired in such a zone.
  • Servers that need to be accessed only internally are put in a separate private and secure zone. Controlled access to these devices is provided using a firewall. Access to these servers is often closely monitored and logged.
  • Servers that need to be accessed from the public network are put in a segregated zone with no access to the network's more-secure zones. This is done to avoid endangering the rest of the network in case one of these servers gets compromised. In addition, if possible, each of these servers is also segregated from the others so that if one of them gets compromised, the others cannot be attacked. Separate zones for each server or each type of server are in order in the securest type of setup. This means that a Web server is segregated from the FTP server by being put in a zone completely separate from the FTP server. This way, if the web server becomes compromised, the chances of the FTP server being accessed and possibly compromised through the privileges gained by the attacker on the Web server are limited. (This type of segregation can also be achieved using the private VLANs available in the 6509 switches from Cisco). These zones are known as DMZs. Access into and out of them is controlled using firewalls.
  • Zoning is done in such a way that layered firewalls can be placed in the path to the most sensitive or vulnerable part of the network. This can avoid configuration mistakes in one firewall that allow the private network to be compromised. Many large networks with security needs use different types of firewalls at the network layer to keep the network from becoming compromised due to a bug in the firewall software. Using a PIX Firewall and a proxy server firewall in tandem is one such example. This is also sometimes called the Defense in Depth principle.

Designing a Demilitarized Zone

The DMZ is one of the most important zoning terms used in network security. A DMZ is a zone in the network that is segregated from the rest of the network because of the nature of the devices contained in it. These devices, often servers that need to be accessed from the public network, do not allow a very stringent security policy to be implemented in the area where they are kept. Therefore, there is a need to separate this zone from the rest of the network.

The DMZ is typically a subnet that resides between the private network and the public network. Connections from the public network terminate on DMZ devices. These servers can often also be accessed relatively securely by private network devices.

There are quite a few ways to create a DMZ. How a DMZ is created depends on the network's security requirements, as well as the budgetary constraints placed on it. Here are some of the most common ways of creating DMZs:

  • Using a three-legged firewall to create the DMZ
  • Placing the DMZ outside the firewall between the public network and the firewall
  • Placing the DMZ outside the firewall but not in the path between the public network and the firewall (also called a "dirty DMZ")
  • Creating a DMZ between stacked firewalls


Using a Three-Legged Firewall to Create the DMZ

This is perhaps the most common method of creating a DMZ. This method uses a firewall with three interfaces to create separate zones, each sitting on its own firewall interface. The firewall provides separation between the zones. This mechanism provides a great deal of control over the DMZ's security. This is important because a compromised DMZ can be the first stage of a well-orchestrated attack. Figure 2-1 shows how a DMZ using a three-legged firewall can be set up. Note that a firewall can have many more than three interfaces, allowing a number of DMZs to be created. Each DMZ can have its own special security requirements.


Placing the DMZ Outside the Firewall Between the Public Network and the Firewall

In this setup, the DMZ is exposed to the public side of the firewall. Traffic that needs to pass through the firewall passes through the DMZ first. This setup is not recommended, because you can exercise very little control over the security of the devices sitting on the DMZ. These devices are practically part of the public domain, with no real protection of their own. Figure 2-2 shows how a DMZ can be created outside a firewall between the public network and the firewall.


Obviously, this is a fairly insecure way of setting up a DMZ, because the firewall's security capabilities are not used at all in this setup. However, the router on the edge of the network toward the public network can be set up to provide some basic form of security to the machines on the DMZ. This security can be in the form of using access control lists to allow access to the machines sitting on the DMZ for certain port numbers only and denying all other access.
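
A minimal sketch of that kind of filtering on the edge router (the addresses, ports, and interface are placeholders): only web and mail traffic to two DMZ hosts is allowed, traffic to the firewall's outside address is passed, and everything else is dropped and logged:

  Router(config)# access-list 110 permit tcp any host 192.0.2.10 eq www
  Router(config)# access-list 110 permit tcp any host 192.0.2.11 eq smtp
  Router(config)# access-list 110 permit ip any host 192.0.2.1
  Router(config)# access-list 110 deny ip any any log
  Router(config)# interface serial 0
  Router(config-if)# ip access-group 110 in

The third line keeps traffic bound for the firewall (192.0.2.1 here) flowing so that the protected network behind it is not cut off.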


Placing the DMZ Outside the Firewall but Not in the Path Between the Public Network and the Firewall

A "dirty DMZ" is very similar to the DMZ described in the preceding section. The only difference is that instead of being located between the firewall and the public network, the DMZ is located off a separate interface of the edge router connecting the firewall to the public network (see Figure 2-3). This type of setup provides very little security to the devices sitting on the DMZ network. However, this setup gives the firewall a little more isolation from the unprotected and vulnerable DMZ network than the setup described in the preceding section. The edge router in this setup can be used to deny all access from the DMZ subnet to the subnet on which the firewall is located. Also, separate VLANs can allow for further Layer 2 isolation between the subnet on which the firewall is located and the DMZ subnet. This is useful in situations where a host on the DMZ subnet becomes compromised and the attacker starts using that host to launch further attacks against the firewall and the network. The added layer of isolation can help slow the advance of the attack toward the firewall in these situations.


Dirty DMZs are often set up because the firewall is unable to handle the traffic load put on it as it tries to cater to all the traffic that is intended for the internal network as well as the traffic that is intended for the servers on a properly set up DMZ (one created using, for example, the three-legged firewall technique). Because the traffic to the servers on the DMZ (which are often public servers) can be considerable, network administrators are forced to locate the servers outside the firewall on a DMZ so that the firewall does not have to process this traffic.

Network administrators often go to significant lengths to make sure that the hosts that are located on the dirty DMZ are particularly strong in the face of most common network attacks. A host that is exposed to a public network and is strengthened to face network attacks is called a bastion host. These hosts often have all unnecessary services turned off to prevent an attacker from using these services to gain further access to these hosts. Similarly, any unnecessary ports and communication mechanisms are also removed or disabled to enhance the security of these hosts. An attempt is made to install all necessary patches and hot fixes for the OS that the bastion host is running. Most tools and configuration utilities that can be used to manipulate the host are removed from the host. In addition, the host has extensive logging turned on in order to capture any attempts to compromise it. This can often be an invaluable tool in further improving a host's security. Even after putting all these safeguards in place, an attempt is made to make sure that even if the host becomes compromised, the firewall and the internal network cannot be accessed through the access privileges gained on the bastion host by the attacker. This also often means that the bastion host and the internal private network do not share the same authentication system.


Creating a DMZ Between Stacked Firewalls

In this mechanism of forming DMZs, two firewalls are stacked so that all traffic that needs to go to the private network behind the firewall farthest from the public network must go through both the firewalls. In this scenario, the network between the two firewalls is used as the DMZ. A fair deal of security is available to the DMZ in this case because of the firewall in front of it. However, one drawback is that all traffic going from the private network to the public network must pass through the DMZ network. In this case, a compromised DMZ device can allow an attacker easy access to hijacking or attacking this traffic in various ways. This risk can be mitigated by using private VLANs for the devices between the two firewalls. One of the main drawbacks of this setup is the cost of having two firewalls in place. Figure 2-4 shows how a DMZ stacked between firewalls is set up.

Wednesday, October 7, 2009

Network Security

Network security is the process through which a network is secured against internal and external threats of various forms. In order to develop a thorough understanding of what network security is, you must understand the threats against which network security aims to protect a network. It is equally important to develop a high-level understanding of the main mechanisms that can be put into place to thwart these attacks.

Generally, the ultimate goal of implementing security on a network is achieved by following a series of steps, each aimed at clarifying the relationship between the attacks and the measures that protect against them.

Step 1. Identify what you are trying to protect.
Step 2. Determine what you are trying to protect it from.
Step 3. Determine how likely the threats are.
Step 4. Implement measures that protect your assets in a cost-effective manner.
Step 5. Review the process continuously, and make improvements each time you find a weakness.


Network Security Architecture Implementation

As soon as the security policy has been defined, the next step is to implement it in the form of a network security design. We will discuss various security principles and design issues throughout this book. The first step to take after a security policy has been created is to translate it into procedures. These procedures are typically laid out as a set of tasks that must be completed to successfully implement the policy. Executing these procedures results in a network design that can be implemented using various devices and their associated features.

Generally, the following are the elements of a network security design:
  • Device security features such as administrative passwords and SSH on the various network components
  • Firewalls
  • Remote-access VPN concentrators
  • Intrusion detection
  • Security AAA servers and related AAA services for the rest of the network
  • Access-control and access-limiting mechanisms on various network devices, such as ACLs and CAR
All or some of these components come together in a design setup to implement the requirements of the network security policy.

Monday, September 28, 2009

The OSI Reference Model

One of the greatest functions of the OSI specifications is to assist in data transfer between disparate hosts—meaning, for example, that they enable us to transfer data between a Unix host and a PC or a Mac.

The OSI isn’t a physical model, though. Rather, it’s a set of guidelines that application developers can use to create and implement applications that run on a network. It also provides a framework for creating and implementing networking standards, devices, and internetworking schemes.

The OSI has seven different layers, divided into two groups. The top three layers define how the applications within the end stations will communicate with each other and with users. The bottom four layers define how data is transmitted from end to end. Figure 1.7 shows the three upper layers and their functions, and Figure 1.8 shows the four lower layers and their functions.

When you study Figure 1.7, understand that the user interfaces with the computer at the Application layer and also that the upper layers are responsible for applications communicating between hosts. Remember that none of the upper layers knows anything about networking or network addresses. That’s the responsibility of the four bottom layers.

In Figure 1.8, you can see that it’s the four bottom layers that define how data is transferred through a physical wire or through switches and routers. These bottom layers also determine how to rebuild a data stream from a transmitting host to a destination host’s application.


Figure 1.9 shows a summary of the functions defined at each layer of the OSI model. With this in hand, you’re now ready to explore each layer’s function in detail.



Voice over IP and Video over IP on a network

The main purpose of the Host-to-Host layer is to shield the upper-layer applications from the complexities of the network. This layer says to the upper layer, “Just give me your data stream, with any instructions, and I’ll begin the process of getting your information ready to send.”

The following sections describe the two protocols at this layer:
  • Transmission Control Protocol (TCP)
  • User Datagram Protocol (UDP)

By understanding how TCP and UDP work, you can interpret the impact of applications on networks when using Voice and Video Over IP.


Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) takes large blocks of information from an application and breaks them into segments. It numbers and sequences each segment so that the destination's TCP stack can put the segments back into the order the application intended. After these segments are sent, TCP (on the transmitting host) waits for an acknowledgment from the receiving end's TCP virtual circuit session, retransmitting any segments that aren't acknowledged.

Before a transmitting host starts to send segments down the model, the sender’s TCP stack contacts the destination’s TCP stack to establish a connection. What is created is known as a virtual circuit. This type of communication is called connection-oriented. During this initial handshake, the two TCP layers also agree on the amount of information that’s going to be sent before the recipient’s TCP sends back an acknowledgment. With everything agreed upon in advance, the path is paved for reliable communication to take place.

TCP is a full-duplex, connection-oriented, reliable, and accurate protocol, but establishing all these terms and conditions, in addition to error checking, is no small task. TCP is very complicated and, not surprisingly, costly in terms of network overhead. And since today’s networks are much more reliable than those of yore, this added reliability is often unnecessary.


User Datagram Protocol (UDP)

If you were to compare the User Datagram Protocol (UDP) with TCP, the former is basically the scaled-down economy model that's sometimes referred to as a thin protocol. Like a thin person on a park bench, a thin protocol doesn't take up a lot of room—or in this case, much bandwidth on a network.

UDP doesn’t offer all the bells and whistles of TCP either, but it does do a fabulous job of transporting information that doesn’t require reliable delivery—and it does so using far fewer network resources. (UDP is covered thoroughly in Request for Comments 768.)

There are some situations in which it would definitely be wise for developers to opt for UDP rather than TCP. Remember the watchdog SNMP up there at the Process/Application layer? SNMP monitors the network, sending intermittent messages and a fairly steady flow of status updates and alerts, especially when running on a large network. The cost in overhead to establish, maintain, and close a TCP connection for each one of those little messages would reduce what would be an otherwise healthy, efficient network to a dammed-up bog in no time!

Another circumstance calling for UDP over TCP is when reliability is already handled at the Process/Application layer. Network File System (NFS) handles its own reliability issues, making the use of TCP both impractical and redundant. But ultimately, it’s up to the application developer to decide whether to use UDP or TCP, not the user who wants to transfer data faster.

UDP does not sequence the segments and does not care in which order they arrive at the destination. It simply sends the segments off and forgets about them. It doesn't follow through, check up on them, or even allow for an acknowledgment of safe arrival—complete abandonment. Because of this, it's referred to as an unreliable protocol. This does not mean that UDP is ineffective, only that it doesn't handle issues of reliability.

Further, UDP doesn’t create a virtual circuit, nor does it contact the destination before delivering information to it. Because of this, it’s also considered a connectionless protocol. Since UDP assumes that the application will use its own reliability method, it doesn’t use any. This gives an application developer a choice when running the Internet Protocol stack: TCP for reliability or UDP for faster transfers.

At first glance it might seem that Voice over IP (VoIP), for example, should avoid UDP, because if the segments arrive out of order (very common in IP networks), they're just passed up to the next OSI (DoD) layer in whatever order they're received. In practice, though, VoIP runs over UDP, with the Real-time Transport Protocol (RTP) supplying the sequence numbers and timestamps needed to put the audio back together; TCP's retransmissions, and the delay they introduce, hurt a real-time voice stream far more than an occasional lost or reordered packet does.


Key Concepts of Host-to-Host Protocols

Since you’ve seen both a connection-oriented (TCP) and connectionless (UDP) protocol in action, it would be good to summarize the two here. Table 1.1 highlights some of the key concepts that you should keep in mind regarding these two protocols. You should memorize this table.

Friday, September 18, 2009

OSI and TCP/IP Models and Their Associated Protocols to Explain How Data Flows in a Network

The Department of Defense (DoD) model is basically a condensed version of the OSI model—it’s composed of four, instead of seven, layers:
  • Process/Application layer
  • Host-to-Host layer
  • Internet layer
  • Network Access layer
Figure 1.5 shows a comparison of the DoD model and the OSI reference model. As you can see, the two are similar in concept, but each has a different number of layers with different names.


A vast array of protocols combine at the DoD model’s Process/Application layer to integrate the various activities and duties spanning the focus of the OSI’s corresponding top three layers (Application, Presentation, and Session). We’ll be looking closely at those protocols in the next part of this chapter. The Process/Application layer defines protocols for node-to-node application communication and also controls user-interface specifications.

The Host-to-Host layer parallels the functions of the OSI’s Transport layer, defining protocols for setting up the level of transmission service for applications. It tackles issues such as creating reliable end-to-end communication and ensuring the error-free delivery of data. It handles packet sequencing and maintains data integrity.

The Internet layer corresponds to the OSI’s Network layer, designating the protocols relating to the logical transmission of packets over the entire network. It takes care of the addressing of hosts by giving them an IP (Internet Protocol) address, and it handles the routing of packets among multiple networks.

At the bottom of the DoD model, the Network Access layer monitors the data exchange between the host and the network. The equivalent of the Data Link and Physical layers of the OSI model, the Network Access layer oversees hardware addressing and defines protocols for the physical transmission of data.

The DoD and OSI models are alike in design and concept and have similar functions in similar layers. Figure 1.6 shows the TCP/IP protocol suite and how its protocols relate to the DoD model layers.


In the following sections, we will look at the different protocols in more detail, starting with the Process/Application layer protocols.


Common Networked Applications

In this section, I’ll describe the different applications and services typically used in IP networks. The following protocols and applications are covered in this section:
  • Telnet
  • FTP
  • TFTP
  • NFS
  • SMTP
  • LPD
  • X Window
  • SNMP
  • DNS
  • DHCP/BootP

Telnet

Telnet is the chameleon of protocols—its specialty is terminal emulation. It allows a user on a remote client machine, called the Telnet client, to access the resources of another machine, the Telnet server. Telnet achieves this by pulling a fast one on the Telnet server and making the client machine appear as though it were a terminal directly attached to the local network. This projection is actually a software image—a virtual terminal that can interact with the chosen remote host.

These emulated terminals are of the text-mode type and can execute refined procedures such as displaying menus that give users the opportunity to choose options and access the applications on the duped server. Users begin a Telnet session by running the Telnet client software and then logging in to the Telnet server.

The problem with Telnet is that all data, even login data, is sent in clear text. This can be a security risk. And if you are having problems telnetting into a device, you should verify that both the transmitting and receiving devices have Telnet services enabled. Lastly, by default, Cisco devices allow five simultaneous Telnet sessions.


File Transfer Protocol (FTP)

File Transfer Protocol (FTP) is the protocol that actually lets us transfer files, and it can accomplish this between any two machines using it. But FTP isn’t just a protocol; it’s also a program. Operating as a protocol, FTP is used by applications. As a program, it’s employed by users to perform file tasks by hand. FTP also allows for access to both directories and files and can accomplish certain types of directory operations, such as relocating into different ones. FTP teams up with Telnet to transparently log you in to the FTP server and then provides for the transfer of files.

Accessing a host through FTP is only the first step, though. Users must then be subjected to an authentication login that’s probably secured with passwords and usernames implemented by system administrators to restrict access. You can get around this somewhat by adopting the username anonymous—though what you’ll gain access to will be limited.

Even when employed by users manually as a program, FTP’s functions are limited to listing and manipulating directories, typing file contents, and copying files between hosts. It can’t execute remote files as programs.


Trivial File Transfer Protocol (TFTP)

Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP, but it’s the protocol of choice if you know exactly what you want and where to find it, plus it’s so easy to use and it’s fast too! It doesn’t give you the abundance of functions that FTP does, though. TFTP has no directory-browsing abilities; it can do nothing but send and receive files. This compact little protocol also skimps in the data department, sending much smaller blocks of data than FTP, and there’s no authentication as with FTP, so it’s insecure. Few sites support it because of the inherent security risks.


Network File System (NFS)

Network File System (NFS) is a jewel of a protocol specializing in file sharing. It allows two different types of file systems to interoperate. It works like this: Suppose that the NFS server software is running on an NT server and the NFS client software is running on a Unix host. NFS allows for a portion of the RAM on the NT server to transparently store Unix files, which can, in turn, be used by Unix users. Even though the NT file system and the Unix file system are unlike each other (they have different case sensitivity, filename lengths, security, and so on), both Unix users and NT users can access that same file with their normal file systems, in their normal way.


Simple Mail Transfer Protocol (SMTP)

Simple Mail Transfer Protocol (SMTP), answering our ubiquitous call to email, uses a spooled, or queued, method of mail delivery. Once a message has been sent to a destination, the message is spooled to a device—usually a disk. The server software at the destination posts a vigil, regularly checking the queue for messages. When it detects them, it proceeds to deliver them to their destination. SMTP is used to send mail; POP3 is used to receive mail.


Line Printer Daemon (LPD)

The Line Printer Daemon (LPD) protocol is designed for printer sharing. The LPD, along with the Line Printer (LPR) program, allows print jobs to be spooled and sent to the network’s printers using TCP/IP.


X Window

Designed for client/server operations, X Window defines a protocol for writing client/server applications based on a graphical user interface (GUI). The idea is to allow a program, called a client, to run on one computer and have it display things through a window server on another computer.


Simple Network Management Protocol (SNMP)

Simple Network Management Protocol (SNMP) collects and manipulates valuable network information. It gathers data by polling the devices on the network from a management station at fixed or random intervals, requiring them to disclose certain information. When all is well, SNMP receives something called a baseline—a report delimiting the operational traits of a healthy network. This protocol can also stand as a watchdog over the network, quickly notifying managers of any sudden turn of events. These network watchdogs are called agents, and when aberrations occur, agents send an alert called a trap to the management station.


Domain Name Service (DNS)

Domain Name Service (DNS) resolves hostnames—specifically, Internet names, such as www.lammle.com. You don’t have to use DNS; you can just type in the IP address of any device you want to communicate with. An IP address identifies hosts on a network and the Internet as well. However, DNS was designed to make our lives easier. Think about this: What would happen if you wanted to move your web page to a different service provider? The IP address would change, and no one would know what the new one was. DNS allows you to use a domain name to specify an IP address. You can change the IP address as often as you want, and no one will know the difference.

DNS is used to resolve a fully qualified domain name (FQDN)—for example, www.lammle.com or todd.lammle.com. An FQDN is a hierarchy that can logically locate a system based on its domain identifier.

If you want to resolve the name todd, you either must type in the FQDN of todd.lammle.com or have a device such as a PC or router add the suffix for you. For example, on a Cisco router, you can use the command ip domain-name lammle.com to append each request with the lammle.com domain. If you don’t do that, you’ll have to type in the FQDN to get DNS to resolve the name.
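As a quick sketch of the related router commands (the name-server address 172.16.10.1 is just a placeholder), that might look like this:
Router(config)#ip domain-lookup
Router(config)#ip name-server 172.16.10.1
Router(config)#ip domain-name lammle.com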


Dynamic Host Configuration Protocol (DHCP)/Bootstrap Protocol (BootP)

Dynamic Host Configuration Protocol (DHCP) assigns IP addresses to hosts. It allows easier administration and works well in small to even very large network environments. All types of hardware can be used as a DHCP server, including a Cisco router.

DHCP differs from BootP in that BootP assigns an IP address to a host but the host’s hardware address must be entered manually in a BootP table. You can think of DHCP as a dynamic BootP. But remember that BootP is also used to send an operating system that a host can boot from. DHCP can’t do that.
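As a rough sketch of a Cisco router acting as the DHCP server (the pool name SALES and the 172.16.10.0/24 addressing are placeholders only):
Router(config)#ip dhcp excluded-address 172.16.10.1 172.16.10.10
Router(config)#ip dhcp pool SALES
Router(dhcp-config)#network 172.16.10.0 255.255.255.0
Router(dhcp-config)#default-router 172.16.10.1
Router(dhcp-config)#dns-server 172.16.10.1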

Monday, August 24, 2009

Purpose and Functions of Various Network Devices

It is likely that at some point you’ll have to break up one large network into a bunch of smaller ones because user response will have dwindled to a slow crawl as the network grows and grows. And with all that growth, your LAN’s traffic congestion has reached epic proportions. The answer to this is breaking up a really big network into a number of smaller ones—something called network segmentation.

You do this by using devices like routers, switches, and bridges. Figure 1.1 displays a network that’s been segmented with a switch so each network segment connected to the switch is now a separate collision domain. But make note of the fact that this network is still one broadcast domain.


Keep in mind that the hub used in Figure 1.1 just extended the one collision domain from the switch port. Here’s a list of some of the things that commonly cause LAN traffic congestion:
  • Too many hosts in a broadcast domain
  • Broadcast storms
  • Multicasting
  • Low bandwidth
  • Adding hubs for connectivity to the network
  • A bunch of ARP or IPX traffic (IPX is a Novell protocol that is like IP but really, really chatty. Typically, it is not used in today’s networks.)

Now, routers are used to connect networks together and route packets of data from one network to another. Cisco became the de facto standard for routers because of its high-quality router products, great selection, and fantastic service. Routers, by default, break up a broadcast domain, which is the set of all devices on a network segment that hear all the broadcasts sent on that segment. Figure 1.2 shows a router in our little network that creates an internetwork and breaks up broadcast domains.

The network in Figure 1.2 shows that each host is connected to its own collision domain, and the router has created two broadcast domains. And don’t forget that the router provides connections to WAN services as well! The router uses something called a serial interface for WAN connections, specifically, a V.35 physical interface on a Cisco router.


Breaking up a broadcast domain is important because when a host or server sends a network broadcast, every device on the network must read and process that broadcast—unless you’ve got a router. When the router’s interface receives this broadcast, it can respond by basically saying, “Thanks, but no thanks,” and discard the broadcast without forwarding it on to other networks. Even though routers are known for breaking up broadcast domains by default, it’s important to remember that they break up collision domains as well. There are two advantages of using routers in your network:
  • They don’t forward broadcasts by default.
  • They can filter the network based on layer 3 (Network layer) information (e.g., IP address).
Four router functions in your network can be listed as follows:
  • Packet switching
  • Packet filtering
  • Internetwork communication
  • Path selection

Remember that routers are really switches; they’re actually what we call layer 3 switches. Unlike layer 2 switches, which forward or filter frames, routers (layer 3 switches) use logical addressing and provide what is called packet switching. Routers can also provide packet filtering by using access lists, and when routers connect two or more networks together and use logical addressing (IP or IPv6), this is called an internetwork. Last, routers use a routing table (map of the internetwork) to make path selections and to forward packets to remote networks.
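To give a feel for the packet-filtering piece, here’s a minimal access-list sketch (the list number 10 and the 172.16.20.0/24 network are placeholders only):
Router(config)#access-list 10 deny 172.16.20.0 0.0.0.255
Router(config)#access-list 10 permit any
Router(config)#int e0
Router(config-if)#ip access-group 10 in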

Conversely, switches aren’t used to create internetworks (they do not break up broadcast domains by default); they’re employed to add functionality to a network LAN. The main purpose of a switch is to make a LAN work better—to optimize its performance—providing more bandwidth for the LAN’s users. And switches don’t forward packets to other networks as routers do. Instead, they only “switch” frames from one port to another within the switched network.

By default, switches break up collision domains. This is an Ethernet term used to describe a network scenario wherein one particular device sends a packet on a network segment, forcing every other device on that same segment to pay attention to it. At the same time, a different device tries to transmit, leading to a collision, after which both devices must retransmit, one at a time. Not very efficient! This situation is typically found in a hub environment where each host segment connects to a hub that represents only one collision domain and only one broadcast domain. By contrast, each and every port on a switch represents its own collision domain.

The term bridging was introduced before routers and switches were implemented, so it’s pretty common to hear people referring to bridges as switches. That’s because bridges and switches basically do the same thing—break up collision domains on a LAN (in reality, you cannot buy a physical bridge these days, only LAN switches, but they use bridging technologies, so Cisco still calls them multiport bridges).

So what this means is that a switch is basically just a multiple-port bridge with more brainpower, right? Well, pretty much, but there are differences. Switches do provide this function, but they do so with greatly enhanced management ability and features. Plus, most of the time, bridges only had 2 or 4 ports. Yes, you could get your hands on a bridge with up to 16 ports, but that’s nothing compared to the hundreds available on some switches!

Sunday, August 16, 2009

Cisco Router Components

Bootstrap
Brings up the router during initialization

POST
Checks basic functionality; hardware & interfaces

ROM monitor
Manufacturing testing & troubleshooting

Mini-IOS
A small IOS in ROM (the RXBOOT or bootloader) that can bring up an interface & load a Cisco IOS into flash memory

RAM
Holds packet buffers, routing tables, & s/w
Stores running-config

ROM
Starts & maintains the router

Flash Memory
Holds Cisco IOS
Not erased when the router is reloaded

NVRAM
Holds router (& switch) configurations
Not erased when the router is reloaded

Configuration Register
Controls how the router boots up


Boot Sequence


1: Router performs a POST

2: Bootstrap looks for & loads the Cisco IOS

3: IOS software looks for a valid configuration file

4: Startup-config file (from NVRAM) is loaded
If startup-config file is not found, the router will start the setup mode


Configuration Registers

Configuration Meanings


Boot Field Meanings


Checking the Register Value
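The current value is reported on the last line of show version output:
Router#show version
(output omitted)
Configuration register is 0x2102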

Changing the Configuration Register
  • Force the system into the ROM monitor mode
  • Select a boot source & default boot filename
  • Enable or disable the Break function
  • Set the console terminal baud rate
  • Load operating software from ROM
  • Enable booting from a TFTP server

Changing the Configuration Register
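From privileged mode, a minimal sketch looks like this (0x2102 shown here is the usual default):
Router#config t
Router(config)#config-register 0x2102
Router(config)#exit
The new value takes effect at the next reload; until then, show version reports it as the pending value.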


Recovering Passwords

1: Boot the router & interrupt the boot sequence by performing a break
2: Change the configuration register to turn on bit 6 (0x2142)
3: Reload the router
4: Enter the privileged mode
5: Copy the startup-config to running-config
6: Change the password
7: Reset the configuration register to the default value & save the configuration
8: Reload the router


Recovering Passwords:

1: Boot the router & interrupt the boot sequence by performing a break using the Ctrl+Break key combination.

You may need to upgrade your version of HyperTerminal for this to work successfully.

2: Change the configuration register to turn on bit 6 (0x2142)
rommon>confreg 0x2142
You must reset or power cycle for new config to take effect

3: Reload the router
Type reset
The router will reload & ask if you want to enter setup mode
Answer NO

4: Enter the privileged mode
Router>enable
Router#

5: Copy the startup-config to running-config
Router#copy startup-config running-config

6: Change the password
Router#config t
Router(config)#enable secret cisco

7: Reset the configuration register to the default value & save the configuration (so the new enable secret isn’t lost at reload)
Router(config)#config-register 0x2102
Router(config)#exit
Router#copy running-config startup-config

8: Reload the router


Backing up the Cisco IOS
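A rough sketch using a TFTP server (the server address 172.16.10.254 and the image name are placeholders; use the filename reported by show flash):
Router#copy flash tftp
Source filename []? c2800nm-advipservicesk9-mz.124-15.T1.bin
Address or name of remote host []? 172.16.10.254
Destination filename [c2800nm-advipservicesk9-mz.124-15.T1.bin]?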


Restoring or Upgrading the Cisco IOS
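Same idea in the other direction (server address & image name are placeholders again); the router pulls the image from the TFTP server into flash:
Router#copy tftp flash
Address or name of remote host []? 172.16.10.254
Source filename []? c2800nm-advipservicesk9-mz.124-15.T1.bin
Destination filename [c2800nm-advipservicesk9-mz.124-15.T1.bin]?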

Backing up the Configuration
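For example (the server address is a placeholder; the default filename is built from the hostname):
Router#copy running-config tftp
Address or name of remote host []? 172.16.10.254
Destination filename [router-confg]?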


Restoring the Configuration
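And to pull it back (placeholders again); note that copying into running-config merges with what’s already there rather than replacing it:
Router#copy tftp running-config
Address or name of remote host []? 172.16.10.254
Source filename []? router-confg
Destination filename [running-config]?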

Sunday, August 9, 2009

Cisco Router IOS

  • Carries network protocols and functions
  • Connects high-speed traffic between devices
  • Adds security to control access
  • Provides scalability for growth
  • Supplies reliability

Connecting To A Cisco Router


Cisco 2811


Cisco 1841


Bringing up a Router

Boot-up process:

1: POST
2: Looks for the Cisco IOS from Flash memory
3: IOS loads & looks for a valid configuration: the startup-config stored in nonvolatile RAM (NVRAM)
4: If a valid config is not found in NVRAM: setup mode


Logging into the Router

User mode:
Router>
Used mostly to view statistics

Privileged mode:
Router#
Used to view & change router configuration


Overview of Router Modes


Global changes:

config terminal or config t
Changes are made to the running-config (DRAM)

config memory or config mem
Configures the router from the startup-config stored in NVRAM (to change the startup-config itself, use copy running-config startup-config)

Note: Any configuration changes end up in RAM. Typing config mem, or config net (from a TFTP host), appends (merges) those commands into the current running-config rather than replacing it.
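A quick sketch of how the prompt changes as you move between modes:
Router#config t
Router(config)#int e0
Router(config-if)#exit
Router(config)#line con 0
Router(config-line)#exit
Router(config)#exit
Router#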


Editing & Help Features


Enhanced Editing Commands



Router Command History
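Two related commands worth remembering (the buffer size 25 is just an example value):
Router#terminal history size 25
Router#show history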


Gathering Basic Routing Information


Administrative Functions

The administrative functions that you can configure on a router and switch are
  • Hostnames
  • Banners
  • Passwords
  • Interface descriptions

Hostnames & Descriptions

Hostnames
Router(config)#hostname todd
todd(config)#

Descriptions
Atlanta(config)#int e0
Atlanta(config-if)#description Sales Lan


Banners

Purpose

Types
  • exec
  • incoming
  • login
  • motd
Delimiting character
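A minimal motd sketch, using # as the delimiting character (the message text is only an example):
Router(config)#banner motd #
Unauthorized access is prohibited!
#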


Setting the Passwords

5 passwords:

The first two are used to set your enable passwords (enable password & enable secret)
Used to secure privileged mode; Router>enable

Other three are used to configure a password in user mode via:
  • console port
  • auxiliary port
  • Telnet

Passwords
Enable passwords
Router(config)#enable password cisco
Router(config)#enable secret cisco

Auxiliary Password
Console Password
Telnet Password
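A rough sketch for the console & Telnet (vty) lines; the aux port is configured the same way with line aux 0, and the password cisco is just a placeholder:
Router(config)#line console 0
Router(config-line)#password cisco
Router(config-line)#login
Router(config-line)#exit
Router(config)#line vty 0 4
Router(config-line)#password cisco
Router(config-line)#login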

Encrypting Your Password
Router(config)#service password-encryption


Interface Descriptions

Setting descriptions on an interface is helpful to the administrator and, like the hostname, is only locally significant. The description command is a helpful one because you can, for instance, use it to keep track of circuit numbers.

Here’s an example:
Atlanta(config)#int e0
Atlanta(config-if)#description Sales Lan
Atlanta(config-if)#int s0
Atlanta(config-if)#desc Wan to Miami circuit:6fdda4321

You can view the description of an interface either with the show running-config command or the show interface command.


Router Interfaces

Bringing up an Interface
no shutdown
shutdown
show interface

Configuring an IP Address on an Interface
Router(config)#int e0
Router(config-if)#ip address 172.16.10.2 255.255.255.0
Router(config-if)#no shut

Serial Interface Commands
clock rate (entered in bits per second, and only on the DCE end of the cable) & bandwidth (entered in kilobits)
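For example (the values 64000 bps & 64 kilobits are placeholders):
Router(config)#int s0
Router(config-if)#clock rate 64000
Router(config-if)#bandwidth 64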


Viewing & Saving Configurations

Viewing & Saving Configurations
running-config saved in DRAM
startup-config saved in NVRAM
copy run start
sh run
sh start
erase startup-config


Verifying Your Configuration
  • show running-config
  • show startup-config
  • ping
  • show cdp nei detail
  • trace
  • telnet
Verifying with the show interface command
Router#show interface ?

Verifying with the show ip interface command
Router#show ip interface
Router#show ip interface brief
Router#show controllers
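For reference, the show ip interface brief command above produces output along these lines (the addresses & interface names are placeholders):
Interface                  IP-Address      OK? Method Status                Protocol
FastEthernet0/0            172.16.10.2     YES manual up                    up
Serial0/0/0                unassigned      YES unset  administratively down down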