Thursday, December 13, 2012

The Evolved Packet Core (EPC model)

The Evolved Packet Core

By Frédéric Firmin, 3GPP MCC

This article looks at the Evolved Packet Core (EPC), the core network of the LTE system, giving an overview of its architecture and describing some of its key elements.
The EPC is the latest evolution of the 3GPP core network architecture.
In GSM, the architecture relies on circuit-switching (CS). This means that circuits are established between the calling and called parties throughout the telecommunication network (radio, the mobile operator's core network, fixed network). This circuit-switching mode can be seen as an evolution of the "two cans and a string". In GSM, all services are transported over circuits: principally telephony, but also short messages (SMS) and some data services.
In GPRS, packet-switching (PS) is added to the circuit-switching. With this technology, data is transported in packets without the establishment of dedicated circuits. This offers more flexibility and efficiency. In GPRS, the circuits still transport voice and SMS (in most cases). Therefore, the core network is composed of two domains: circuit and packet.
In UMTS (3G), this dual-domain concept is kept on the core network side. Some network elements have evolved but the concept remains very similar.
When designing the evolution of the 3G system, the 3GPP community decided to use IP (Internet Protocol) as the key protocol to transport all services. It was therefore agreed that the EPC would not have a circuit-switched domain anymore and that the EPC should be an evolution of the packet-switched architecture used in GPRS/UMTS.
This decision had consequences on the architecture itself but also on the way that the services were provided. Traditional use of circuits to carry voice and short messages needed to be replaced by IP-based solutions in the long term.

Architecture of the EPC

EPC was first introduced by 3GPP in Release 8 of the standard.
It was decided to have a "flat architecture". The idea is to handle the payload (the data traffic) efficiently from a performance and cost perspective: only a few network nodes are involved in handling the traffic, and protocol conversion is avoided.
It was also decided to separate the user data (also known as the user plane) and the signalling (also known as the control plane) so that they can be scaled independently. Thanks to this functional split, operators can dimension and adapt their networks easily.
Figure 2 shows a very basic architecture of the EPS when the User Equipment (UE) is connected to the EPC over E-UTRAN (LTE access network). The Evolved NodeB (eNodeB) is the base station for LTE radio. In this figure, the EPC is composed of four network elements: the Serving Gateway (Serving GW), the PDN Gateway (PDN GW), the MME and the HSS. The EPC is connected to the external networks, which can include the IP Multimedia Core Network Subsystem (IMS).

HSS
Basically, the HSS (for Home Subscriber Server) is a database that contains user-related and subscriber-related information. It also provides support functions in mobility management, call and session setup, user authentication and access authorization.
It is based on the pre-Release 4 Home Location Register (HLR) and Authentication Centre (AuC).
Serving GW
The gateways (Serving GW and PDN GW) deal with the user plane. They transport the IP data traffic between the User Equipment (UE) and the external networks.
The Serving GW is the point of interconnect between the radio-side and the EPC. As its name indicates, this gateway serves the UE by routing the incoming and outgoing IP packets.
It is the anchor point for the intra-LTE mobility (i.e. in case of handover between eNodeBs) and between LTE and other 3GPP accesses.
It is logically connected to the other gateway, the PDN GW.
PDN GW
The PDN GW is the point of interconnect between the EPC and the external IP networks. These networks are called PDN (Packet Data Network), hence the name. The PDN GW routes packets to and from the PDNs.
The PDN GW also performs various functions such as IP address / IP prefix allocation or policy control and charging.
3GPP specifies these gateways independently but in practice they may be combined in a single "box" by network vendors.
MME
The MME (for Mobility Management Entity) deals with the control plane. It handles the signalling related to mobility and security for E-UTRAN access.
The MME is responsible for the tracking and the paging of UE in idle-mode. It is the termination point of the Non-Access Stratum (NAS).

Support of multiple access technologies

As seen in figure 2, the UE can reach the EPC using E-UTRAN; however, this is not the only access technology supported.
3GPP specified support of multiple access technologies, as well as handover between these accesses. The idea was to bring convergence using a single core network providing various IP-based services over multiple access technologies.
Existing 3GPP radio access networks are supported. 3GPP specifications define how interworking is achieved between E-UTRAN (LTE and LTE-Advanced), GERAN (the radio access network of GSM/GPRS) and UTRAN (the radio access network of the UMTS-based technologies WCDMA and HSPA).
The EPS also allows non-3GPP technologies to interconnect the UE and the EPC. "Non-3GPP" means that these accesses were not specified by 3GPP; they include, for example, WiMAX, cdma2000®, WLAN and fixed networks.
Non-3GPP accesses can be split into two categories: the "trusted" ones and the "untrusted":
  • Trusted non-3GPP accesses can interact directly with the EPC.
  • Untrusted non-3GPP accesses interwork with the EPC via a network entity called the ePDG (for Evolved Packet Data Gateway). The main role of the ePDG is to provide security mechanisms such as IPsec tunnelling of connections with the UE over an untrusted non-3GPP access.
3GPP does not specify which non-3GPP technologies should be considered trusted or untrusted. This decision is made by the operator.
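To summarize the elements described above, here is a toy Python sketch (purely illustrative, not drawn from any 3GPP specification; the mapping reflects the roles described in this article) of which EPC entity a UE's traffic first reaches for each access category:

```python
# Toy sketch (illustrative only, not from the 3GPP specifications):
# the EPC entity a UE's user-plane traffic first reaches per access type.
ACCESS_ENTRY = {
    "E-UTRAN": "Serving GW",            # LTE / LTE-Advanced
    "GERAN": "Serving GW",              # GSM/GPRS radio access
    "UTRAN": "Serving GW",              # UMTS radio access
    "trusted non-3GPP": "PDN GW",       # interacts directly with the EPC
    "untrusted non-3GPP": "ePDG",       # IPsec tunnel via the ePDG first
}

def entry_point(access: str) -> str:
    """Return the EPC entity that first handles traffic from this access."""
    return ACCESS_ENTRY[access]
```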

Saturday, November 17, 2012

Upgrading Cisco Router IOS

Today I upgraded the flash and system RAM in my Cisco 2651XM router.

Before upgrading the router memory, I had this in place:
C2600 platform with 65536 Kbytes of main memory

16384K bytes of processor board System flash (Read/Write)

I bought 64 MB extra main memory and 16 MB extra flash memory. When I opened up the router, the insides looked like this diagram:

I had a single 64 MB DRAM DIMM in the "Primary memory" slot, with the other slot free. I had no memory in the "System-code SIMM (Flash memory)" slot, since the 2651XM must ship with 16 MB on the motherboard. Once I snapped in the extra memory, my router recognized it without any trouble, as will be seen later.

I also decided to upgrade from a 12.2 release to 12.3. My old IOS was 12.2(11)T10, which corresponded to the c2600-ik9s-mz.122-11.T10.bin image. The 'show flash' command showed this after the memory upgrades:
gill#sh flash

System flash directory:
File  Length   Name/status
  1   14962584  c2600-ik9s-mz.122-11.T10.bin
[14962648 bytes used, 18591784 available, 33554432 total]
32768K bytes of processor board System flash (Read/Write)


First I searched for a suitable IOS using the Cisco IOS Feature Navigator and Upgrade Planner tools. I located a version of IOS that offered NetFlow and SSH v2 in the 12.3 train: 12.3(4)T4 (image c2600-a3jk9s-mz.123-4.T4.bin). I downloaded it to a TFTP server on the same network as the router, into the TFTP server's /tftpboot directory.

I did not make a copy of the existing router flash image as I already had it elsewhere for safekeeping.

Next I copied my startup-config to the TFTP server:
gill#copy startup-config tftp://172.27.20.5/gill-startup-config
Address or name of remote host [172.27.20.5]?
Destination filename [gill-startup-config]?
!!
2200 bytes copied in 0.072 secs (30556 bytes/sec)

Now I was ready to copy my new flash image to the router:
gill#copy tftp flash
Address or name of remote host []? 172.27.20.5
Source filename []? c2600-a3jk9s-mz.123-4.T4.bin
Destination filename [c2600-a3jk9s-mz.123-4.T4.bin]?
Accessing tftp://172.27.20.5/c2600-a3jk9s-mz.123-4.T4.bin...
Erase flash: before copying? [confirm]
Erasing the flash filesystem will remove all files! Continue? [confirm]
Erasing device... eeeeeeeeeeeeeeee
...edited...
Erase of flash: complete
Loading c2600-a3jk9s-mz.123-4.T4.bin from 172.27.20.5 (via FastEthernet0/1):!!!!
!!!
...edited...
[OK - 24299960 bytes]
Verifying checksum...  OK (0x5193)
24299960 bytes copied in 182.832 secs (132909 bytes/sec)
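The router verified the flash checksum itself, but it never hurts to verify the downloaded image against the MD5 hash published on the Cisco download page before copying it over. A small Python sketch (the filename and hash comparison shown in the comment are illustrative):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute the MD5 digest of an image file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the MD5 published on the Cisco download page:
# md5_of("c2600-a3jk9s-mz.123-4.T4.bin") == "<published hash>"
```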

Next I checked to see if it was loaded:
gill#sh flash

System flash directory:
File  Length   Name/status
  1   24299960  c2600-a3jk9s-mz.123-4.T4.bin
[24300024 bytes used, 8730120 available, 33030144 total]
32768K bytes of processor board System flash (Read/Write)

This looked fine, so I changed the system boot parameters to use the new image, copied the running-config to startup-config, and reloaded the router:
gill#conf term
Enter configuration commands, one per line.  End with CNTL/Z.
gill(config)#no boot system flash c2600-ik9s-mz.122-11.T10.bin
gill(config)#boot system flash  c2600-a3jk9s-mz.123-4.T4.bin
gill(config)#exit
gill#
01:21:28: %SYS-5-CONFIG_I: Configured from console by console
gill#copy running-config startup-config
Destination filename [startup-config]?
Building configuration...
[OK]
gill#reload
Proceed with reload? [confirm]

01:22:32: %SYS-5-RELOAD: Reload requested by console.

I then watched the router come up:
System Bootstrap, Version 12.2(7r) [cmong 7r], RELEASE SOFTWARE (fc1)
Copyright (c) 2002 by cisco Systems, Inc.
C2600 platform with 131072 Kbytes of main memory

program load complete, entry point: 0x80008000, size: 0x172c840
Self decompressing the image : ###########
...edited...
############ [OK]

Smart Init is enabled
smart init is sizing iomem
  ID            MEMORY_REQ                 TYPE
00036F          0X00103980 C2651XM Dual Fast Ethernet
                0X000F3BB0 public buffer pools
                0X00211000 public particle pools
TOTAL:          0X00408530

If any of the above Memory Requirements are
"UNKNOWN", you may be using an unsupported
configuration or there is a software problem and
system operation may be compromised.
Rounded IOMEM up to: 5Mb.
Using 3 percent iomem. [5Mb/128Mb]
...edited...
Cisco IOS Software, C2600 Software (C2600-A3JK9S-M), Version 12.3(4)T4, 
 RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2004 by Cisco Systems, Inc.
Compiled Thu 11-Mar-04 19:57 by eaarmas
Image text-base: 0x80008098, data-base: 0x8243BC1C
...edited...
Cisco 2651XM (MPC860P) processor (revision 0x100) with 125952K/5120K bytes of me
mory.
Processor board ID JAE071601DV (2514262155)
M860 processor: part number 5, mask 2
2 FastEthernet interfaces
32K bytes of NVRAM.
32768K bytes of processor board System flash (Read/Write)

Press RETURN to get started!

*Mar  1 00:00:04.695: %LINEPROTO-5-UPDOWN: Line protocol on Interface VoIP-Null0,
 changed state to up
*Mar  1 00:00:15.280: %LINK-3-UPDOWN: Interface FastEthernet0/0,
 changed state to up
*Mar  1 00:00:15.280: %LINK-3-UPDOWN: Interface FastEthernet0/1,
 changed state to up
00:00:16: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
 changed state to up
00:00:16: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1,
 changed state to up
00:00:18: %SYS-5-CONFIG_I: Configured from memory by console
00:00:19: %SYS-5-RESTART: System restarted --

Cisco IOS Software, C2600 Software (C2600-A3JK9S-M), Version 12.3(4)T4,  RELEASE
 SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2004 by Cisco Systems, Inc.
Compiled Thu 11-Mar-04 19:57 by eaarmas
00:00:19: %SNMP-5-COLDSTART: SNMP agent on host gill is undergoing a cold start
00:00:19: %NTP-6-RESTART: NTP process starts
00:00:20: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
 changed state to down
00:00:20: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1,
 changed state to down
00:00:21: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1,
 changed state to up
00:00:27: %DHCP-6-ADDRESS_ASSIGN: Interface FastEthernet0/0 assigned DHCP address edited,
 mask 255.255.254.0, hostname gill
00:00:30: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
 changed state to up
00:00:33: %NTP-5-PEERSYNC: NTP synced to peer 204.152.184.72
00:00:33: %NTP-6-PEERREACH: Peer 204.152.184.72 is reachable

Everything looks fine, including the time setting via NTP. I also looked again at the flash and then the filesystem:
gill>enable
Password:
gill#sh flash detailed

System flash directory:
File  Length   Name/status
        addr      fcksum  ccksum
  1   24299960  c2600-a3jk9s-mz.123-4.T4.bin
        0x40      0x5193  0x5193
[24300024 bytes used, 8730120 available, 33030144 total]
32768K bytes of processor board System flash (Read/Write)
gill#show file systems
File Systems:

     Size(b)     Free(b)      Type  Flags  Prefixes
           -           -    opaque     rw   system:
       29688       26383     nvram     rw   nvram:
           -           -    opaque     rw   null:
           -           -    opaque     ro   xmodem:
           -           -    opaque     ro   ymodem:
           -           -   network     rw   tftp:
*   33030144     8730120     flash     rw   flash:
           -           -   network     rw   pram:
           -           -   network     rw   rcp:
           -           -   network     rw   scp:
           -           -   network     ro   http:
           -           -   network     rw   ftp:
           -           -   network     ro   https:
           -           -    opaque     ro   cns:

My upgrade was complete!

Friday, July 20, 2012

How to run topologies in GNS3 without hitting 100% utilization

How to get Good Idle-PC Value in GNS3

One of the most difficult problems new users have to come to grips with when starting out with GNS3 is the concept of an Idle-PC.
Get it right, and you will have a great GNS3 experience. Try to ignore it, and you will be forever miserable.
Here is my tedious and time consuming method of finding a good idle-pc value.
REPEAT FOR EVERY DIFFERENT IOS IMAGE YOU WISH TO RUN ON YOUR SYSTEM:
Step 1:
Windows: Open the windows task manager and sort by %CPU
Linux: Open a console window and enter the command top
Mac OS X: Open a terminal window and enter the command top -o cpu
Keep this window visible for the entire process

Step 2:

In GNS3, start a new topology with 1 router ONLY
Start the router
Open the console. When the router is fully up, configure the following:
line con 0
exec-timeout 0
NOTE: While writing this post, I observed this step alone dropped the CPU usage from 98% to 1% on a Windows 7 install (running in a VM on OS X)

Step 3:

Back at your task manager or console window:
Take note of the amount of CPU being chewed by dynamips

Step 4:

In GNS3, right-click on the router and choose idle-pc
If NO values appear marked with *, try again
When you find a value marked with a *, WRITE IT DOWN
If MULTIPLE values appear with *, WRITE THEM ALL DOWN (in a column)
… then choose one of them

Step 5:

Check the CPU utilisation for dynamips in the task manager or console window.
Estimate the average CPU consumption for dynamips over say 15-20 seconds
WRITE IT DOWN next to the Idle-pc value you wrote down in step 4
If you have an idle-pc value that shows less than 10-15% CPU, you may want to go to step 6;
else, go back to step 4

Step 6:

Now that you have a good idle-pc, you need to know how to use it well.
Firstly, check that GNS3 has recorded your best value against the image you are using
That’s in:
Edit->IOS Images and Hypervisors
Select the image you are using and check the IDLE PCs value
Now GNS3 will automatically use that value in any NEW topologies you create.

Step 7:

If you have any saved topologies (i.e. .net files) that have used this IOS, open the .net file and replace the idle-pc value found there with your new “good” idle-pc value.

Step 8:

Record the results you found (IOS & idle-pc values) in a spreadsheet and keep it!
Now go back to Step 1 and repeat for the next IOS image you use
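For step 8, even a script-generated CSV file works as the "spreadsheet". A possible Python sketch (the image name is the one I use elsewhere on this blog; the idle-pc and CPU figures are made-up placeholders):

```python
import csv

# Example rows: the idle-pc value and CPU figure below are placeholders,
# not measured results -- substitute the values you wrote down.
results = [
    {"ios_image": "c2600-a3jk9s-mz.123-4.T4.bin",
     "idle_pc": "0x80358a04",
     "avg_cpu_percent": "5"},
]

with open("idlepc-results.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["ios_image", "idle_pc", "avg_cpu_percent"])
    writer.writeheader()
    writer.writerows(results)
```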

General Tips for keeping CPU under control

Always use the same image for ALL routers in your topology if possible.
This means using the same router model as well. If this is not possible, use the same image for all routers of the same model.
ALWAYS set the exec-timeout 0 under line con 0
In GNS3 0.7.3 and later, you can set a base config for each IOS image under Edit->IOS Images and Hypervisors. Make sure the base config has exec-timeout 0 under line con 0

Reference: http://www.gns3.net/phpBB/topic2873.html

Friday, April 6, 2012

NAT traversal for the SIP protocol

By Diana Cionoiu

NAT stands for Network Address Translation. It's the technology that allows most people to have more than one computer in their home while still using a single IP address. Most of the time, a router with NAT support takes data packets from the internal network (with internal IP addresses) and sends them to the Internet, changing the internal IP address of each packet to the external one.
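The address rewriting can be illustrated with a toy Python sketch (illustrative only: real NATs also rewrite ports and track per-flow and per-protocol state, and the addresses below are made up):

```python
# Toy model of NAT address rewriting (illustrative only: real NATs also
# rewrite ports and keep per-flow state; addresses here are made up).
EXTERNAL_IP = "203.0.113.7"   # the single public address of the home router

def nat_outbound(table: dict, internal_src: str, dst: str):
    """Rewrite an outgoing packet's source address to the external IP."""
    table[dst] = internal_src            # remember which host talked to dst
    return (EXTERNAL_IP, dst)            # (new source, unchanged destination)

def nat_inbound(table: dict, src: str):
    """Translate a reply back to the internal host that contacted src."""
    return (src, table[src])
```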


What's RTP?

RTP stands for Real-Time Transport Protocol. Its purpose is to carry voice data between the caller and the called party. The problem is that when you call someone using the RTP protocol, you need to know their IP address and port. This makes RTP quite inconvenient when used alone, since the parties have no way to find one another. This is why people invented SIP.

What's SIP?

SIP (Session Initiation Protocol) looks in syntax like HTTP, human readable text. Its purpose is to help the caller find the IP address and port of the called. It also helps the negotiation of the media types and formats. For example, when you have a PC at home and you want to call your friend from Romania using Free World Dialup (which uses the SIP protocol):

SIP sends an INVITE packet with the caller's IP address and port for RTP to the FWD server, and from there FWD forwards the call to the intended destination. The called party accepts the call and sends back its own IP address and port for RTP.

SIP + NAT, an unsolvable problem?

The problem with SIP and NAT is not actually a SIP problem but an RTP problem. SIP announces the RTP address and port, but if the client is behind NAT, it announces the client's local RTP port, which can be different from the port the NAT allocates externally.

A lot of SIP implementations and carriers rely on the assumption that the NAT will always try to allocate the same port, but that assumption is false. In a production environment, you can't tell grandma that she can't talk to her grandson because some router has allocated a different port.

If you are a carrier, the solution is simple because you proxy all the data, anyway. The solution is to use a SIP Session Border Controller. The SIP SBC normally stays in front of the internal SIP network of the carrier, solving the NAT traversal problem and protecting the SIP network.

The solution for NAT traversal in this case is to use some tricks.
The first trick is to keep the hole in the NAT from the SIP client to the server open. This is normally done by having all SIP clients send a small two-byte packet more often than every 30 seconds. Some routers remove apparently unused NAT mappings after 30 seconds; GNU/Linux typically does this after three minutes.
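The keepalive trick can be sketched in a few lines of Python (a toy sketch, not actual client code; the payload, interval and the idea of a fixed send count are placeholders for illustration):

```python
import socket
import time

KEEPALIVE = b"\r\n"   # any tiny payload will do; two bytes as described above

def keep_nat_open(server: str, port: int, count: int,
                  interval: float = 20.0) -> None:
    """Send small UDP datagrams toward the SIP server so the NAT mapping
    stays alive; the interval must be shorter than the NAT timeout (~30 s)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(count):
            sock.sendto(KEEPALIVE, (server, port))
            time.sleep(interval)
    finally:
        sock.close()
```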
The second trick is one we've used in our project YATE: figure out the RTP IP and port from the first packet that arrives on the server's local RTP IP and port, and use that instead of the RTP IP and port declared in the SDP. This trick solves the NAT traversal problem no matter how many NATs the client is traversing. However, the main disadvantage is that, in some cases, the client will not receive early media (since at that point it sends out no voice packets) and will not hear the ringing.
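The second trick can likewise be sketched in Python (again a toy illustration of the idea, not the actual YATE implementation; function names are mine):

```python
import socket

def open_rtp(local_port: int = 0) -> socket.socket:
    """Open the server's local RTP socket (the endpoint we declare in SDP)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    return sock

def latch_peer(sock: socket.socket, timeout: float = 5.0):
    """Learn the client's real, post-NAT RTP endpoint from the first packet
    that arrives, instead of trusting the address declared in the SDP."""
    sock.settimeout(timeout)
    data, peer = sock.recvfrom(2048)
    return data, peer   # send all further RTP back to `peer`
```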
If you are not a carrier and you are trying to make a peer-to-peer call with both sides behind NAT, you must use an external SIP proxy or gateway to pass the SIP signalling between the two points, hoping that the NATs will open the proper ports to one another for the RTP connection. There is no ultimate solution for this. Two proposed solutions are STUN and ICE, but every solution that currently exists can get in your way sometimes. Skype has found a very simple and nice solution for this problem: it uses the Skype clients that are not behind NAT to proxy all the data for clients that are behind NAT.

This solution, technically speaking, is very good. However, there are some moral and political reasons not to use Skype. One is that if you are a Skype client outside the NAT, you don't know whose data is passing through your computer. Another is that it uses your bandwidth; after all, someone has to pay one way or another for the Internet bandwidth needed to proxy the voice stream.
My personal hope is that in the near future most SIP implementations will use the two tricks used by YATE. Skype will probably be around for a long time for home users, but enterprises seem to be moving slowly to VoIP providers. With a lot of work and a little bit of luck, they will become at least as reliable as PSTN providers, since the technology is better.

Wednesday, March 28, 2012

Facetime Analysis

This blog article is based on a blog article written by FryGuy.
This article explains what happens at a low level when a FaceTime call is made between two iPhone 4 devices.
FryGuy tested FaceTime and enabled packet capturing on his ASA to see what actually happens on the network when you make a simple FaceTime call.
ASA packet capturing is explained HERE.
iPhone 4 #1 = IP Private – 192.168.0.128
iPhone 4 #1 = IP NAT – 216.164.100.100
iPhone 4 #2 = IP Private 192.168.2.106
iPhone 4 #2 = IP NAT – 72.81.200.200
Apple Video Servers = 17.155.5.251 / 17.155.5.252 / 17.155.4.14
Note: NATs change to protect the guilty
1.  The call is first initiated via the regular cellular network.  In the contact list you will see an icon called FaceTime.

2.  The phones then communicate with a server at Apple (17.155.5.251 is what he saw).  Communication is sourced from port 16402 via UDP initially, and then ports appear to be allocated dynamically for communication (16385 and 16386 are what appeared on his end).
1 0.000000 192.168.0.128 17.155.5.251 UDP Source port: 16402 Destination port: connected
2 0.431054 17.155.5.251 192.168.0.128 UDP Source port: connected Destination port: 16402
3 0.715713 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: connected
4 0.716064 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: 16385
5 0.717147 192.168.0.128 17.155.5.252 UDP Source port: 51136 Destination port: 16386
6 0.958285 17.155.5.252 192.168.0.128 UDP Source port: 16386 Destination port: 51136
7 0.960329 17.155.5.251 192.168.0.128 UDP Source port: 16385 Destination port: 51136
8 0.960588 17.155.5.251 192.168.0.128 UDP Source port: connected Destination port: 51136
9 1.016402 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
10 1.018172 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
3. The phone then negotiates an HTTPS connection to the servers at Apple for the setup and communication. There also seems to be some communication with other servers (in this case 208.59.216.10, belonging to RCN, FryGuy's cable provider).
11 1.019912 192.168.0.128 17.155.4.14 TCP 50697 > https [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=2 TSV=469580285 TSER=0
12 1.020140 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
13 1.298294 17.155.4.14 192.168.0.128 TCP https > 50697 [SYN, ACK] Seq=0 Ack=1 Win=8190 Len=0 MSS=1360 WS=4
14 1.318312 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=1 Ack=1 Win=131920 Len=0
15 1.321211 192.168.0.128 17.155.4.14 TLSv1 Client Hello
16 1.645657 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: connected
17 1.645978 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: 16385
18 1.646130 192.168.0.128 17.155.5.252 UDP Source port: 51136 Destination port: 16386
19 1.662234 192.168.0.128 208.59.216.10 TCP 50698 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=2 TSV=469580291 TSER=0
20 1.730834 17.155.4.14 192.168.0.128 TCP [TCP segment of a reassembled PDU]
21 1.731963 17.155.4.14 192.168.0.128 TLSv1 Server Hello, Certificate, Server Hello Done
22 1.808298 208.59.216.10 192.168.0.128 TCP http > 50698 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1380 TSV=941715237 TSER=469580291 WS=1
23 1.832208 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=160 Ack=1361 Win=130560 Len=0
24 1.834588 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=160 Ack=2490 Win=130788 Len=0
25 1.834954 192.168.0.128 208.59.216.10 TCP 50698 > http [ACK] Seq=1 Ack=1 Win=131328 Len=0 TSV=469580293 TSER=941715237
26 1.836526 192.168.0.128 208.59.216.10 HTTP GET /WebObjects/VCInit.woa/wa/getBag?ix=1 HTTP/1.1
27 1.881018 17.155.5.252 192.168.0.128 UDP Source port: 16386 Destination port: 51136
28 1.882147 17.155.5.251 192.168.0.128 UDP Source port: connected Destination port: 51136
29 1.883124 17.155.5.251 192.168.0.128 UDP Source port: 16385 Destination port: 51136
30 1.884207 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
31 1.886053 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
32 1.886343 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
33 1.930729 192.168.0.128 17.155.4.14 TLSv1 Client Key Exchange
34 1.930835 192.168.0.128 17.155.4.14 TLSv1 Change Cipher Spec
35 1.931583 192.168.0.128 17.155.4.14 TLSv1 Encrypted Handshake Message
36 2.190008 208.59.216.10 192.168.0.128 TCP http > 50698 [ACK] Seq=1 Ack=229 Win=6432 Len=0 TSV=941715619 TSER=469580293
37 2.190313 208.59.216.10 192.168.0.128 TCP [TCP segment of a reassembled PDU]
38 2.191366 208.59.216.10 192.168.0.128 TCP [TCP segment of a reassembled PDU]
39 2.192312 208.59.216.10 192.168.0.128 HTTP/XML HTTP/1.1 200 OK
40 2.242678 192.168.0.128 208.59.216.10 TCP 50698 > http [ACK] Seq=229 Ack=2737 Win=128592 Len=0 TSV=469580297 TSER=941715619
41 2.243014 192.168.0.128 208.59.216.10 TCP 50698 > http [ACK] Seq=229 Ack=3506 Win=127820 Len=0 TSV=469580297 TSER=941715619
42 2.393275 17.155.4.14 192.168.0.128 TCP https > 50697 [ACK] Seq=2490 Ack=299 Win=35216 Len=0
43 2.393305 17.155.4.14 192.168.0.128 TCP https > 50697 [ACK] Seq=2490 Ack=305 Win=35216 Len=0
44 2.393351 17.155.4.14 192.168.0.128 TCP https > 50697 [ACK] Seq=2490 Ack=342 Win=35184 Len=0
45 2.394633 17.155.4.14 192.168.0.128 TLSv1 Change Cipher Spec, Encrypted Handshake Message
46 2.448112 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=342 Ack=2533 Win=131876 Len=0
47 2.449760 192.168.0.128 17.155.4.14 TLSv1 Application Data
48 2.450325 192.168.0.128 17.155.4.14 TLSv1 Application Data
49 2.511448 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: connected
50 2.512608 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: 16385
51 2.512776 192.168.0.128 17.155.5.252 UDP Source port: 51136 Destination port: 16386
52 2.905644 17.155.5.252 192.168.0.128 UDP Source port: 16386 Destination port: 51136
53 2.905690 17.155.4.14 192.168.0.128 TCP https > 50697 [ACK] Seq=2533 Ack=966 Win=34560 Len=0
54 2.905782 17.155.4.14 192.168.0.128 TCP https > 50697 [ACK] Seq=2533 Ack=1453 Win=34064 Len=0
55 2.906896 17.155.5.251 192.168.0.128 UDP Source port: 16385 Destination port: 51136
56 2.907536 17.155.5.251 192.168.0.128 UDP Source port: connected Destination port: 51136
57 2.923466 17.155.4.14 192.168.0.128 TLSv1 Application Data
58 2.923924 17.155.4.14 192.168.0.128 TLSv1 Application Data
59 3.060254 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
60 3.060422 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
61 3.062146 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=1453 Ack=2894 Win=131556 Len=0
62 3.062451 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=1453 Ack=3240 Win=131212 Len=0
63 3.062741 192.168.0.128 199.7.52.190 TCP 50699 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=2 TSV=469580305 TSER=0
64 3.063122 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
65 3.532458 199.7.52.190 192.168.0.128 TCP http > 50699 [SYN, ACK] Seq=0 Ack=1 Win=8190 Len=0 MSS=1380
66 3.571122 192.168.0.128 199.7.52.190 TCP 50699 > http [ACK] Seq=1 Ack=1 Win=65535 Len=0
67 3.579117 192.168.0.128 199.7.52.190 HTTP GET /EVIntl2006.cer HTTP/1.1
68 3.690690 192.168.0.128 17.155.4.14 TLSv1 Encrypted Alert
69 3.692505 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: connected
70 3.696701 192.168.0.128 17.155.4.14 TCP 50697 > https [FIN, ACK] Seq=1476 Ack=3240 Win=131920 Len=0
71 3.697007 192.168.0.128 208.59.216.10 TCP 50698 > http [FIN, ACK] Seq=229 Ack=3506 Win=131328 Len=0 TSV=469580312 TSER=941715619
72 3.697388 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: 16385
73 3.697617 192.168.0.128 17.155.5.252 UDP Source port: 51136 Destination port: 16386
74 3.809626 199.7.52.190 192.168.0.128 TCP [TCP segment of a reassembled PDU]
75 3.810572 199.7.52.190 192.168.0.128 HTTP HTTP/1.0 200 OK (text/plain)
76 3.881720 192.168.0.128 199.7.52.190 TCP 50699 > http [ACK] Seq=154 Ack=1865 Win=65535 Len=0
77 3.890585 192.168.0.128 199.7.52.190 TCP 50699 > http [FIN, ACK] Seq=154 Ack=1865 Win=65535 Len=0
78 3.952258 208.59.216.10 192.168.0.128 TCP http > 50698 [FIN, ACK] Seq=3506 Ack=230 Win=6432 Len=0 TSV=941717381 TSER=469580312
79 3.954256 192.168.0.128 208.59.216.10 TCP 50698 > http [ACK] Seq=230 Ack=3507 Win=131328 Len=0 TSV=469580314 TSER=941717381
80 4.007781 17.155.4.14 192.168.0.128 TCP https > 50697 [ACK] Seq=3240 Ack=1476 Win=40928 Len=0
81 4.007965 17.155.4.14 192.168.0.128 TCP https > 50697 [FIN, ACK] Seq=3240 Ack=1477 Win=40928 Len=0
82 4.009155 17.155.5.251 192.168.0.128 UDP Source port: 16385 Destination port: 51136
83 4.009170 17.155.5.251 192.168.0.128 UDP Source port: connected Destination port: 51136
84 4.009948 192.168.0.128 17.155.4.14 TCP 50697 > https [FIN, ACK] Seq=1476 Ack=3240 Win=131920 Len=0
85 4.014495 192.168.0.128 17.155.4.14 TCP 50697 > https [ACK] Seq=1477 Ack=3241 Win=131920 Len=0
86 4.019866 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
87 4.023955 17.155.5.252 192.168.0.128 UDP Source port: 16386 Destination port: 51136
88 4.025984 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
89 4.034971 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
90 4.504292 199.7.52.190 192.168.0.128 TCP http > 50699 [ACK] Seq=1865 Ack=155 Win=8190 Len=0
91 4.671800 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: connected
92 4.672167 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: 16385
93 4.672411 192.168.0.128 17.155.5.252 UDP Source port: 51136 Destination port: 16386
94 5.139092 17.155.5.252 192.168.0.128 UDP Source port: 16386 Destination port: 51136
95 5.140068 17.155.5.251 192.168.0.128 UDP Source port: 16385 Destination port: 51136
96 5.140129 17.155.5.251 192.168.0.128 UDP Source port: connected Destination port: 51136
97 5.210011 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
98 5.215809 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
99 5.216068 192.168.0.128 216.164.100.100 UDP Source port: 51136 Destination port: 52585
100 5.715774 192.168.0.128 17.155.5.251 UDP Source port: 51136 Destination port: 16385
101 6.054578 17.155.5.251 192.168.0.128 UDP Source port: 16385 Destination port: 51136
4. After client (iPhone) and server negotiation you start to see STUN requests via the private IPs; after those fail, you see them from the public NAT IP ranges. They succeed via the public peering at that point.
102 8.258196 192.168.0.128 192.168.2.106 STUN2 Binding Request
103 8.286606 192.168.0.128 192.168.2.106 STUN2 Binding Request
104 8.303893 192.168.0.128 72.81.200.200 STUN2 Binding Request
105 8.313353 192.168.0.128 192.168.2.106 STUN2 Binding Request
106 8.313582 72.81.200.200 192.168.0.128 STUN2 Binding Request
107 8.316909 192.168.0.128 72.81.200.200 STUN2 Binding Success Response
108 8.333677 192.168.0.128 72.81.200.200 STUN2 Binding Request
109 8.344419 72.81.200.200 192.168.0.128 STUN2 Binding Request
110 8.350980 192.168.0.128 72.81.200.200 STUN2 Binding Success Response
111 8.360852 192.168.0.128 72.81.200.200 STUN2 Binding Request
112 8.374294 72.81.200.200 192.168.0.128 STUN2 Binding Request
113 8.376750 192.168.0.128 72.81.200.200 STUN2 Binding Success Response
114 8.467002 192.168.0.128 192.168.2.106 STUN2 Binding Request
115 8.496083 192.168.0.128 192.168.2.106 STUN2 Binding Request
116 8.528156 72.81.200.200 192.168.0.128 STUN2 Binding Request
117 8.530139 192.168.0.128 72.81.200.200 STUN2 Binding Request
118 8.530765 192.168.0.128 72.81.200.200 STUN2 Binding Success Response
119 8.553316 72.81.200.200 192.168.0.128 STUN2 Binding Request
120 8.555467 192.168.0.128 72.81.200.200 STUN2 Binding Request
121 8.556032 192.168.0.128 72.81.200.200 STUN2 Binding Success Response
122 8.626234 72.81.200.200 192.168.0.128 STUN2 Binding Success Response
123 8.629896 72.81.200.200 192.168.0.128 STUN2 Binding Success Response
5. A SIP call is then initiated between the phones for the video portion of the call
124 8.730361 192.168.0.128 72.81.200.200 SIP/SDP Request: INVITE sip:user@72.81.200.200:50925, with session description
125 8.748746 72.81.200.200 192.168.0.128 STUN2 Binding Success Response
126 8.771618 192.168.0.128 192.168.2.106 STUN2 Binding Request
127 8.797557 192.168.0.128 192.168.2.106 STUN2 Binding Request
128 8.925571 72.81.200.200 192.168.0.128 STUN2 Binding Success Response
129 8.927723 72.81.200.200 192.168.0.128 STUN2 Binding Success Response
130 9.232700 192.168.0.128 72.81.200.200 SIP/SDP Request: INVITE sip:user@72.81.200.200:50925, with session description
131 9.258562 192.168.0.128 192.168.2.106 STUN2 Binding Request
132 9.262926 72.81.200.200 192.168.0.128 SIP Status: 100 Trying
133 9.268831 72.81.200.200 192.168.0.128 SIP Status: 180 Ringing
134 9.296692 192.168.0.128 192.168.2.106 STUN2 Binding Request
135 9.320586 72.81.200.200 192.168.0.128 SIP/SDP Status: 200 OK, with session description
136 9.326857 192.168.0.128 72.81.200.200 SIP Request: ACK sip:user@72.81.200.200:50925
137 9.334699 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
138 9.688477 72.81.200.200 192.168.0.128 SIP/SDP Status: 200 OK, with session description
139 9.716567 192.168.0.128 72.81.200.200 SIP Request: ACK sip:user@72.81.200.200:50925
140 9.834542 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
141 10.216053 72.81.200.200 192.168.0.128 SIP Status: 200 OK
142 10.230152 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
143 10.442848 72.81.200.200 192.168.0.128 SIP Status: 200 OK
144 10.491689 72.81.200.200 192.168.0.128 SIP Status: 200 OK
145 10.727812 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
146 11.229984 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
147 11.318007 72.81.200.200 192.168.0.128 SIP Status: 200 OK
148 11.367565 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
149 11.618986 72.81.200.200 192.168.0.128 SIP Status: 200 OK
150 11.866691 192.168.0.128 72.81.200.200 SIP Request: MESSAGE sip:user@72.81.200.200:50925
151 11.998932 192.168.0.128 72.81.200.200 UDP Source port: 16402 Destination port: 50925
152 12.035444 72.81.200.200 192.168.0.128 SIP Status: 200 OK
153 12.063916 192.168.0.128 72.81.200.200 UDP Source port: 16402 Destination port: 50925
154 12.129174 192.168.0.128 72.81.200.200 UDP Source port: 16402 Destination port: 50925
155 12.180258 192.168.0.128 72.81.200.200 UDP Source port: 16402 Destination port: 50925
156 12.183416 192.168.0.128 72.81.200.200 UDP Source port: 16402 Destination port: 50925
157 12.187093 72.81.200.200 192.168.0.128 SIP Status: 200 OK
158 12.195043 192.168.0.128 72.81.200.200 UDP Source port: 16402 Destination port: 50925
159 12.200932 72.81.200.200 192.168.0.128 SIP Request: BYE sip:user@192.168.0.128:16402
160 12.206181 192.168.0.128 72.81.200.200 SIP Status: 200 OK
6. So in the end, this is a video call carried over SIP.
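The STUN traffic in the trace follows a fixed message framing. As an illustrative sketch in Python (real clients also append attributes such as USERNAME and MESSAGE-INTEGRITY, which are omitted here), a Binding Request like the ones Wireshark labels "STUN2 Binding Request" is just a 20-byte header:

```python
import secrets
import struct

def stun_binding_request() -> bytes:
    """Build a minimal STUN Binding Request: the fixed 20-byte header, no attributes."""
    msg_type = 0x0001          # Binding Request
    msg_length = 0             # number of attribute bytes following the header
    magic_cookie = 0x2112A442  # fixed value defined by RFC 5389
    transaction_id = secrets.token_bytes(12)
    return struct.pack("!HHI", msg_type, msg_length, magic_cookie) + transaction_id

pkt = stun_binding_request()
assert len(pkt) == 20  # the STUN header is always 20 bytes
```

The server echoes the same transaction ID in its Binding Success Response, which is how the client pairs the requests and responses seen interleaved above.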

Sunday, March 18, 2012

VPNs: IPSec vs. SSL

From , former About.com Guide

In years gone by if a remote office needed to connect with a central computer or network at company headquarters it meant installing dedicated leased lines between the locations. These dedicated leased lines provided relatively fast and secure communications between the sites, but they were very costly. To accommodate mobile users, companies would have to set up dedicated dial-in remote access servers (RAS). The RAS would have a modem, or many modems, and the company would have to have a phone line running to each modem. The mobile users could connect to the network this way, but the speed was painfully slow and made it difficult to do much productive work.
With the advent of the Internet much of that has changed. If a web of servers and network connections already exists, interconnecting computers around the globe, then why should a company spend money and create administrative headaches by implementing dedicated leased lines and dial-in modem banks? Why not just use the Internet?
Well, the first challenge is that you need to be able to choose who gets to see what information. If you simply open up the whole network to the Internet it would be virtually impossible to implement an effective means of keeping unauthorized users from gaining access to the corporate network. Companies spend tons of money to build firewalls and other network security measures aimed specifically at ensuring that nobody from the public Internet can get into the internal network.
How do you reconcile wanting to block the public Internet from accessing the internal network with wanting your remote users to utilize the public Internet as a means of connecting to the internal network? You implement a Virtual Private Network (VPN). A VPN creates a virtual “tunnel” connecting the two endpoints. The traffic within the VPN tunnel is encrypted so that other users of the public Internet cannot readily view intercepted communications.
By implementing a VPN, a company can provide access to the internal private network to clients around the world at any location with access to the public Internet. It erases the administrative and financial headaches associated with a traditional leased line wide-area network (WAN) and allows remote and mobile users to be more productive. Best of all, if properly implemented, it does so without impacting the security and integrity of the computer systems and data on the private company network.
Traditional VPNs rely on IPSec (Internet Protocol Security) to tunnel between the two endpoints. IPSec works at the Network Layer of the OSI model, securing all data that travels between the two endpoints without an association to any specific application. When connected on an IPSec VPN, the client computer is “virtually” a full member of the corporate network, able to see and potentially access the entire network.
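As a concrete illustration of IPSec operating below the application: ESP (RFC 4303), the usual IPSec encapsulation, places a small fixed header of its own in front of each protected packet. A minimal sketch in Python of just that outer header; the encrypted payload, padding, and integrity check value that follow in a real ESP packet are omitted:

```python
import struct

def esp_outer_header(spi: int, seq: int) -> bytes:
    """ESP begins with a 32-bit Security Parameters Index (identifying the
    security association) and a 32-bit sequence number; everything after
    these 8 bytes is protected payload."""
    return struct.pack("!II", spi, seq)

hdr = esp_outer_header(spi=0x1000, seq=1)
assert len(hdr) == 8  # fixed 8-byte ESP header
```

Because this wrapping happens per IP packet, every application's traffic is protected identically, which is exactly why the client appears as a full network member.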

The majority of IPSec VPN solutions require third-party hardware and / or software. In order to access an IPSec VPN, the workstation or device in question must have an IPSec client software application installed. This is both a pro and a con.
The pro is that it provides an extra layer of security: the client machine must not only be running the right VPN client software to connect to your IPSec VPN, but must also have it properly configured. These are additional hurdles that an unauthorized user would have to get over before gaining access to your network.
The con is that it can be a financial burden to maintain the licenses for the client software and a nightmare for tech support to install and configure the client software on all remote machines, especially if they can’t be on site physically to configure the software themselves.
It is this con which is generally touted as one of the largest pros for the rival SSL (Secure Sockets Layer) VPN solutions. SSL is a common protocol and most web browsers have SSL capabilities built in. Therefore almost every computer in the world is already equipped with the necessary “client software” to connect to an SSL VPN.
Another pro of SSL VPNs is that they allow more precise access control. First, they provide tunnels to specific applications rather than to the entire corporate LAN, so users on SSL VPN connections can only access the applications that they are configured to access rather than the whole network. Second, it is easier to provide different access rights to different users and to exercise more granular control over user access.
A con of SSL VPNs, though, is that you are accessing the application(s) through a web browser, which means that they really only work for web-based applications. It is possible to web-enable other applications so that they can be accessed through SSL VPNs; however, doing so adds to the complexity of the solution and eliminates some of the pros.
Having direct access only to the web-enabled SSL applications also means that users don’t have access to network resources such as printers or centralized storage and are unable to use the VPN for file sharing or file backups.
SSL VPNs have been gaining in prevalence and popularity; however, they are not the right solution for every instance. Likewise, IPSec VPNs are not suited to every instance either. Vendors are continuing to develop ways to expand the functionality of the SSL VPN, and it is a technology you should watch closely if you are in the market for a secure remote networking solution. For now, it is important to carefully consider the needs of your remote users and weigh the pros and cons of each solution to determine what works best for you.

Wednesday, February 15, 2012

ATM versus Ethernet

May 18th, 1999

Tomi Mickelsson
Department of Electrical and Communications Engineering
Helsinki University of Technology
tmickels@cc.hut.fi

2. ATM overview

ATM is a cell-switching and multiplexing technology which combines the benefits of circuit-switching and packet-switching. Circuit-switching is the basis of the traditional telephone network, a technology that provides guaranteed capacity and constant transmission delay. On the other hand, packet-switching technology provides flexibility and efficient utilization of the total network bandwidth. Cell-switching sits between these two and provides networks with low latency and high throughput. In addition, the simplicity of cell-switching makes it possible to implement switching in hardware, which means high speed. Current transfer rates of ATM are 25, 155 and 622 Mbps. ATM transmits data in fixed-size units called cells. Each cell carries 53 bytes of data, of which 48 bytes are dedicated to the payload and 5 bytes to the header. The small, constant size of the cell allows ATM to transmit real-time data like voice and video over the network. Real-time data is intolerant of transmission delays, and the small size means a small delay between cells. Unfortunately, the small size also means a larger overhead of header data.
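The slicing of a data stream into fixed 48-byte payloads can be sketched in a few lines of Python. This toy example simply pads the final partial cell with zeros; a real ATM adaptation layer such as AAL5 adds its own trailer and length field, which are omitted here:

```python
CELL_SIZE = 53
HEADER = 5
PAYLOAD = CELL_SIZE - HEADER  # 48 bytes of payload per cell

def to_cells(data: bytes) -> list:
    """Slice a byte stream into 48-byte ATM payloads, zero-padding the last one."""
    cells = []
    for i in range(0, len(data), PAYLOAD):
        chunk = data[i:i + PAYLOAD]
        chunk += b"\x00" * (PAYLOAD - len(chunk))  # pad the final partial cell
        cells.append(chunk)
    return cells

cells = to_cells(b"x" * 100)   # 100 bytes -> 3 cells (48 + 48 + 4 padded)
assert len(cells) == 3
assert all(len(c) == PAYLOAD for c in cells)
```

The fixed 5-byte header per 53-byte cell is where the 9.4% header overhead discussed later comes from.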
Asynchronity in the name implies that bandwidth is reserved on demand. In contrast to synchronous transmission, of which TDM is a good example, bandwidth is not reserved if the station has nothing to transmit. Data can be transmitted asynchronously, upon demand, and ATM switch multiplexes the cells statistically to the transmission network.
Like the traditional telephone network, ATM is fundamentally connection-oriented. A virtual circuit must be set up before communication between two hosts can take place. ATM connections are established for the duration of a call, using Virtual Channel Identifiers (VCI) and Virtual Path Identifiers (VPI). These identifiers are always local to the switching node, and they are assigned during connection setup. Cells carry these identifiers in their headers as they are transmitted through the virtual channel. Cells flow along the same path, preserving their order. Upon connection close, the established identifiers at each node are released.
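The per-node nature of the identifiers can be illustrated with a toy switching table (the entries below are hypothetical, as if installed during connection setup): each switch looks up the incoming (port, VPI, VCI) and rewrites the identifiers for the next hop.

```python
# Hypothetical forwarding entries, as installed during connection setup.
table = {
    (1, 0, 32): (3, 0, 44),
    (2, 1, 50): (3, 1, 32),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Look up the cell's locally-significant identifiers and rewrite them."""
    out_port, new_vpi, new_vci = table[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci

# A cell arriving on port 1 with VPI/VCI 0/32 leaves on port 3 as 0/44.
assert switch_cell(1, 0, 32) == (3, 0, 44)
```

Because every cell of a connection hits the same table entries, cells follow the same path and arrive in order, as the text notes.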
Quality of service is a major selling point of ATM. ATM implements four different types of service levels that provide quality of service to different data types, from constant bit rate to unspecified bit rate.
An ATM network consists of ATM end-systems (such as hosts and routers) and ATM switches. ATM switches are responsible for routing and transmitting cells through the network. There are two interfaces to an ATM switch: the user-network interface (UNI) between the end-systems and the switch, and the network-node interface (NNI) between the switches. Furthermore, these interfaces can be categorized as public or private depending on the type of the ATM network. Private interfaces, used inside private ATM networks, are usually proprietary and don't conform to standards, which the public interfaces have to do.



Figure 1: ATM network topology [1]

3. Ethernet overview

Ethernet is the most popular LAN technology today. Approximately 80% of all LAN installations deploy Ethernet. This represents over 120 million hosts [10]. Ethernet is well understood, easy to install, cheap and the technology with the best support from manufacturers. The basic transfer rate of Ethernet is 10 Mbps. Traditional Ethernet is a shared access technology, based on a broadcast medium. All of the stations attached to the Ethernet share a single communications medium, a coaxial or a twisted pair cable. The broadcast nature of Ethernet is very different from the peer-to-peer networking of ATM.
Ethernet comes in two topologies: bus and star. The bus, utilizing coaxial cable, was the original topology that is rarely used anymore, due to difficulties of adding or moving users and troubleshooting. Today, by far the most common topology is a star, which uses twisted pair cables, shielded or unshielded. In a star topology all stations are wired to a central wiring concentrator which has a port for each station. The concentrator can be a hub or a switch, which is more common nowadays.
A hub is a "dumb" physical layer device that broadcasts signals from the source port to all other ports. A switch is a smarter datalink layer device that forwards frames from the source port to the destination port only. This decreases the number of collisions in the whole network. In short, switching is a technique that divides a LAN into several smaller network segments, or collision domains, providing full bandwidth to each segment and diminishing the overall network congestion. Switching is a very popular and easy way to add capacity to Ethernet. To get a switched Ethernet, only the hub needs to be replaced with a switch. The equipment and software in the Ethernet hosts remain unaffected. Most current Ethernet networks are based on switching.
Stations access the shared medium of Ethernet using an access scheme called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). It's a democratic scheme giving all stations an equal ability to transmit data. Before transmitting data, a station listens to the medium and starts transmitting only if the medium is free. A collision can happen if two stations start transmitting at the same time. When a collision is detected, all transmissions are damaged and stations stop transmitting. A station restarts transmission after a partly random period of time, determined by a backoff algorithm, and again only if the medium is free. CSMA/CD is a simple and viable access technology.
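The backoff algorithm mentioned above is truncated binary exponential backoff: after the n-th consecutive collision, a station waits a random number of slot times drawn from [0, 2^k - 1], where k = min(n, 10). A small sketch in Python:

```python
import random

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential backoff: pick a random slot count
    in [0, 2**k - 1], where k = min(collisions, 10)."""
    k = min(collisions, 10)
    return random.randrange(2 ** k)

# After the third consecutive collision a station waits 0..7 slot times.
assert 0 <= backoff_slots(3) <= 7
```

(For 10 Mbps Ethernet a slot time is 51.2 microseconds; after 16 failed attempts the station gives up and reports an error.)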
Ethernet uses variable-length packets called frames to carry data. The size of the frame is from 72 to 1526 bytes, of which 26 bytes is dedicated to the header [4].
Ethernet has been extended twice over the years to provide more bandwidth and to better compete with other broadband technologies. Fast Ethernet transports data at 100 Mbps and Gigabit Ethernet at a staggering 1000 Mbps. These extensions leverage the familiar Ethernet technology while retaining the CSMA/CD scheme of the original 10 Mbps Ethernet.



Figure 2: Ethernet network topologies [15]

4. Technology comparison

4.1 General

ATM is a complex technology. There are tons of standards covering various aspects of ATM [2]. One can imagine that the complexity is largely due to ATM's nature of trying to be a one-solution-fits-all technology, from LAN to WAN, for all data types. The connection-orientation also largely contributes to the overall complexity, because it requires the existence of specific signalling and routing protocols. The complexity is driven by the powerful capabilities in ATM, but not everyone needs them. While there are many standards, some parts of the technology had to wait a long time for standardization, and this slowed down the adoption of the technology. The PNNI interface was not standardized until 1996, and since it is a major interface between public ATM networks, interoperability between public networks was not possible until then. Interoperability is still an issue.
In contrast, the beauty of Ethernet is its simplicity. Only a few standards cover the whole technology. The technology is easy to understand and deploy. This is the primary reason for the popularity and wide adoption of Ethernet.
A great benefit for ATM is that it is independent of the underlying transport mechanism. ATM does not define a Media Access Control (MAC) mechanism (the lower part of the datalink layer, the other being LLC) or the physical layer, whereas Ethernet does define these. As a consequence, ATM can run on top of different transport mechanisms and can adapt to new transport technologies and greater speeds. Without physical independence, the final goal of ATM running everywhere would simply be impossible.
The original Ethernet technology has been available since 1976 [14]. ATM technology has been utilized since 1994, the USA and Finland being the first countries [12]. By looking at these years, we can say that Ethernet technology is more mature than ATM. However, the comparison nowadays is done between Gigabit Ethernet and ATM, and Gigabit Ethernet is surely the less mature of the two. The Gigabit Ethernet Alliance was formed in 1996 to develop and standardize the technology. On the other hand, standardization has been speedier for Ethernet, and people speaking for Gigabit Ethernet say that, building on the original well-proven principles, Gigabit Ethernet should pose few technical problems.
Only a few applications have been developed for ATM. In addition to lacking native applications taking true advantage of ATM, there are few experts on ATM in the field.

4.2 Bandwidth

ATM offered massive bandwidth at the time it was introduced. Speeds of 155 Mbps and an awesome 622 Mbps attracted people to ATM. It was expected that once the need for greater bandwidths arose, Ethernet LANs would get replaced with ATM networks. The speed of 25 Mbps was proposed as the bandwidth to the desktop, but in practice it was not worth going from 10 Mbps Ethernet to 25 Mbps ATM, mostly for economic reasons. After two upgrades, Ethernet fights back. ATM can no longer compete in pure speed. Routing and switching technology has improved, and ATM alone can't take advantage of simple and fast hardware switching. The new gigabit speeds put a burden on the back-end servers, and server processing speed is becoming the bottleneck rather than the network.
An important point to make is that the actual bandwidth available for the payload is always smaller than the full transmission speed of the medium. This is because the protocols and their headers eat up some of the total bandwidth, and there are usually a couple of layers of protocols.

4.3 Scalability

One important benefit of ATM is that it can be used as both a LAN and a WAN technology. The original idea of ATM was the concept of spanning LAN/MAN/WAN, utilizing the same protocol over the entire network and eliminating the requirement for routers and gateways. This vision has not materialized. Instead, the WAN has proven to be ATM's strong suit. "Despite the talk about Gigabit Ethernet displacing it, ATM continues to be a great fit in the WAN" [13]. The original Ethernet was purely a LAN technology, but Fast Ethernet took Ethernet to company backbones as well, and Gigabit Ethernet is striving for even bigger backbones, over campus areas. There has been criticism of Ethernet as a technology that cannot scale, but its underlying transmission scheme continues to be one of the principal means of transporting data for contemporary campus applications.
Although Ethernet has not been considered to span a WAN, research is being done to make it a reality. Ethernet is stepping on ATM's toes in the WAN too.

4.4 Overhead

One could argue that ATM is not optimized for any application. The technology holds a compromise. At the beginning of the standardization, the size of the cell generated heated discussion between the USA and Europe. Europe wanted 32 bytes of payload and the USA wanted 64 bytes. A compromise was agreed at 48 bytes for the payload [18]. The overhead from a 5-byte header in a total cell size of 53 bytes is 9.4%. In Ethernet, the overhead is minimal, as the frame size can be 1526 bytes, of which the header is 26 bytes. However, one should remember that this doesn't tell the whole truth: Ethernet frames carry other protocol packets in them, and these packets have headers too. And Ethernet alone can't move data over a WAN, so a direct comparison is not fully justified.
Another aspect creating overhead is the connection-orientation of ATM. A virtual circuit must be set up end-to-end before communication can take place. For exchanges of small amounts of data, the connection setup can take considerably more time than the actual data exchange. A good example is the connectionless DNS protocol of TCP/IP networks. DNS messages mainly consist of a single UDP packet, because that is good enough and a TCP connection would take far too much time to set up and would produce extra congestion on the network.
In today's world, ATM is mostly used to transmit TCP/IP packets. The larger TCP/IP packets must be sliced into smaller ATM cells, which increases the overhead. Total overhead on ATM backbones typically comes in between 15% and 25%. On a 155 Mbps circuit, effective throughput can drop to 116 Mbps. That's 39 Mbps down the drain. [8]
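The overhead figures above are easy to check with a little arithmetic (the 25% case is the upper end quoted for ATM backbones):

```python
# Per-cell ATM header overhead quoted in the text.
atm_cell, atm_header = 53, 5
atm_overhead = atm_header / atm_cell            # ~9.4% per cell
assert round(atm_overhead * 100, 1) == 9.4

# Ethernet per-frame overhead at the maximum frame size quoted in the text.
eth_frame, eth_header = 1526, 26
eth_overhead = eth_header / eth_frame           # ~1.7% per maximum-size frame

# The WAN example: 25% total overhead on a 155 Mbps circuit.
effective_mbps = 155 * (1 - 0.25)
assert round(effective_mbps) == 116             # ~116 Mbps of useful throughput
```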

4.5 Interoperability

Different Ethernet versions work well together. Gigabit Ethernet technology is based on the original technology: CSMA/CD media access with the original protocol frame format. The upgrade path is relatively straightforward. Upgrades can be performed in segments, spreading the costs over a period. And what's important: the old LAN applications will operate unchanged. The cabling needs updates, though. Gigabit Ethernet practically needs fiber-optic cables; unshielded twisted-pair copper runs only up to 25 meters. Since Ethernet technology occupies the lowest two layers of the ISO protocol stack, layer 3 protocols such as IP run happily over Ethernet. Ethernet does not get in the way.
Because large portions of all networking applications have been built for traditional LANs, ATM has had a hard time making it to the desktop. Applications would have to be changed for ATM. ATM LANE, LAN Emulation, was created to accelerate adoption of ATM. In essence, LANE allows ATM technology to be used in traditional LANs without any change in the applications at the workstations. LANE allows an easy migration from Ethernet to ATM, but it also means that sophisticated ATM features are not exploited.
The goal of MPOA, Multiprotocol Over ATM, is to make existing LANs and their protocols interoperate with an ATM backbone. MPOA will have a greater role than LANE since ATM has not replaced existing LANs, and interoperability with legacy networks is the issue. MPOA will gradually replace LANE.

4.6 Management

When it comes to installation and configuration, Ethernet beats ATM. Ethernet is well understood and nearly plug and play. ATM network configuration is rather difficult, with many arcane parameters at the switch and the workstation. ATM takes time to install and requires a bit of expertise. This in turn directly affects the total costs of the technology. ATM network management is more difficult than that of Ethernet LANs, due to the many parameters of ATM networks and interoperability issues. An essential role is played by the Interim Local Management Interface (ILMI), which uses SNMP across the UNI and NNI to access status and configuration information within each network node. ILMI, and ATM networking in general, is still evolving.

4.7 Price

The Ethernet family holds a clear economic advantage over ATM. This is true for both network interface cards and for the network infrastructure equipment. A transition to an ATM LAN costs more than an Ethernet upgrade. The price is always an important business decision factor. Very few administrators will use a more expensive technology unless they get actual benefit from it. When deciding between ATM and Ethernet, professor Raj Jain of Ohio State University says that buyers face the old-house-versus-new-house dilemma. Fixing the old house is cheaper initially, but a new house, while more expensive, might pay back in the long run [11]. The question remains to be answered.

5. Quality of service comparison

5.1 ATM

ATM has established quality of service standards. This is the key strength of ATM. Currently ATM is the best technology for transmitting voice, video and computer data over a single line. ATM offers a choice of four different types of service:
  • Constant Bit Rate, CBR, provides a fixed and steady bit rate for real-time data. Analogous to a circuit-switched line. This is the simplest service level.
  • Variable Bit Rate, VBR, provides a service for real-time and bursty data where the bit rate varies. This class has been lately divided into real-time and non-real-time.
  • Unspecified Bit Rate, UBR, does not guarantee any bit rate. Used for data that can tolerate delays, such as traditional computer data. This can be seen as an interpretation of the common term "best effort service".
  • Available Bit Rate, ABR, is a service for applications that can negotiate the bit rate during the transfer. The available bit rate varies in the network and the applications must adjust to different bit rates based on feedback from the network.
While it seems that ATM can fill every need for quality of service, there are problems. The mechanisms for achieving quality of service are complex. In order to have quality of service from end to end, the ATM switches must implement quality-based routing over the PNNI interface. Since PNNI lacked a standard for a long time, and is a complex entity, there are still problems in getting quality of service features to work between equipment from different vendors. Besides the fact that operators rarely support all classes of service [7], users have found it difficult to specify a particular quality of service. The service types are associated with a wide range of parameters to choose from, like cell transfer delay, peak cell rate, cell loss rate and cell delay variation tolerance. What happens is that users end up ordering ATM connections as leased lines, rather than as ATM services [7]. The technology is not being utilized to its maximum.
Despite the few problems and high cost, currently ATM offers the best way to implement quality of service over a WAN. A company private ATM network should integrate reasonably well with a public ATM network in the WAN and provide good quality of service for those who need it now and can carry the costs.

5.2 Ethernet

There is active debate in the networking community about Ethernet and its quality of service. The current family of Ethernet does not provide explicit quality of service capabilities. Ethernet LANs have usually provided enough bandwidth to make quality of service unnecessary. Some say that increasing the bandwidth of Ethernet is the same as adding quality of service. "If you give them a fatter pipe, you've pretty much solved their problem", says one network vendor [13]. While this may be true for smaller LANs, quality of service starts to matter when networks get larger and interoperability with other networks comes into consideration. Quality of service mechanisms for Ethernet are on the way. Current implementations include policy servers, tag switching, intelligent queuing, and various IP-based implementations and tools. The problem with current implementations is that they are not standards-based; each vendor has its own solution and their devices will not interoperate. There is a clear need for standards, or Ethernet can end up in a chaos of nonintegrating proprietary networks.
An attempt that is expected to bring a quality of service standard to Ethernet is the pair of standards 802.1q and 802.1p, proposed by the IEEE. They both operate at layer 2.
802.1q is a standard for providing Virtual LAN identification and quality of service levels. A Virtual LAN is a logical subgroup within a LAN whose purpose is to isolate traffic within the subgroup. 802.1q uses 3 bits to allow eight priority levels and 12 to identify up to 4096 VLANs. 802.1p allows switches to reorder packets based on the priority level. 802.1p also defines means for stations to request a membership in a multicast domain and map it to a VLAN.
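The bit layout of the 802.1q Tag Control Information field can be sketched directly: 3 priority bits, 1 CFI bit, and 12 VLAN-ID bits packed into 16 bits.

```python
def make_tci(priority: int, vlan_id: int, cfi: int = 0) -> int:
    """Pack the 16-bit 802.1q TCI: 3-bit priority, 1-bit CFI, 12-bit VLAN ID."""
    assert 0 <= priority < 8 and 0 <= vlan_id < 4096 and cfi in (0, 1)
    return (priority << 13) | (cfi << 12) | vlan_id

tci = make_tci(priority=5, vlan_id=100)
assert tci >> 13 == 5          # the 3 priority bits (eight levels)
assert tci & 0x0FFF == 100     # the 12 VLAN-ID bits (up to 4096 VLANs)
```

A 802.1p-capable switch reads the top 3 bits of this field to reorder frames, while the low 12 bits isolate traffic into its VLAN.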
IEEE 802.1 proposals bring support for RSVP, which is a protocol to request and provide quality of service network connections at layer 3. RSVP is dependent on layer 2 to provide the quality of service over a data link. Since Ethernet has not been able to provide quality of service, RSVP has been run over unprioritized Ethernet links, which is clearly not the desired scenario. Taking advantage of IEEE 802.1, RSVP support can be achieved by mapping RSVP sessions into 802.1p priority levels. Whether RSVP will succeed or not is another matter.
A key question for Ethernet is how to be able to provide quality of service across a WAN and over heterogeneous network infrastructures. Ethernet alone cannot be used to transmit data from point A to point B anywhere in the world. In the middle, there lies a WAN, and before end to end quality of service is achieved, the quality of service requirements must be mapped and transmitted from one transport technology to another. Before we get there, technologies need to evolve.

6. Future network technology

6.1 IP

IP is in a strong position to become the basis of future networking technology. Today, IP is the de facto standard for transmitting data over the Internet. The family of TCP/IP protocols is becoming increasingly successful. One of the elements contributing to the success of IP is that it is completely independent of the underlying network technology. IP can operate over heterogeneous network infrastructures. On the other side of the coin, the greatest weakness of IP is the lack of quality of service. IP provides only best-effort delivery. During the early years of ATM, it was seen as the unified choice for virtually all networking. IP was not a big player at the time and certainly not a threat to ATM. Now things have changed. IP has become the dominant networking technology, and we face the issue of how these two mesh together. The natures of the two collide with each other: IP is a connectionless protocol while ATM is fundamentally connection-oriented. IP is a packet-routing technology routing variable-length packets with no delay guarantees, while ATM is a cell-switching technology with strict quality of service.
IP and ATM do not mix well together. Integrating IP and ATM to operate efficiently together is a challenging task. Today, IP is transmitted on top of ATM without taking advantage of ATM's features. There is overhead in encapsulation, routing, and segmenting and reassembling packets, and ATM's quality of service is left unexploited. The strengths of ATM are not being used. ATM could be the technology that provides quality of service to IP networking, creating a happy marriage of the two. On the other hand, some argue that ATM is not needed in the future TCP/IP network infrastructure [16].
Ethernet and IP operate well together. Ethernet is a packet-oriented, connectionless service just like IP. Since they operate at different layers, there is little conflict between the two technologies. Given the strong future of IP, it is easy to believe that the winning network technology will be the one that is the simplest, fastest, cheapest, and easiest to use with IP. Ethernet fits well into this picture.

6.2 LAN technology

It is expected that Ethernet will remain the most popular LAN technology and the most widely deployed access network in companies for the foreseeable future. Ethernet has a huge installed base, and the technology is simple and well understood. For those who want extra bandwidth, Ethernet provides switching and easy upgrade paths to its newer incarnations, Fast Ethernet and Gigabit Ethernet. Quality of service is being standardized. Ethernet is still a hot technology. Taking ATM to the LAN and the desktop would require quite a large infrastructure change, which does not come free: equipment carries a high price, and interoperability issues with legacy LAN applications must be taken into account. While it is true that ATM should reach all the way to the desktop to realize its full benefit, Ethernet with good-enough quality of service mechanisms can eliminate the need for ATM in the LAN.

6.3 Backbone technology

ATM is widely employed in Internet WAN backbones; almost all Internet operators run and offer ATM services. Ethernet, in turn, has grown from a pure LAN technology into backbones as well. The question from ATM's point of view is how far towards the LAN ATM can infiltrate, while from Ethernet's point of view it is how far towards the backbone Ethernet can infiltrate. ATM was designed to provide broadband networking and to run efficiently from LAN to WAN. But ATM's problem is IP. There is a large overhead with IP over ATM, and alternatives to ATM as the transport technology are being researched. One such technology is Packet over SONET, also called IP over SONET. The European counterpart to SONET is SDH.
On the bottom layer of a contemporary Internet backbone there is SONET or SDH, which are layer-1 specifications for data transmission over optical fibers in the public network. ATM runs on top of SDH. In IP over SDH, ATM is eliminated from the transmission picture altogether, and IP packets are transmitted directly on top of SDH frames through the use of the Point-to-Point Protocol (PPP). As a result, the bandwidth is utilized more efficiently: IP over SDH can provide as much as 25% to 30% higher throughput than ATM [5]. IP over SDH is ideal for transmitting IP. And when we talk about IP packets, one does not have to stretch one's mind far to start thinking about transferring Ethernet frames on top of SDH. Pretty nice picture.
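The throughput gap comes largely from ATM's fixed 53-byte cells: every 48 bytes of payload carries a 5-byte header, and AAL5 adds a trailer plus padding in the last cell, while PPP framing over SDH adds only a few bytes per packet. A back-of-the-envelope sketch (the 9-byte PPP framing overhead is an assumption, and SDH path overhead is ignored on both sides):

```python
import math

ATM_CELL, ATM_PAYLOAD = 53, 48   # bytes per cell / payload bytes per cell
AAL5_TRAILER = 8                 # AAL5 trailer carried in the last cell
PPP_OVERHEAD = 9                 # assumed PPP/HDLC framing bytes per packet

def atm_efficiency(ip_len: int) -> float:
    """Fraction of line-rate bytes that are IP payload, over ATM/AAL5."""
    cells = math.ceil((ip_len + AAL5_TRAILER) / ATM_PAYLOAD)
    return ip_len / (cells * ATM_CELL)

def sdh_efficiency(ip_len: int) -> float:
    """Same fraction for IP carried directly in PPP over SDH."""
    return ip_len / (ip_len + PPP_OVERHEAD)

for size in (64, 576, 1500):  # small, typical, and maximum Ethernet-sized packets
    print(size, round(atm_efficiency(size), 3), round(sdh_efficiency(size), 3))
```

For a 1500-byte packet the gain is roughly 12%, but for small packets the last-cell padding hurts ATM badly, so the advantage over a realistic packet-size mix can plausibly reach the 25-30% range quoted above.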
But this is not the whole truth. The migration will not be that easy. By eliminating ATM, we lose the management, routing and other features of ATM. The management infrastructure required for SDH is completely different from ATM's. IP over SDH is best suited for high-volume point-to-point configurations; in a more complex and hierarchical network, it runs into trouble.
Looking even further into the future, some have questioned how SDH fits into future broadband networks [6, 9]. WDM, Wavelength Division Multiplexing, is an emerging technology that allows multiple optical signals to be transported over a single fiber, providing massive bandwidth for the 21st century. Each signal can carry a different channel, an SDH channel, for example. But instead of using SDH frames in the channel, one could use Ethernet frames.
Recently Siemens conducted a pilot project in which it transmitted Ethernet frames at a speed of 1 Gbps over a full-duplex optical fiber with WDM [17]. The link distance was 1570 kilometers, a current world record. With such advances in technology, the battle between ATM and Ethernet is clearly extending from the LAN to the WAN as well.

7. Conclusions

The competition between ATM and Ethernet will most likely continue many years into the next millennium. ATM has a well-established position in Internet backbones, and LANs are dominated by Ethernet. Neither is disappearing any time soon; both technologies will co-exist for some time to come. There are huge infrastructure investments on both sides, and no matter how superior any one technology is, migration from one technology to another always takes time. The sheer number of installations, the price, and the ease of use reveal that Ethernet is the dominant technology in the LAN, and ATM can hardly change this. In the backbones, the competition seems harder: Ethernet is not yet a real threat, but there are experimental transport technologies that ATM has to face.
Quality of service will have its effect on this development. There is a clear trend that the Internet is increasingly being used to run real-time communication services such as voice over IP, so quality of service will matter in the future. ATM already has quality of service, though not fully implemented, which puts it ahead in this sector. Ethernet does not have quality of service, but development is active. While Ethernet may not achieve the state-of-the-art quality of service of ATM, it may well provide good-enough mechanisms to satisfy most needs.
The unifying factor in future networking seems to be IP rather than ATM. Marrying these two technologies could provide an answer, but due to their fundamental differences, other alternatives are being sought as well. Running IP directly over backbones is a viable technology, but in its current state it is somewhat limited and cannot provide as comprehensive a solution as ATM. On the whole, it makes sense to believe that the winning technology will be the one that integrates most efficiently with IP.

How is MPLS different from SONET and ATM networks?

Michael Brandenburg, Technical Editor

Synchronous Optical Network (SONET) is the fiber-optic standard focused on the physical layer of the OSI model. SONET and its international equivalent, Synchronous Digital Hierarchy (SDH), originally defined how voice traffic would be carried across the carrier-built fiber-optic networks deployed throughout the world. As a Layer-1, physical-level protocol, SONET makes link connections along and between these fiber networks. SONET evolved over time to allow data services -- such as frame relay, T1, and OC-3 -- to connect over the fiber links. Because SONET was originally designed for voice rather than variable-sized data packets, however, moving data across it was inefficient and required padding packets with filler data to make up the difference. Asynchronous Transfer Mode (ATM) was introduced as a solution to this inefficiency: through hardware network interface adapters, ATM networks break data into smaller cells for transport. ATM over SONET also makes home and business ISDN (Integrated Services Digital Network) data services possible.
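The cell segmentation described above can be sketched in a few lines: a packet is cut into fixed 48-byte cell payloads, and the last payload is padded out to the full cell size. This is a simplified illustration that ignores the AAL5 trailer and the 5-byte header prepended to each cell:

```python
CELL_PAYLOAD = 48  # payload bytes per ATM cell (the 5-byte header is omitted here)

def segment(packet: bytes) -> list:
    """Split a packet into fixed 48-byte cell payloads, zero-padding the last."""
    cells = [packet[i:i + CELL_PAYLOAD] for i in range(0, len(packet), CELL_PAYLOAD)]
    if cells and len(cells[-1]) < CELL_PAYLOAD:
        # The final partial payload is padded to the fixed cell size;
        # this padding is the per-packet waste discussed in the text.
        cells[-1] = cells[-1].ljust(CELL_PAYLOAD, b"\x00")
    return cells

cells = segment(b"x" * 100)
print(len(cells), len(cells[-1]))  # -> 3 48
```

A 100-byte packet thus occupies three full cells, with 44 bytes of the last cell being padding.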

In contrast, ATM and Multiprotocol Label Switching (MPLS) are data transport protocols, meaning that both reside above the physical layers of the OSI model and move data from one point to another. The primary difference between ATM and MPLS is that while ATM was designed for a circuit-switched environment, MPLS has its place within modern packet-switched networks such as Ethernet or IP. This difference is most apparent in how the two types of network topologies are deployed. ATM is primarily designed as a point-to-point connection, requiring an ATM adapter on each end of a physical or virtual circuit. MPLS, on the other hand, operates like an Ethernet switch in an any-to-any topology, allowing each network endpoint to connect to the MPLS network and join a particular customer's virtual mesh. For ATM to replicate this level of meshing, multiple ATM connections would have to be installed at each of an organization's locations. The multiprotocol nature of MPLS also enables it to label and carry other protocols, including ATM, across an MPLS network. Two ATM endpoints, for example, could be connected across an MPLS network, with the network transparently forwarding traffic between them.
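The label-switching behavior can be sketched as a simple table lookup: each router swaps the incoming label for an outgoing one and forwards the packet on the configured interface, with no per-packet IP route computation. The interface names and label values below are purely illustrative:

```python
# Toy label forwarding table: (incoming interface, incoming label) ->
# (outgoing interface, outgoing label). Entries are illustrative only;
# in a real router this table is populated by a label distribution protocol.

LFIB = {
    ("eth0", 100): ("eth1", 200),
    ("eth1", 200): ("eth2", 300),
}

def forward(iface: str, label: int) -> tuple:
    """Swap the label and pick the next hop based only on the table entry."""
    out_iface, out_label = LFIB[(iface, label)]
    return (out_iface, out_label)

print(forward("eth0", 100))  # -> ('eth1', 200)
```

Because forwarding depends only on the label, the same mechanism can carry any labeled payload, which is what lets an MPLS core transparently bridge two ATM endpoints as described above.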