Understanding the OSI model and its layers

The OSI model is part of the Open Systems Interconnection project at the International Organisation for Standardisation (ISO).

The OSI model characterises and standardises the communication functions of a telecommunication or computing system, regardless of its underlying architecture.

It provides interoperability and makes network troubleshooting easier.
The OSI model is not a protocol; it is a network architecture that is flexible, robust and interoperable.

It is divided into seven different layers:

  • Physical Layer
  • Data link Layer
  • Network Layer
  • Transport Layer
  • Session Layer
  • Presentation Layer
  • Application Layer

Let's look at each layer, one by one.

Layer 7: Application layer

  • The application layer is the layer closest to the end user; both the user and the application layer interact directly with the software application.
  • It acts as an interface between the application and the underlying network.
  • Application-layer functions include identifying communication partners, determining resource availability, and synchronising communication.
  • Protocols in this layer are BGP, DHCP, DNS, FTP, HTTP, IMAP, LDAP, MGCP, MQTT, NNTP, NTP, POP, ONC/RPC, RTP, RTSP, RIP, SIP, SMTP, SNMP, SSH, Telnet, TLS/SSL, XMPP (a sketch of one of these, HTTP, follows this list).
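
To make this concrete, here is a minimal sketch of an application-layer conversation: issuing an HTTP/1.1 request over a plain TCP socket in Python. The host name is just a placeholder, and real clients would normally use an HTTP library instead.

```python
# A minimal sketch of an application-layer exchange: speaking HTTP/1.1
# over a raw TCP socket. "example.com" is a placeholder host.
import socket

HOST = "example.com"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))    # the application-layer message
    response = b""
    while chunk := sock.recv(4096):          # read until the server closes
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```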

Layer 6: Presentation layer

  • The presentation layer handles the delivery and formatting of information; it presents data for the application or the network.
  • It converts data into a standard format.
  • It performs functions such as encryption, decryption, data compression, data decompression and character encoding (see the sketch after this list).
  • Protocols used are eXternal Data Representation (XDR), Network Data Representation (NDR), X.25 Packet Assembler/Disassembler Protocol (PAD), Lightweight Presentation Protocol (LPP) and Internet Key Exchange (IKE).
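
As a rough illustration of two of these duties, the sketch below performs character encoding and data compression with Python's standard library; the receiving side simply reverses the steps.

```python
# A small sketch of two presentation-layer duties: character encoding
# and data compression, using only the standard library.
import zlib

# A repetitive payload, so the compression step visibly helps.
text = "status=OK;" * 50

encoded = text.encode("utf-8")        # character encoding: str -> bytes
compressed = zlib.compress(encoded)   # data compression

# The receiver reverses both steps: decompress, then decode.
decoded = zlib.decompress(compressed).decode("utf-8")

assert decoded == text
print(len(encoded), "bytes encoded ->", len(compressed), "bytes compressed")
```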

Layer 5: Session layer

  • Its main function is to manage communication sessions – the continuous exchange of information between end systems.
  • It provides the mechanism for opening, closing and managing a session between end-user application processes.
  • It creates a session ID for each application and keeps each application's data separate (a toy sketch follows this list).
  • In case of a connection loss, the session layer tries to recover the connection.
  • If a connection is not used for a long duration, the session layer may close and later re-open it.
  • It provides either full duplex or half-duplex operation and provides synchronisation points in the messages exchanged.
  • Protocols used are ADSP, ASP, H.245, ISO-SP, iSNS, L2F, L2TP, NetBIOS, PAP, PPTP, RPC, RTCP, SMPP, SCP, SOCKS, ZIP, SDP, SIP.
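
The following toy sketch mimics that bookkeeping in plain Python: a session ID per application, separate data per session, and closing of idle sessions. Every name in it is illustrative; it is not a real session-layer protocol.

```python
# A toy, in-memory sketch of session-layer bookkeeping.
import time
import uuid

IDLE_LIMIT = 30.0  # seconds; a hypothetical idle timeout

# session_id -> {"app": name, "data": messages, "last_used": timestamp}
sessions = {}

def open_session(app_name):
    session_id = uuid.uuid4().hex            # unique ID per application
    sessions[session_id] = {"app": app_name, "data": [], "last_used": time.time()}
    return session_id

def send(session_id, message):
    session = sessions[session_id]
    session["data"].append(message)          # data kept separate per session
    session["last_used"] = time.time()

def reap_idle():
    now = time.time()
    for sid in [s for s, v in sessions.items() if now - v["last_used"] > IDLE_LIMIT]:
        del sessions[sid]                    # close sessions idle too long

mail = open_session("mail-client")
send(mail, "first message")
reap_idle()
```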

Layer 4: Transport layer

  • The transport layer provides functions and procedures for transferring data to the destination.
  • It is the bridge between the software-oriented layers above and the hardware-oriented layers below – often called the heart of the OSI model.
  • The transport layer controls the reliability of a link through flow control, segmentation/de-segmentation and error control.
  • This layer can keep track of the segments and re-transmit those that fail delivery.
  • The transport layer also provides an acknowledgement on successful transmission of data and sends the next data if no errors occurred.
  • It divides the data into smaller units called segments (see the sketch after this list).
  • It is responsible for process-to-process delivery.
  • Protocols used are TCP and UDP.
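
Here is a toy sketch, under simplified assumptions, of two of those jobs: cutting a byte stream into numbered segments and reassembling them at the destination even when they arrive out of order. Real TCP adds acknowledgements, retransmission and flow control on top of this.

```python
# A toy model of segmentation and in-order reassembly.
MSS = 8  # maximum segment size in bytes (tiny, for illustration)

def segment(data: bytes):
    """Split data into (sequence_number, payload) segments."""
    return [(seq, data[seq:seq + MSS]) for seq in range(0, len(data), MSS)]

def reassemble(segments):
    """Restore the original stream by sorting on sequence numbers."""
    return b"".join(payload for _, payload in sorted(segments))

message = b"process-to-process delivery over the transport layer"
segments = segment(message)
segments.reverse()                     # simulate out-of-order arrival
assert reassemble(segments) == message
```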

Layer 3: Network Layer

  • The network layer is responsible for the delivery of individual packets from the source host to the destination host.
  • It enables communication between different networks.
  • This layer uses logical addressing (it adds the IP address in the header).
  • It decides the path along which the information must flow (see the routing sketch after this list).
  • The main functions are routing and forwarding of data, logical addressing, error detection, sequencing of data and flow control.
  • Data is in the form of packets.
  • Protocols used are RIP, OSPF, BGP, EIGRP, AppleTalk.
  • Devices: Router, Firewall, L3 switch.
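
That path decision is essentially a longest-prefix match against a routing table. The sketch below illustrates it with Python's ipaddress module; the table entries and interface names are made up.

```python
# A sketch of the network layer's routing decision: pick the most
# specific (longest-prefix) route whose network contains the destination.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "wan0",  # default route
}

def next_hop(destination: str) -> str:
    ip = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))   # eth1 (more specific than 10.0.0.0/8)
print(next_hop("8.8.8.8"))    # wan0 (falls through to the default route)
```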

Layer 2: Data Link Layer

  • This layer is responsible for moving frames from one node to another.
  • It is responsible for communication within the local area network.
  • Data is shared in the form of frames.
  • The responsibilities of this layer are framing, physical addressing, flow control, error control and access control.
  • Physical addressing – also called MAC addressing or Layer 2 addressing.
  • It is divided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).
  • A MAC address is a 48-bit address, usually written in hexadecimal (see the framing sketch after this list).
  • Protocols used are ARP, RARP, HDLC, PPP.
  • Devices: NIC, Switch, Bridge.
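
As a small illustration of framing and 48-bit MAC addressing, the sketch below packs two made-up MAC addresses and an EtherType into a 14-byte Ethernet-style header and unpacks it again.

```python
# A sketch of layer-2 framing: destination MAC + source MAC + EtherType.
# Both MAC values below are made up for illustration.
import struct

def mac_to_bytes(mac: str) -> bytes:
    return bytes(int(part, 16) for part in mac.split(":"))

def build_header(dst_mac: str, src_mac: str, ethertype: int) -> bytes:
    # 6-byte destination MAC + 6-byte source MAC + 2-byte EtherType
    return struct.pack("!6s6sH", mac_to_bytes(dst_mac),
                       mac_to_bytes(src_mac), ethertype)

header = build_header("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", 0x0800)
assert len(header) == 14               # 0x0800 = IPv4 payload

dst, src, etype = struct.unpack("!6s6sH", header)
print(dst.hex(":"), src.hex(":"), hex(etype))
```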

Layer 1: Physical layer

  • The physical layer is responsible for the movement of individual bits from one hop (node) to the next.
  • It coordinates the functions required to carry a bit stream over a physical medium.
  • Main functions are converting data bits into signals, establishing and terminating connections, and effectively sharing the medium among multiple users (a toy encoding sketch follows this list).
  • Devices: NIC, HUB, Repeaters and Modem.
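
Bit-to-signal conversion can be pictured with a toy line code. The sketch below uses one common Manchester-encoding convention (1 as high-then-low, 0 as low-then-high); real transceivers do this in hardware, so this is purely illustrative.

```python
# A toy line code: Manchester encoding of a bit stream.
def manchester_encode(bits):
    signal = []
    for bit in bits:
        signal += [1, 0] if bit else [0, 1]   # each bit becomes two levels
    return signal

def manchester_decode(signal):
    return [1 if pair == (1, 0) else 0
            for pair in zip(signal[::2], signal[1::2])]

bits = [1, 0, 1, 1, 0]
line = manchester_encode(bits)
assert manchester_decode(line) == bits
print(line)  # [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
```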

Remembering the OSI model's 7 layers – 8 mnemonic tricks:

From Application to Physical:

All People Seem To Need Data Processing
All Pros Search Top Notch Donut Places
A Penguin Said That Nobody Drinks Pepsi
A Priest Saw Two Nuns Doing Pushups

From Physical to Application:

Please Do Not Throw Sausage Pizza Away
Pew! Dead Ninja Turtles Smell Particularly Awful
People Don’t Need To See Paula Abdul
Pete Doesn’t Need To Sell Pickles Anymore

Story of the Internet – The Network of Networks

A journey into the story of the Internet

Every one of us uses the Internet... but do you know how it originated and how it became what it is today?

Here you go... let's take a peek into bygone times.

Around the 1960s, the theories underlying the internet started taking shape. In July 1961, Leonard Kleinrock at MIT published the first paper on packet-switching theory. In August 1962, J.C.R. Licklider of MIT wrote a series of memos describing his "Galactic Network" concept (the first noted description of interactions through a network) – a concept similar to today's internet.

In October 1962, Licklider became the first head of the computer research program at DARPA, and he convinced his successors there – Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts – of the importance of this networking concept.

In 1964, Leonard Kleinrock published a book on packet-switching theory. Kleinrock convinced Roberts of the theoretical feasibility of communication using packets rather than circuits, a major step towards the development of computer networking.

In 1965, Roberts connected the TX-2 computer in Massachusetts to the Q-32 in California over a low-speed dial-up telephone line, building the first (however small) wide-area computer network – but using circuit switching, which proved inadequate for the job. Roberts finally agreed with Leonard Kleinrock on packet switching.

Roberts joined DARPA in 1966. In 1967, he put forward the plan for the ARPANET and presented a paper at a conference, where he learned about parallel research going on at other organisations. In August 1968, DARPA structured the ARPANET; its key component was the IMP (Interface Message Processor).

In 1969, UCLA (Kleinrock's laboratory, where he had developed packet-switching theory) was selected as the first node of the ARPANET, and the first host computer was connected there. One month later, SRI (the Stanford Research Institute) became the second node, owing to the project on the Augmentation of Human Intellect carried out at SRI.

The first host-to-host message was sent from UCLA to SRI.

Two more nodes were added by the end of 1969, and the creation of the network was under way!

In 1970, the initial ARPANET host-to-host protocol was released by S. Crocker; it was called the Network Control Protocol (NCP). The first international public demonstration of the ARPANET was held at the ICCC (International Conference on Computer Communication) in 1972 – the same year the concept of email was introduced.

The ARPANET then grew into an internet, based on the idea that there would be multiple independent networks connected to one another, forming an internetworking architecture. For these different networks to communicate, an open architecture was required. But NCP had no ability to address networks built on different concepts; it carried the limitation that all networks had to be built in a similar manner, on the assumption that the ARPANET itself would not change.

This was a major drawback: NCP had no end-to-end error control, and any deviation from the traditional ARPANET would not work.

In later stages, two organisations – the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT) – defined similar networking models supporting the open-architecture concept; put together, these became the OSI reference model. This was a turning point in the story of the internet: with it as a reference, different networks could be created independently and still communicate with each other. Internetworking was finally possible.

Meanwhile, Bob Kahn at DARPA developed a model similar to OSI, called the TCP/IP model. The new protocol was a full communication protocol, whereas NCP had acted more like a device driver.

Initially, the internet was used only by defence and a handful of leading organisations. A major motivation for both the ARPANET and the Internet was resource sharing. However, while file transfer and remote login (Telnet) were very important applications, electronic mail probably had the most significant impact of the innovations from that era.

When desktop computers first appeared, some thought that TCP was too complex to run on a personal computer. David Clark and his research group at MIT showed that a simple implementation of TCP was possible, demonstrating it first for the Xerox Alto and then for the IBM PC. That implementation was fully interoperable with other TCPs and showed that workstations, as well as large time-sharing systems, could be part of the Internet.

The development of local area networks, PCs and workstations in the 1980s allowed the Internet to flourish to a far greater extent. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet. The proliferation of networks brought a number of new concepts and changes to the underlying technology. First, it resulted in the classification of networks into Class A, Class B and Class C, in order to accommodate the range of network sizes.
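
Under that classful scheme, an address's class was determined by its leading bits, which is easy to sketch from the first octet (the ranges below are the standard classful boundaries):

```python
# Classful IPv4 addressing: the class follows from the first octet.
def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"    # 0xxxxxxx: a few huge networks
    if first < 192:
        return "B"    # 10xxxxxx: medium-sized networks
    if first < 224:
        return "C"    # 110xxxxx: many small networks
    return "D/E"      # multicast and experimental ranges

print(address_class("10.0.0.1"))     # A
print(address_class("172.16.0.1"))   # B
print(address_class("192.168.1.1"))  # C
```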

Later, to make the network easier for people to use, hosts were assigned names, so that it was not necessary to remember numeric addresses. Originally there were only a limited number of hosts, so it was easy to maintain a single table of all the hosts with their associated names and addresses. But as the network grew into a large number of independently managed networks, a single table of hosts no longer made sense, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI.
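
Today that lookup is one call away in most languages; for instance, in Python (the host name here is just a placeholder):

```python
# Name-to-address lookup via DNS, so nobody has to remember numeric
# addresses or maintain a single hosts table.
import socket

address = socket.gethostbyname("example.com")
print("example.com ->", address)  # prints an IPv4 address
```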

The increasing size of the network also challenged the capacity of the routers. To overcome this, the flat model of routing was replaced with a hierarchical one: an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. Thus, by 1986, the Internet was well established as a technology supporting a community of researchers and developers, and it was beginning to be used by other communities for daily computer network communications. Hence started the cyber era.

Today, well over half of the people in the world use the internet.

As the network grows larger, more research is carried out, and new protocols and technologies evolve – along with the threats – thereby increasing the need for network security.