Tuesday, 8 December 2009

The Operation of Switching and Signalling within Core Networks

The purpose of signalling is to let the caller hear call-progress tones, such as the dial tone and the ringing or busy signals, when they dial a number; the caller might also hear a recorded message telling them that the number they have dialled is not in service or has been changed. Signals in public networks are also used for billing, for monitoring, for links to advanced features and for carriers to exchange traffic with each other. Signalling is used to process every switched call on the public switched telephone network and the public cellular network.


Touch-tone phones use eight voice-frequency tones for signalling: pressing a button causes one low-frequency tone and one high-frequency tone to be transmitted at the same time.





Pressing the number seven on the keypad results in 1209 Hz and 852 Hz being sent at the same time.
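As a rough sketch in Python of how the tone pairs line up (this assumes the full four-column keypad, with the rarely-seen A-D keys included, which is where the eight tones come from):

```python
# DTMF keypad: each key is identified by one low-frequency (row) tone
# and one high-frequency (column) tone, transmitted simultaneously.
LOW_TONES = [697, 770, 852, 941]          # row tones, Hz
HIGH_TONES = [1209, 1336, 1477, 1633]     # column tones, Hz

KEYPAD = ["123A", "456B", "789C", "*0#D"]

def dtmf_pair(key):
    """Return the (low, high) tone pair in Hz for a keypad key."""
    for row, keys in enumerate(KEYPAD):
        col = keys.find(key)
        if col != -1:
            return LOW_TONES[row], HIGH_TONES[col]
    raise ValueError(f"not a DTMF key: {key!r}")
```

So `dtmf_pair("7")` gives the 852 Hz and 1209 Hz pair described above.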







The above shows in-channel and common-channel signalling.




The latest standard for common-channel signalling operates at 64 kbit/s.
There are four phases of carrying traffic in a telephone network. The first is the idle phase, when the subscriber's terminal (the phone or computer) is idle. Then there is the connection set-up phase, when a telephone call or a data connection between two or more subscribers is being set up. Next is the transfer phase, when the call is in progress or data is being exchanged between the subscribers. And finally there is the release phase, when the call or the data connection is being released.

There are two types of switching: circuit switching and packet switching. In circuit switching the path is decided before the data transmission begins. The system decides which route to follow based on a resource-optimising algorithm, and transmission follows that path. For the whole length of the communication session between the two communicating parties the route is dedicated and exclusive, and it is released only when the session ends. In packet switching the packets of data are sent towards their destination independently of each other. Each packet has to find its own path to the destination: there is no predetermined path, and the decision as to which node to hop to next is taken only when a node is reached. Each packet finds its way using the information it carries, such as the source and destination IP addresses.


Each packet from a datagram is sent independently, so some of the packets may arrive out of order. This means the destination node must reorder them; packets may also be lost, so the destination node or exit node must detect any lost packets.
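A minimal Python sketch of what the destination node has to do, reordering by sequence number and detecting gaps (the `total` count here is a hypothetical piece of framing; real protocols carry this information differently):

```python
def reassemble(packets, total):
    """Reorder datagram packets and report any losses.

    `packets` is a list of (sequence_number, payload) tuples, possibly
    out of order; `total` is how many packets the sender emitted.
    """
    received = dict(packets)                     # seq -> payload
    lost = [seq for seq in range(total) if seq not in received]
    data = b"".join(received[seq] for seq in sorted(received))
    return data, lost
```

For example, receiving packets 2, 0 and 1 out of a set of four yields the in-order data plus a report that packet 3 was lost.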




A virtual circuit has a pre-planned route determined before the packets are sent. There is no dedicated line as in circuit switching, as others can use the same circuit, so the packets may be queued. Each of the packets carries a virtual circuit identifier (VCI).

Advantages of a datagram are that call setup time is avoided, datagrams can be routed away from any congestion, delivery can be more robust because if a node fails the datagrams will find another route, and the approach is suitable when only a few packets need to be transmitted.

Advantages of a virtual circuit are that it is suitable when packets are to be transmitted between two computers for a period of time; services such as error control can be provided, meaning that if a packet arrives at a node with an error it can be requested again; and virtual circuits are faster because no routing decision is required at each node.

Next I will include some questions and their answers that I have completed about Voice over IP (VoIP) and Voice over ATM (VoATM).

Voice over IP

1. State what VoIP is.

2. Discuss Voice Quality with VoIP.

3. List 4 different applications of VoIP.

4. Describe one of those applications via diagram or text.

Voice over ATM

5. What is ATM?

6. Discuss the ATM cell.

7. Discuss VoATM.

8. What are the advantages and disadvantages of ATM?

1. VoIP enables data networks such as the internet, local area networks and wide area networks to be used for voice communication, and it reduces the cost of communication.

2. Voice quality is very important in VoIP, and the quality must be as good as the PSTN. There are three factors that affect voice quality in VoIP. The first is transmission delay, which is the time it takes a packet to travel from source to destination. The second is jitter, which is the variation in arrival time between packets. The third is packet loss: if a packet is dropped there is a gap in the conversation, e.g. you won't be able to make out the full word or sentence you are being told, and packet loss with VoIP can cause large parts of a conversation to be lost.
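The three quality factors can be measured from packet timestamps. A simplified Python sketch (jitter here is a plain mean of delay differences between consecutive packets, not the smoothed estimator real VoIP stacks use):

```python
def voip_stats(send_times, recv_times):
    """Delay, jitter and loss from matched send/receive timestamps (ms).

    A dropped packet is marked with None in recv_times. Jitter is taken
    as the mean absolute difference between consecutive packet delays.
    """
    delays = [r - s for s, r in zip(send_times, recv_times) if r is not None]
    loss = sum(1 for r in recv_times if r is None) / len(recv_times)
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return delays, jitter, loss
```

Four packets sent 20 ms apart, with one dropped and slightly uneven arrivals, would show a delay of roughly 50 ms, a few ms of jitter, and 25% loss.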

3. Four applications of VoIP are computer to computer, computer to multiple sites, voice over the internet via an ISP, and voice over an intranet or LAN.

4. In the computer-to-multiple-sites application a traditional analogue telephone is attached to an IP-telephony-enabled PBX, and this requires an expensive adapter.

5. ATM, or asynchronous transfer mode, is a connection-oriented, cell-based technique for the transfer of information; it can be used for the transfer of audio, video and data. ATM is an extremely fast, high-bandwidth broadband switching technology.

6. ATM uses a uniform 53-byte cell: 5 bytes of addressing information and 48 bytes of payload. The benefit of the small cell size is reduced latency in transmitting through the network nodes. The disadvantage is that the small cell size means increased overhead.
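The header-versus-payload split makes the overhead easy to check with a few lines of Python:

```python
CELL_SIZE = 53                      # bytes per ATM cell
HEADER = 5                          # addressing/header bytes
PAYLOAD = CELL_SIZE - HEADER        # 48 bytes of user data per cell

def cells_needed(nbytes):
    """Number of 53-byte cells required to carry nbytes of user data."""
    return -(-nbytes // PAYLOAD)    # ceiling division

# Roughly 9.4% of every cell is header, before any padding in the
# final, partially filled cell is counted.
overhead = HEADER / CELL_SIZE
```

So 100 bytes of voice data needs three cells, and the last cell travels mostly empty, which is the increased overhead the answer mentions.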

7. Class A provides for voice over ATM; class A includes constant bit rate (CBR) services for connection-oriented transfer with end-to-end synchronisation. Protocol AAL 1 supports service class A, and some examples include PCM-coded voice, circuit-emulated connections having a bit rate of n x 64 kbit/s or n x 2 Mbit/s, and video signals coded for a constant bit rate.

8. The advantages of ATM are that it provides hardware switching, allows dynamic bandwidth for bursty traffic, provides QoS for multimedia and voice, scales in speed and network size, is used for both LANs and WANs, has good network management capabilities and is used by 80-85% of internet backbone networks. But there are some disadvantages: it has a small cell size and therefore high overhead, has a high service cost, requires new equipment and requires new technical expertise.

Structure of the PSTN hierarchy

The PSTN hierarchy is made up of main distribution frames, wide area tandems, digital junction switching units, remote concentrator units and digital local exchanges. The subscriber lines go into the main distribution frame (MDF) and from there to the digital local exchange (DLE). The signal is then sent on to the wide area tandem (WAT)/digital junction switching unit (DJSU) or to the digital main switching unit (DMSU), and from there to the international gateway.

Digital local exchanges, remote concentrator units, digital main switching units, digital derived services switching centres and gateways are all major elements of the PSTN and are described below.

A digital local exchange connects subscribers' telecommunications terminals to the network via subscriber lines. The exchange may be operated by one network operator while being used by further network operators; each subscriber line and subscriber is associated with one of those operators, and subscribers reach the exchange through their terminals to make a telecommunications link via their operator.

The remote concentrator unit consolidates subscriber lines. This equipment allows a smaller amount of loop plant to serve a greater number of end users. All calls are switched by the central office switch to which the remote concentrator is connected; the voice or data path always extends to the host switch, even for calls between subscribers served by the same remote concentrator.

The digital local switching unit (DLSU) connects to the concentrator and routes calls to different DLSUs or DMSUs depending on the destination of the call. The main part of the DLSU is the digital switch, which consists of time switches and a space switch. Incoming signals on the 30 channels from the concentrator units are connected to time switches; the purpose of these is to take any incoming time slot and connect it to an outgoing time slot, and so perform a switching and routing function. Each DMSU is connected to all the other DMSUs, which involves a much smaller number of links than would be required to link together all the DLEs. A DMSU used to route international calls is known as an international gateway. The DMSUs are the original long-distance exchanges.

The DDSN intelligent network allows a range of new features to be offered as Advanced Linkline to Linkline service providers. These include: time-and-day routing, call allocator, call queuing, call barring, alternative destination on busy, call prompter, courtesy response and command routing.

A default gateway is a router on a computer network that serves as an access point to another network. In homes, the gateway is usually the ISP-provided device that connects the user to the internet. In enterprises, however, the gateway is the computer that routes traffic from a workstation to the outside network; in such a situation the gateway node often acts as a proxy server and a firewall. The gateway is also associated with both a router, which uses headers and forwarding tables to determine where packets are sent, and a switch, which provides the actual path for the packet in and out of the gateway.

What is TCP/IP?

The TCP/IP model also uses the modular approach, but it contains only 4 layers while the OSI model uses 7. The layers of the TCP/IP model have more diverse tasks than those of the OSI model, because a single TCP/IP layer can correspond to several OSI layers. The layers of TCP/IP are: the NETWORK ACCESS LAYER, which specifies the form in which data must be sent whatever type of network is being used; the INTERNET LAYER, which is responsible for supplying the data packet, or datagram; the TRANSPORT LAYER, which provides the routing data along with the mechanisms that make it possible to know the status of the transmission; and finally the APPLICATION LAYER, which incorporates standard network applications such as telnet, SMTP, FTP etc.
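To illustrate the layering, here is a minimal loopback echo in Python: everything in the code lives at the application layer, while TCP (transport) and IP (internet) are supplied by the operating system's stack underneath the socket API.

```python
import socket
import threading

def echo_server(sock):
    """Application layer: echo whatever arrives back to the sender.
    TCP (transport) and IP (internet) handle delivery underneath."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Loopback demo: the OS network stack provides the lower layers.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)               # the same bytes come back via TCP/IP
client.close()
server.close()
```

Note how the application never sees packets, headers or routes; that is the point of the layered design.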


What is the OSI model?

The OSI model (Open Systems Interconnection) was established by ISO in order to implement a communications standard between computers on a network (the rules that manage the communications between computers on a network). The aim of having the system in layers is to separate the problem into different parts depending on their level of abstraction; each layer of the OSI model communicates with the adjacent layers above and below it. The OSI model has 7 layers while the TCP/IP model has only 4. These layers are: the PHYSICAL LAYER, which defines the way data is physically converted into signals on the communication media, such as electric pulses or light modulation; the DATA LINK LAYER, which defines the interface with the network interface card and the sharing of the transmission media; the NETWORK LAYER, which makes it possible to manage addressing and routing of data (their path through the network); the TRANSPORT LAYER, which is in charge of data transport, its division into packets and the management of potential transmission errors; the SESSION LAYER, which defines the opening and closing of communication sessions between networked machines; the PRESENTATION LAYER, which defines the format of the data handled by the application layer independently of the system; and finally the APPLICATION LAYER, which provides the interface with applications and is the level closest to the users, managed directly by the software.

What are LAN topologies?

Three common local area network topologies are star, ring and linear bus. The star network is one of the most common topologies. The simplest star network consists of a central switch, hub or computer that acts as a conduit to transmit messages; every other computer in the network is connected to that central device, and the transmission lines between them form a graph with the topology of a star. A ring topology is one in which each node of the network is connected to two other nodes, with the first and last nodes connected to each other, forming a ring. Any data transmitted travels from one node to the next until its destination is reached, and data usually flows in one direction only. A linear bus topology is one in which all of the nodes in the network are connected to a common transmission medium with two end points (this is the bus, also known as the backbone). Any data transferred between nodes is transmitted over this common medium and can be received by all of the nodes in the network at the same time.
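The topologies differ in who is wired to whom. A small Python sketch of the neighbour sets for star and ring (the node numbering is arbitrary; node 0 plays the hub in the star):

```python
def ring_neighbors(n):
    """Ring: each node connects to exactly two others, closing the loop."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def star_neighbors(n):
    """Star: node 0 is the central hub; every other node links only to it."""
    nbrs = {i: {0} for i in range(1, n)}
    nbrs[0] = set(range(1, n))
    return nbrs
```

In a bus topology there is no per-node wiring to model at all: every node taps the one shared backbone, which is why every station can hear every transmission.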

What are LANs and WANs and what components are needed to make one?

What are LANs and WANs?

Local area networks (LANs) are computer networks that cover a small area such as a home, school, office or small group of buildings. Local area networks have a high transfer rate, a smaller geographic range and less need for leased communication lines. Wide area networks (WANs) are computer networks that cover a much larger area than a LAN, such as metropolitan, regional or national boundaries. The internet is the largest and best-known example of a WAN.

What is needed to make a Local area network or Wide area network?

File servers, workstations, network interface cards, hubs, switches, repeaters, bridges, routers and gateways may all be needed for LANs and WANs, although in LANs routers would not normally be used.

A file server stands at the heart of most networks. It is a very fast computer with a large amount of RAM and storage space and a fast network interface card. The network operating system software resides on this computer, along with any software and data that need to be shared. The server controls the communication of information between the nodes on a network; for example, it might be asked to send a word-processor programme to one workstation, receive a database file from another workstation and store e-mail messages, all at the same time. This requires a computer that can store a lot of information and share it very quickly.

Workstations are the user computers that are connected to a network. A typical workstation is a computer configured with a network interface card, networking software and the proper cables. Workstations do not necessarily need floppy disk drives because files can be saved on the network's file server. Almost any computer can serve as a workstation in a network.

Network interface cards (NICs) provide the physical connection between the network and the computer workstation. Most network interface cards are internal, with the card fitted into an expansion slot inside the PC. Laptops can be bought with a NIC built in or with network cards that slot into a PCMCIA slot. The network interface card is a major factor in determining the speed and performance of a network, so it is a good idea to use the fastest network card available for the type of workstation you have. A NIC now comes with speeds from 10 Mbps to 1000 Mbps (1 Gbps). The most common NICs are Ethernet cards and token ring cards; Ethernet is the most popular method, although wireless access is now common.

Hubs are devices that connect twisted-pair or fibre-optic Ethernet devices together. Hubs work at the physical layer of a network. Hubs may be active or passive: an active hub regenerates signals and passes them along (it is also called a multiport repeater), while a passive hub is simply a central connection point with no amplification or regeneration of the data being transmitted. Hybrid hubs maximise network efficiency by interconnecting different types of cables and topologies. Hubs are no longer used today; switches are used instead.

A switch determines the destination of each message and sends it only to the destination port, providing full bandwidth to each station on a network. Switches can also handle several conversations on a network at one time. Switches used to be more expensive than hubs, but you can't buy hubs any more because switches provide better performance and are the device of choice for many technicians.

Since signals lose their strength as they pass along a cable, it can be necessary to boost the signal with a device called a repeater. A repeater electrically amplifies the signal it receives and rebroadcasts it. Repeaters can be separate devices or they can be incorporated into a switch or hub. They are used when the total length of your network cable exceeds the standards set for the type of cable being used.

A bridge is a device that takes a large LAN and segments it into two smaller, more efficient networks. A bridge monitors the information traffic on both sides of the network so it can pass packets of information to the correct location. Most bridges can “listen” to the network and figure out the address of each computer on both sides of the bridge automatically. The bridge can look at each message and, if necessary, broadcast it on both sides of the network. The bridge manages the traffic to keep optimum performance on each side of the network; it works like a traffic policeman at a busy crossroads, keeping information flowing on both sides of the network without letting unnecessary traffic through. Bridges can be used to connect different types of cabling or physical topologies, but they must be used between networks with the same protocol.

A router connects different types of networks together and is typically used for wide area networks (WANs). Routers use IP addresses and select the best path to route a message, based on the destination's address and where it is located. A router can direct traffic to prevent collisions and is smart enough to know when traffic needs to be directed along back roads and shortcuts. While bridges know the MAC addresses of all the computers on each side of a network, routers know the IP addresses of the computers, bridges and any other routers on the network. Routers can even listen to all the traffic on a network, determine which parts are the busiest and redirect traffic away from the busiest paths until they clear. If you have even a single computer on a local area network that you want connected to the internet, you will need a router; in this case the router acts as a translator between the information on your computer or LAN and the internet, and also determines the best route for the data to be sent over. A router can direct signal traffic efficiently, route messages between any two protocols, route messages between linear bus, star and star-wired topologies, and route messages across fibre-optic, coaxial and twisted-pair cabling.
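The path-selection step the paragraph describes boils down to longest-prefix matching against a routing table. A toy Python version using the standard ipaddress module, with a made-up table (prefixes and next-hop names are illustrative, not from any real network):

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop.
ROUTES = {
    "10.0.0.0/8":  "backbone",
    "10.1.0.0/16": "branch-office",
    "0.0.0.0/0":   "default-gateway",
}

def route(dest):
    """Pick the most specific (longest) matching prefix, as routers do."""
    addr = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(p) for p in ROUTES
               if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[str(best)]
```

An address inside 10.1.0.0/16 goes to the branch office even though it also matches the wider 10.0.0.0/8 route, because the more specific prefix wins.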

A gateway in a network is a device that translates between two dissimilar protocols. For example, a gateway can link and translate between local area networks with different protocols.

The Architecture of Microsoft Windows XP and Linux





There are four basic types of user-mode processes in XP. The first is system support processes (not service processes) such as the logon process, Winlogon, and the session manager. The second is service processes started by the service control manager, e.g. processes hosting Win32 services such as Task Scheduler. The third is user applications, such as DOS, Windows 3.1 and Win32 applications. And finally there are environment subsystems that emulate other operating systems; Windows XP itself only supports the Win32 environment. Applications pass operating system service requests through dynamic link libraries. A DLL is a set of functions or data that interfaces between user applications and the operating system.



A conventional operating system usually segregates virtual memory into kernel space and user space. Kernel space is reserved for running the kernel, kernel extensions and some device drivers; in most operating systems the kernel memory is never swapped out to disk. User space is the memory area where all user-mode applications work, and this memory can be swapped out when necessary. Shown below is a simplified diagram of the Windows XP architecture.


The kernel components of Windows XP are the executive, the kernel, device drivers, the hardware abstraction layer and the windowing and graphics system. The executive provides basic operating system services such as memory management and security. The kernel provides low-level operating system functions such as thread scheduling. Device drivers translate input/output requests into hardware input/output requests. The hardware abstraction layer insulates the kernel and drivers from the underlying hardware. The windowing and graphics system implements the user interface, such as the windowing interface or graphical user interface (GUI).



The architecture of Linux is given in the simplified diagram below. The system call interface connects user space with the kernel, and then there are five main subsystems. These are the process scheduler, which controls access to the processor; memory management, which permits multiple processes to share memory and uses virtual memory to run portions of processes; the virtual file system, which presents a common interface to all devices; the network interface; and finally interprocess communication.







Objectives and Functions of an Operating System

OBJECTIVES AND FUNCTIONS OF AN OPERATING SYSTEM: The operating system is the most basic program in the computer. All computers have an operating system that, among other things, is used for starting the computer and running other programs (application programs). The operating system performs important tasks like receiving input from the keyboard and mouse, sending information to the VDU, keeping track of files and directories on the disk, and controlling the various units such as disks, printers etc. An operating system also offers a user interface, giving the user the ability to control the computer. Some examples of operating systems are Windows 95/98, Windows 2000, Windows XP, Mac OS, UNIX and Linux.

PROCESSES: A process is an instance of a computer program being sequentially executed by a computer system that has the ability to run several programs at once. A computer program itself is just a passive collection of instructions, while a process is the actual execution of those instructions. Numerous processes may be associated with the same program; e.g. opening up several windows of the same program often means more than one process is being executed. Processes are formally defined by the operating system running them, so they may differ in detail from one operating system to another. Each process or task is a program in some stage of execution. Each process that runs in an operating system is assigned a process control block that holds information about the process, such as a unique process ID (a number used to identify the process), the saved state of the process, the process priority and where it is located in memory. The process priority is used to determine how often the process receives processor time. The operating system may run all processes with the same priority, or it can run some processes more often than others. Any process that has been waiting a long time for execution by the processor may have its priority increased so it is more likely to be executed later.

MEMORY MANAGEMENT: Current computer architectures arrange the computer's memory in a hierarchical manner, starting from the fastest registers, then the CPU cache, RAM and disk storage. An operating system's memory manager coordinates the use of these types of memory by tracking which is available, which is to be allocated or deallocated and how to move data between them. This activity is usually referred to as virtual memory, and it increases the amount of memory available for each process by making disk storage seem like main memory. There is a speed penalty in using disks or other slower storage as memory: if running processes need much more RAM than is available, the system may start thrashing. This can happen either because one process requires a large amount of RAM or because two or more processes compete for more memory than is there; this then leads to constant transfer of each process's data to slower storage.
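When RAM runs short, the memory manager has to choose which pages to push out to disk. A common policy is least-recently-used (LRU); this Python sketch counts the page faults a reference string would cause under it (a simplified model for illustration, not how any particular OS implements paging):

```python
from collections import OrderedDict

def lru_faults(frames, refs):
    """Count page faults for a page-reference string under LRU replacement.

    `frames` is how many physical frames RAM provides; when a page not
    in memory is referenced, the least recently used page is evicted.
    """
    memory = OrderedDict()
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # mark as recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict least recently used
            memory[page] = True
    return faults
```

With too few frames for the working set, nearly every reference faults, which is exactly the thrashing behaviour described above.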

SCHEDULING AND RESOURCE MANAGEMENT: Scheduling is a key concept in multitasking, multiprocessing and real-time operating system design. It refers to the way processes are assigned priorities in a priority queue, an assignment carried out by software known as a scheduler. The scheduler is concerned mainly with CPU utilisation (keeping the CPU as busy as possible), throughput (the number of processes that complete their execution per time unit), turnaround (the amount of time to execute a particular process), waiting time (the amount of time a process has been waiting in the ready queue) and finally response time (the amount of time from when a request was submitted until the first response is produced).
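The priority queue at the core of the scheduler can be sketched with Python's heapq (a toy model of the ready queue only; real schedulers also handle time slices, priority aging and I/O blocking):

```python
import heapq

def schedule(processes):
    """Run order for (priority, pid) pairs; lower number = higher priority.

    Models the scheduler repeatedly picking the highest-priority
    waiting process from the ready queue.
    """
    heap = list(processes)
    heapq.heapify(heap)                  # O(n) build of the priority queue
    order = []
    while heap:
        _, pid = heapq.heappop(heap)     # O(log n) pick of the next process
        order.append(pid)
    return order
```

The aging idea mentioned earlier, where a long-waiting process gets its priority boosted, would correspond to lowering a waiting entry's priority number over time.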

SECURITY: Many operating systems include some level of security. The security is based on two ideas: firstly, the OS provides access to a number of resources, directly or indirectly, such as files on a local disk, privileged system calls, personal information about users and the services offered by the programs running on the system. Secondly, the operating system is capable of distinguishing between requesters who are allowed to access a resource and those who are not. While some systems may only distinguish between privileged and non-privileged, most systems have a form of requester identity such as a user name. Requests divide into two categories: internal security and external security. Internal security concerns an already running program; on some systems a program, once running, has no limitations, but commonly the program has an identity which it keeps while running and which is checked on all of its requests for resources. External security concerns a new request from outside the computer, such as a login at a connected console or some kind of network connection. To establish identity there must be a process of authentication, often a user name and password that must be entered; other methods of authentication, such as magnetic cards or biometric data, may be used instead. In some cases, connections from a network may need no authentication at all, and resources may be accessed without any authentication.
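The password step of that authentication process can be sketched as follows. A salted, slow hash is stored instead of the password itself; this is an illustrative sketch using Python's standard hashlib, and a real system would prefer a dedicated password-hashing scheme such as scrypt or argon2.

```python
import hashlib
import hmac
import os

def make_record(password):
    """Create the stored credential: a random salt plus a slow salted
    hash. The plaintext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, record):
    """Re-derive the hash from the attempt and compare in constant time."""
    salt, digest = record
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)
```

The constant-time comparison avoids leaking, through timing, how much of the hash matched.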

 