Tuesday, August 20, 2013

Smartwatch

A smartwatch is a computerized wristwatch with functionality beyond timekeeping, often comparable to a PDA. While early models could perform basic tasks such as calculations, translations, or playing games, modern smartwatches are effectively wearable computers. They may include a camera, accelerometer, thermometer, altimeter, barometer, compass, chronograph, calculator, cell phone, touch screen, GPS navigation with maps, speaker, scheduler, an SD card that a computer recognizes as a mass-storage device, and a rechargeable battery. A smartwatch may communicate with a wireless headset, heads-up display, insulin pump, microphone, modem and other devices.




How Smartwatches work



Most of the popular smartwatches work using a technology called Smart Personal Object Technology (SPOT), developed by Microsoft. SPOT allows for enhanced miniaturization, low power consumption and low cost, letting accessories such as watches become more purposeful through the use of software.



SPOT uses FM broadcasting to deliver web-based data to smart objects. Microsoft's DirectBand network sends data to smartwatches and other SPOT objects. DirectBand consists of two components: a special chipset in the watch that houses the radio receiver, and a nationwide wide-area network (WAN) built on FM-subcarrier technology.



For example, a Fossil Abacus Smart Watch consists of the following components:









• Piezo (piezoelectric ceramic crystal) - This material expands and contracts when electric current is applied. The Piezo crystal in the watch acts as a tiny speaker driver, allowing the Smart Watch to generate sound.

• PCB (printed circuit board) - A PCB is usually a multi-layered board made of fiberglass. The surface and sublayers use tiny copper lines to direct electricity to various components on the PCB. The PCB in the Smart Watch houses the CPU, memory and radio chip. 

• CPU - The Smart Watch uses an ARM7TDMI as its central processor. 

• Memory - The Smart Watch uses 512 KB of ROM and 384 KB of RAM. 

• DirectBand radio receiver chip - This chip was made specifically for the Smart Watch and is how the MSN Direct service connects to the watch. 

• Battery - The Smart Watch battery is rechargeable. The Fossil Abacus comes with a recharging stand, but other models use an adapter that plugs into the wall. 

• Inductive charging coil - This is used to charge the battery. The coil is attached to the contact surface on the back of the watch. When this surface comes in contact with the charging plate on the watch stand, the Smart Watch battery is charged through induction.



References:

http://en.wikipedia.org/wiki/Smartwatch

http://electronics.howstuffworks.com/gadgets/clocks-watches/smart-watch2.htm



Dropbox

Dropbox is a file hosting service that offers cloud storage and file synchronization, along with client software. It lets users create a special folder on each of their computers, which Dropbox then synchronizes so that the folder appears with the same contents on every synced computer, regardless of the computer used to view it. Files placed in this folder are also accessible through a website and mobile applications.
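Dropbox's synchronization protocol isn't published in full, but the core idea of deciding which files need syncing by comparing content hashes can be sketched in a few lines (the SHA-256 choice and the in-memory dicts below are illustrative assumptions, not Dropbox's actual design):

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Content hash used to decide whether a file needs re-syncing."""
    return hashlib.sha256(data).hexdigest()

def changed_files(local: dict, remote: dict) -> list:
    """Names of files whose local content differs from the synced copy.

    `local` and `remote` map filename -> content bytes; a real client
    would store hashes rather than re-reading every file on each pass.
    """
    return sorted(
        name for name, data in local.items()
        if name not in remote or file_digest(data) != file_digest(remote[name])
    )
```

Only the files this check flags would be uploaded; everything else is already in sync.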




Dropbox provides client software for Microsoft Windows, Mac OS X, Linux, Android, iOS, BlackBerry OS and web browsers.





Business Model



Dropbox uses a freemium business model, where users are offered a free account with a set storage size and paid subscriptions for accounts with more capacity.



Files uploaded via the website are limited to 300 MB per file. To prevent free users from gaining extra space by creating multiple linked free accounts, Dropbox includes the content of shared folders when totaling the amount of space used on an account.
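That space-accounting rule can be made concrete with a small sketch (the function names and megabyte units are hypothetical, chosen only for illustration):

```python
def account_usage_mb(own_files_mb, shared_folders_mb):
    """Space counted against the quota: a user's own files plus the FULL
    size of every shared folder they are a member of."""
    return sum(own_files_mb) + sum(shared_folders_mb)

def over_quota(own_files_mb, shared_folders_mb, quota_mb):
    return account_usage_mb(own_files_mb, shared_folders_mb) > quota_mb
```

Because shared content counts against every member, splitting files across linked free accounts buys no extra space.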





Technologies Used



Both the Dropbox server and desktop client software are primarily written in Python. The desktop client uses GUI toolkits such as wxWidgets and Cocoa.



Dropbox uses Amazon's S3 storage system to store the files. It uses SSL for transfers during synchronization and encrypts stored data with AES-256.
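The encrypt-before-store idea can be sketched as follows. Note the repeating-key XOR below is only a toy stand-in for AES-256 (Python's standard library has no AES, and real deployments use a vetted crypto library), but it shows the shape: only ciphertext ever reaches the storage backend.

```python
from itertools import cycle

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR as a stand-in for AES-256. NOT real cryptography;
    it only illustrates that the backend sees ciphertext, never plaintext."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def store_file(storage: dict, name: str, data: bytes, key: bytes) -> None:
    storage[name] = toy_encrypt(data, key)   # only ciphertext is persisted

def fetch_file(storage: dict, name: str, key: bytes) -> bytes:
    return toy_encrypt(storage[name], key)   # XOR is its own inverse
```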



Beyond synchronization and sharing, the Dropbox client also supports personal storage, revision history (so files deleted from the Dropbox folder may be recovered from any of the synced computers), and multi-user version control (enabling several users to edit and re-post files without overwriting each other's versions).



References:

http://en.wikipedia.org/wiki/Dropbox_(service)

MPLS VPN

MPLS VPN is a virtual private network (VPN) for securely connecting two or more locations over the public Internet or a private MPLS network. It harnesses multiprotocol label switching (MPLS) to create VPNs, giving network engineers the flexibility to transport and route several types of network traffic over an MPLS backbone.




MPLS VPN networks may be secured through encryption on a customer's router; such a network is known as a CPE-based MPLS VPN. Alternatively, they may be secured through the MPLS VPN provider's network router; such networks are known as network-based MPLS VPNs.



MPLS VPN services are typically provisioned over Internet T1 lines or a private MPLS circuit; higher bandwidth speeds are offered as well (MPLS Ethernet, NxT1, DS3), with options for managed MPLS VPN services.



Uses

• MPLS IP VPN services are used by businesses to provide reliable, secure VPN service for applications including credit card processing, file sharing, data backup, VoIP over MPLS, and remote access.

• MPLS VPNs can also be configured to carry voice, Internet, and IP VPN services together on an integrated MPLS T1 line.



Types of MPLS VPNs

• Point-to-point (pseudowire) – Point-to-point MPLS VPNs employ VLLs (virtual leased lines) to provide Layer 2 point-to-point connectivity between two sites. Ethernet, TDM, and ATM frames can be encapsulated within these VLLs. Point-to-point MPLS VPNs might be used, for example, to encapsulate TDM T1 circuits attached to RTUs, or to forward non-routed DNP3 traffic across the backbone network to a SCADA master controller.



• Layer 2 VPN (VPLS) – Layer 2 MPLS VPNs, or VPLS (virtual private LAN service), offer a “switch in the cloud” style of service. VPLS provides the ability to span VLANs between sites. L2 VPNs are typically used to route voice, video and AMI traffic between substation and data center locations.



• Layer 3 VPN (VPRN) – Layer 3 VPNs, or VPRNs (virtual private routed networks), use Layer 3 VRFs (virtual routing and forwarding instances) to maintain a separate routing table for each “customer” using the service. The customer peers with the service provider's router and the two exchange routes, which are placed into a routing table specific to that customer. An L3 VPN could be used to route traffic between corporate or data center locations.
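The per-customer routing-table idea behind VRFs can be sketched in Python (the VRF names and next hops are made up for illustration; real VRFs live in router software, not application code):

```python
import ipaddress

# One routing table per customer VRF. Overlapping prefixes are fine
# because a lookup never crosses VRF boundaries.
vrfs = {
    "customer_a": {"10.0.0.0/8": "PE1", "0.0.0.0/0": "inet_gw_a"},
    "customer_b": {"10.0.0.0/8": "PE7"},   # same prefix, different VRF
}

def lookup(vrf: str, dest: str) -> str:
    """Longest-prefix match confined to a single customer's table."""
    addr = ipaddress.ip_address(dest)
    best = None
    for prefix, next_hop in vrfs[vrf].items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    if best is None:
        raise LookupError(f"no route to {dest} in VRF {vrf}")
    return best[1]
```

The point of the sketch: both customers can use 10.0.0.0/8 without conflict, because each lookup consults only that customer's table.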



References:

http://en.wikipedia.org/wiki/MPLS_VPN

http://www.itquotes.com/what-is-mpls-vpn.html

Content Management System

A Content Management System (CMS) is a computer program that allows publishing, editing and modifying content, as well as maintaining it, from a single back-end interface. Such systems also provide procedures to manage workflow in a collaborative environment.




CMSs allow a user to add or update website content without knowledge of a programming language. Text formatting and image insertion usually work much like a word processor. CMSs also offer the comfort of a user interface with intuitive controls and an online assistant.



Content management systems can be implemented for various types of web presentations, such as:

• Portal solutions

• Commercial and personal websites

• Intranet / Extranet

• Integrated Flash websites

Characteristics & features of CMS

• Allows immediate modification of website content

• Centralized data editing, publishing and modification

• Intuitive operation

• Supports implementation of any web page design

• Includes advanced configurations for SEO - search engine optimization

• Automatically generates valid XHTML websites according to W3C standards

• Can be accessed using any web browser (Internet Explorer, Firefox, Opera, Safari, ...)

• Ability to interconnect itself with other software systems

• High measure of security - multi-level data and access protection

Typical content management systems



Web content management systems

- They are bundled or stand-alone applications to create, manage, store and deploy content as Web pages. Web CMSs usually allow client control over HTML-based content, files, documents, and web hosting plans based on the system depth and the areas they serve.



Component content management systems

- They specialize in the creation of documents from component parts. These components can be reused (rather than copied and pasted) within another document or across multiple documents to ensure that content is consistent across the entire documentation set.



Enterprise content management systems

- They organize documents, contacts and records related to the processes of a commercial organization. They also structure the enterprise's information content and file formats, manage locations, streamline access by eliminating bottlenecks and optimize security and integrity.



References:

http://www.creativesites.eu/content-management-system-cms-joomla/

https://en.wikipedia.org/wiki/Content_management_system



Service Delivery Platform

In telecommunications, a service delivery platform (SDP) is usually a set of components that provide service delivery architecture (such as service creation, session control and protocols, orchestration and execution, as well as abstractions for media control, presence/location, integration, and other low-level communications capabilities) for a type of service.




The business objective of implementing the SDP is to enable rapid development and deployment of new converged multimedia services, from basic phone services to complex audio/video conferencing solutions.



SDP provides a complete ecosystem for the rapid deployment, provisioning, execution, management and billing of value added services. SDPs available today tend to be optimized for the delivery of a service in a given technological or network domain (e.g. web, IMS, IPTV, Mobile TV, etc.). SDPs are applicable to both consumer and business applications.





SDP Architecture



Examples:

• A mobile phone sends an SMS to a short code, e.g. “577577 Katrina”, to download an image.

• The message travels through the GSM network and reaches the SMSC (Short Message Service Centre).

• The SMSC is configured with the endpoint URLs of the target applications, so it forwards the request to the respective application, which provides the requested image.

• The application pushes the delivery to the mobile device (e.g. sends a WAP-push link).

• When the push link is received by the mobile device, clicking on it automatically downloads the content through the WAP gateway.
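The SMSC's forwarding step in this flow amounts to a lookup from short code to application endpoint, which can be sketched like this (the short codes and URLs are hypothetical):

```python
# SMSC routing table: short code -> application endpoint.
ENDPOINTS = {
    "577577": "http://content-app.example/image",
    "464646": "http://ringtone-app.example/tone",
}

def route_sms(message: str):
    """Split 'shortcode keyword' and pick the application endpoint,
    mirroring the SMSC forwarding step in the flow above."""
    short_code, _, keyword = message.partition(" ")
    if short_code not in ENDPOINTS:
        raise KeyError(f"no application registered for {short_code}")
    return ENDPOINTS[short_code], keyword
```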

SDP also enables users to see incoming phone calls (wireline or wireless), IM buddies (PC) or the locations of friends (GPS-enabled devices) on their television screen. It likewise enables airline customers to receive a text message from an automated system about a flight cancellation, and then opt to use a voice or interactive self-service interface to reschedule.



References:

http://en.wikipedia.org/wiki/Service_delivery_platform

http://www.techmahindra.com/network_services/telecom_service_delivery_platform.aspx

http://searchcloudprovider.techtarget.com/tip/Service-delivery-platforms-enable-service-differentiators



Single sign-on (SSO)

Single sign-on (SSO) is a property of access control across multiple related but independent software systems: a user logs in once and gains access to all of the systems without being prompted to log in again for each of them. Conversely, single sign-off is the property whereby a single sign-out action terminates access to multiple software systems.




SSO uses centralized authentication servers that all other applications and systems utilize for authentication purposes, and combines this with techniques to ensure that users do not have to actively enter their credentials more than once.



Benefits

Benefits of using single sign-on include:

• Reducing password fatigue from different user name and password combinations

• Reducing time spent re-entering passwords for the same identity

• Reducing IT costs due to lower number of IT help desk calls about passwords



Common Configurations

Below are common configuration methods used for single sign-on authentication:



Kerberos Based

• Initial sign-on prompts the user for credentials, and gets a Kerberos ticket-granting ticket (TGT).

• Additional software applications requiring authentication, such as email clients, wikis, and revision control systems, use the ticket-granting ticket to acquire service tickets, proving the user's identity to the mail server, wiki server, and so on, without prompting the user to re-enter credentials. 



Windows environment – The Windows login fetches the TGT. Active Directory-aware applications fetch service tickets, so the user is not prompted to re-authenticate.



Unix/Linux environment – Logging in via Kerberos PAM modules fetches the TGT. Kerberized client applications such as Evolution, Firefox, and SVN use service tickets, so the user is not prompted to re-authenticate.
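In both environments the pattern is the same: one credential prompt yields a TGT, and service tickets are then minted silently. A toy simulation of that shape (this is not the Kerberos protocol itself, just the ticket-reuse idea; there is no encryption or network exchange here):

```python
class ToyKDC:
    """Caricature of a KDC: log in once for a TGT, then mint service
    tickets from it with no further password prompts."""

    def __init__(self, passwords):
        self._passwords = passwords
        self._valid_tgts = set()

    def login(self, user, password):          # AS exchange: the one prompt
        if self._passwords.get(user) != password:
            raise PermissionError("bad credentials")
        tgt = f"TGT:{user}"
        self._valid_tgts.add(tgt)
        return tgt

    def service_ticket(self, tgt, service):   # TGS exchange: no password
        if tgt not in self._valid_tgts:
            raise PermissionError("invalid TGT")
        return f"{tgt}->{service}"

kdc = ToyKDC({"alice": "s3cret"})
tgt = kdc.login("alice", "s3cret")            # only credential entry
mail_ticket = kdc.service_ticket(tgt, "mail")
wiki_ticket = kdc.service_ticket(tgt, "wiki")
```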



Other common configuration methods used for SSO authentication are:

• Smart card Based

• OTP token

• Integrated Windows Authentication

• Security Assertion Markup Language (SAML)



Shared authentication schemes which are not single sign-on

Single sign-on requires that users literally sign in once to establish their credentials. Systems which require the user to log in multiple times to the same identity are inherently not single sign-on. For example, an environment where users are prompted to log into their desktop, then log into their email using the same credentials, is not single sign-on.



References:

http://en.wikipedia.org/wiki/Single_sign-on

http://www.opengroup.org/security/sso/sso_intro.htm



Also, the following links describe a security study (March 2012) of some commercially deployed single sign-on web services, their flaws and resolutions:

http://research.microsoft.com/apps/pubs/default.aspx?id=160659

http://openid.net/2012/03/14/vulnerability-report-data-confusion/

Brain–Computer Interface

A brain–computer interface (BCI), often called a mind-machine interface (MMI), or a direct neural interface or a brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often used to assist, augment or repair human cognitive or sensory-motor functions.




The field of BCI research and development is focused primarily on neuroprosthetic applications that aim to restore damaged hearing, sight and movement. Thanks to the brain's ability to develop and adapt (cortical plasticity), signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensory channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.



However, the difference between BCIs and neuroprosthetics is that the latter typically connect the nervous system to a device, whereas BCIs usually connect the brain with a computer system.



Invasive BCIs – Invasive BCI research has targeted repairing damaged sight and restoring movement in individuals with paralysis, or providing devices to assist them. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. They produce the highest-quality signals of any BCI devices, but are prone to scar-tissue build-up, causing the signal to weaken or even disappear as the body reacts to the foreign object in the brain.



Partially invasive BCIs – Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better-resolution signals than non-invasive BCIs, where the bone tissue of the cranium deflects and deforms signals, and carry a lower risk of scar-tissue formation than fully invasive BCIs.

Electrocorticography (ECoG) is a partially invasive technique that measures the electrical activity of the brain from beneath the skull, in a similar way to non-invasive electroencephalography (EEG), but with the electrodes embedded in a thin plastic pad placed above the cortex, beneath the dura mater. ECoG offers higher spatial resolution, a better signal-to-noise ratio, a wider frequency range, and lower training requirements than scalp-recorded EEG.



Non-invasive BCIs – Signals recorded in a non-invasive way have been used to power muscle implants and restore partial movement in experimental volunteers. Although they are easy to wear, non-invasive implants produce poor signal resolution because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. Although the waves can still be detected it is more difficult to determine the area of the brain that created them or the actions of individual neurons.

Electroencephalography (EEG), Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are the popular non-invasive interfaces.

Neurogaming is an emerging field that uses non-invasive BCIs to improve gameplay, letting users interact with a console without a traditional joystick.



References:

https://en.wikipedia.org/wiki/Brain-computer_interface

Firewall

A firewall is a software or hardware-based network security system that controls the incoming and outgoing network traffic by analyzing the data packets and determining whether they should be allowed through or not, based on a rule set. Generally, firewalls are configured to protect against unauthenticated interactive logins from the outside world. This helps prevent hackers from logging into machines on a network.




Firewalls also provide logging and auditing functions; they often give the administrator summaries of the type and volume of traffic that has passed through.







Network Layer Firewalls

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through unless they match the established rule set. Network layer firewalls generally make their decisions based on the source address, destination address and ports in individual IP packets.



“Stateful” network layer firewalls maintain context about active sessions, and use that "state information" to speed packet processing. If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing.



“Stateless” network layer firewalls require less memory, and can be faster for simple filters that require less time to filter than to look up a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached.
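The difference between the stateful fast path and the full rule scan for new connections can be sketched like this (the ruleset and address prefixes are invented for illustration):

```python
# Rules apply only to NEW connections; established ones take the fast path.
RULES = [
    ("10.0.", 80, "allow"),    # (source prefix, destination port, action)
    ("10.0.", 22, "allow"),
]

state_table = set()            # established (src, dst, dst_port) tuples

def filter_packet(src: str, dst: str, dst_port: int) -> str:
    conn = (src, dst, dst_port)
    if conn in state_table:                 # stateful fast path
        return "allow"
    for prefix, port, action in RULES:      # full rule scan for new flows
        if src.startswith(prefix) and dst_port == port:
            if action == "allow":
                state_table.add(conn)       # remember the session
            return action
    return "deny"                           # default deny
```

A stateless filter is this same function with the `state_table` lines removed: less memory, but every packet pays for the rule scan, and the firewall cannot reason about the stage a conversation has reached.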



Application Layer Firewalls

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application.



Application firewalls function by determining whether a process should accept any given connection. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model. Also, application firewalls further filter connections by examining the process ID of data packets against a ruleset for the local process involved in the data transmission.



Proxies

A proxy server (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, while blocking other packets. A proxy server is a gateway from one network to another for a specific network application, in the sense that it functions as a proxy on behalf of the network user.



Computers establish a connection to the proxy, which serves as an intermediary and initiates a new network connection on behalf of the request. This prevents direct connections between systems on either side of the firewall and makes it harder for an attacker to discover where the network is, because they never receive packets directly from their target system.
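That mediation can be shown with a function-level sketch — no real sockets, just the observation that the origin only ever sees the proxy's address (the names `origin_server`, `proxy`, and the addresses are hypothetical):

```python
def origin_server(request: str, client_addr: str) -> str:
    """The origin only ever learns the address of its direct peer."""
    return f"hello {client_addr}: {request.upper()}"

def proxy(request: str, client_addr: str, upstream) -> str:
    """Terminate the client's connection and open a fresh one upstream,
    so the origin sees the proxy's address, never the client's.
    (Policy checks to block disallowed requests would go here.)"""
    return upstream(request, client_addr="proxy.example")

reply = proxy("ping", client_addr="10.0.0.5", upstream=origin_server)
```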



References:

http://searchnetworking.techtarget.com/tutorial/Introduction-to-firewalls-Types-of-firewalls

http://en.wikipedia.org/wiki/Firewall_(computing)



3D Internet

3D Internet is a set of interconnected virtual worlds that users can visit to consume services, moving from one world to another. It uses many of the same basic technology components as the 2D Internet—a browser, search engine and servers, for example, but with the additional use of 3D computer graphics and, in many cases, avatars.




The transition from 2D to 3D Internet is in its early stages and is predicted to take off in 2018 with mass adoption. 3D will make the Web more social and it will introduce powerful new ways for people in education, business and medicine to interact with the content and with each other. It combines the immediacy of television, the versatile content of the Web, and the relationship-building strengths of social networking sites.



Examples of 3D Internet

• The experience of interacting with another character in a 3D environment, as opposed to a screen name or a flat image, adds new appeal to the act of socializing on the Internet.

• Companies that specialize in interior design or furniture showrooms will be able to offer customized models of rooms through users' home PCs.

• Travel services may enable potential tourists to virtually visit any place.



The 3D Internet Meets the Internet of Things

As for the range of services that can benefit from 3D, it includes everything from virtual meetings (a next-generation telepresence), trainings, simulations and educational sessions to chat, support group meetings in the healthcare field, and even shopping for clothes, furniture and cars.



Where the 3D Internet gets really interesting is in its intersection with the Internet of Things and augmented reality (AR), which opens up the possibility of controlling the real world from the virtual world. Potential use cases vary from a virtual visit to any person to controlling elements like temperature, lighting, media and door locks in buildings. Virtual versions of large buildings or industrial plants, for example, can make it much easier and faster than current technologies to pinpoint the source of an alert and respond to it.



References:

http://www.seminarsonly.com/Labels/3d-Internet-Wikipedia.php

http://newsroom.cisco.com/feature/1129025/When-the-Internet-Goes-3D



Nano-SIM

A SIM (subscriber identity module) card is an application on a smartcard that stores data for GSM/CDMA cellular telephone subscribers.


The SIM contains a cryptographic chip, but critically it also stores a copy of the unique subscriber key, the other copy being held in the network operator's authentication server. During GSM authentication, the server encrypts a random number using that key, and the phone must decrypt it to prove the subscriber is genuine. The chip, which holds the key and performs the cryptography, is glued to the back of the SIM's metal contacts.
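The challenge-response shape of that exchange can be sketched in Python. HMAC-SHA256 below is only a stand-in for the operator's real A3 algorithm (such as COMP128), chosen so the example runs with the standard library; the key point is that the subscriber key itself never travels over the air:

```python
import hashlib
import hmac

def signed_response(ki: bytes, rand: bytes) -> bytes:
    """SIM-side computation over the network's random challenge.
    HMAC-SHA256 stands in for the operator's A3 algorithm; real GSM
    SRES values are 32 bits, hence the 4-byte truncation."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

def authenticate(sim_ki: bytes, auc_ki: bytes, rand: bytes) -> bool:
    """The network compares the SIM's answer against its own computation
    using the copy of the key held in the authentication server."""
    return hmac.compare_digest(signed_response(sim_ki, rand),
                               signed_response(auc_ki, rand))
```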

Additional data stored on a SIM includes contact lists, stored text messages and the like.



Evolution of SIM

When the GSM network first appeared, SIM cards were the size of credit cards. The subsequent miniaturization of phones led to the standardization of smaller SIMs: the plug-in SIM, and later the Mini-UICC, also known as the third form factor (3FF) or Micro-SIM.



Nano-SIM

The latest form factor is the fourth form factor (4FF), or Nano-SIM, introduced in early 2012. Measuring 12.3 × 8.8 × 0.67 mm, the Nano-SIM is about 30 percent smaller than the Micro-SIM. It offers device manufacturers the crucial advantage of freeing up space for other components, such as additional memory or larger batteries, and allows them to produce thinner, more appealing devices.



Nano SIM Structure

All SIM designs have eight connections, although on both the traditional SIM and the Micro-SIM (3FF) the electrical ground connection (C5, see diagram) is generally extended down the middle. That is harder on the Nano-SIM because it has two connections in the middle (C4 and C8).



Contacts on the nano SIM.



Of the eight contacts, three are optional, so in most cases six pads are visible. C1 is the supply voltage; C2 is the reset signal, so the SIM knows when to start working; and C3 is the clock signal, since the timing clock was left out of the SIM spec to keep costs down. C5 is ground; C6 is the NFC SWP contact; and C7 is the serial communications connection over which the SIM actually does the work one expects of it.



Apple's iPhone 5 uses the Nano-SIM.



References:

http://en.wikipedia.org/wiki/Subscriber_identity_module

http://www.theregister.co.uk/2012/09/18/nano_sim/

http://www.gi-de.com/gd_media/media/en/documents/brochures/mobile_security_2/cste_1/Nano-SIM.pdf



Vehicle Tracking System

Vehicle tracking systems involve installing an electronic device in a vehicle or fleet of vehicles, combined with purpose-designed computer software at one or more operational bases, to enable the owner or a third party to track the vehicle's location, collect data from the field, and deliver it to the base of operations.




Modern vehicle tracking systems commonly use GPS or GLONASS technology for locating a vehicle. Vehicle information can be viewed on electronic maps integrated into the application at the operational base via the Internet or specialized software.



Types of Vehicle Tracking systems

• Passive tracking – "Passive" systems have devices that store GPS location, speed, heading and sometimes a trigger event such as key on/off, door open/closed. Once the vehicle returns to a predetermined point, the device is removed and the data is downloaded to a computer for evaluation.

• Active tracking – "Active" systems also have devices that collect the same information as passive systems but usually transmit the data in near-real-time via cellular or satellite networks to the operations base for evaluation.



Typical Architecture of a Vehicle Tracking System

Major constituents of the GPS based tracking are

1. GPS tracking device: The device fits into the vehicle and captures GPS location information along with other data such as fuel level, engine temperature and altitude. The capabilities of these devices ultimately determine the capability of the whole tracking system. In active systems, the device can also transmit the captured information to the operational base.

2. GPS tracking server: The tracking server has three responsibilities: receiving data from the GPS tracking unit, securely storing it and serving this information on demand to the user.

3. User interface: The UI determines how one will be able to access information, view vehicle data, and elicit important details from it.
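The tracking server's receive/store/serve responsibilities can be sketched minimally (the class name, vehicle IDs and fix format below are hypothetical):

```python
from collections import defaultdict

class TrackingServer:
    """Receive fixes from devices, store them, serve them on demand."""

    def __init__(self):
        self._history = defaultdict(list)   # vehicle_id -> list of fixes

    def receive(self, vehicle_id, lat, lon, speed_kmh):
        self._history[vehicle_id].append((lat, lon, speed_kmh))

    def latest(self, vehicle_id):
        fixes = self._history[vehicle_id]
        return fixes[-1] if fixes else None

server = TrackingServer()
server.receive("truck-1", 51.50, -0.12, 42)
server.receive("truck-1", 51.51, -0.10, 38)
```

A user interface would then query `latest` (or the full history) to draw vehicles on a map.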



Common usage

Vehicle tracking systems are commonly used by fleet operators for fleet management functions such as fleet tracking, routing, dispatch, on-board information and security. Along with commercial fleet operators, urban transit agencies use the technology for a number of purposes, including monitoring schedule adherence of buses in service, triggering changes of buses' destination sign displays at the end of the line and triggering pre-recorded announcements for passengers.

Other applications include monitoring driving behavior, such as a parent with a teen driver.

Vehicle tracking systems are also popular in consumer vehicles as a theft prevention and retrieval device.

Some vehicle tracking systems integrate several security systems, for example by sending an automatic alert to a phone or email if an alarm is triggered or the vehicle is moved without authorization, or when it leaves or enters a geofence.
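A geofence check reduces to a distance test against the fence's center. A rough sketch using the haversine formula for a circular fence (real products may also support polygonal fences):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_alert(lat, lon, fence):
    """Return an alert string if the fix falls outside the circular fence,
    where `fence` is (center_lat, center_lon, radius_m)."""
    center_lat, center_lon, radius_m = fence
    if distance_m(lat, lon, center_lat, center_lon) <= radius_m:
        return None
    return "vehicle left geofence"
```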



References:

http://en.wikipedia.org/wiki/Vehicle_tracking_system

Network Interface Controller

A network interface controller (NIC) (also known as a network interface card, network adapter, LAN adapter etc.) is a computer hardware component that connects a computer to a computer network, either by using cables or wirelessly. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and, for IEEE 802 networks and FDDI, provides a low-level addressing system through the use of MAC addresses.




Every network controller for an IEEE 802 network such as Ethernet, Wi-Fi or Token Ring, and every FDDI network controller, has a unique 48-bit serial number called a MAC address, which is stored in read-only memory. Every computer on an Ethernet network must have at least one controller.



Controller vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each controller at the time of manufacture.
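Since the IEEE-assigned block occupies the first three octets of the address, splitting a MAC address into its vendor part (the OUI) and the vendor-assigned serial is straightforward (a small illustrative helper, not a real vendor lookup):

```python
def split_mac(mac: str):
    """Split a MAC address into the IEEE-assigned OUI (first three
    octets, identifying the vendor) and the vendor-assigned serial."""
    octets = mac.lower().split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a MAC address: {mac}")
    return ":".join(octets[:3]), ":".join(octets[3:])
```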



Ethernet network controllers typically support 10 Mbit/s Ethernet, 100 Mbit/s Ethernet, and 1000 Mbit/s Ethernet varieties and are designated as 10/100/1000 respectively.



The role of the NIC is to:

• Prepare data from the computer for the network cable.

• Send the data to another computer.

• Control the flow of data between the computer and the cabling system.

• Receive incoming data from the cable and translate it into bytes that can be understood by the computer's central processing unit (CPU).



The NIC may use one or more of two techniques to indicate the availability of packets to transfer:

• Polling is where the CPU examines the status of the peripheral under program control.

• Interrupt-driven I/O is where the peripheral alerts the CPU that it is ready to transfer data.

It may use one or more of two techniques to transfer packet data:

• Programmed input/output is where the CPU moves the data to or from the designated peripheral to memory.

• Direct memory access is where an intelligent peripheral assumes control of the system bus to access memory directly. This removes load from the CPU but requires more logic on the card. In addition, a packet buffer on the NIC may not be required and latency can be reduced.
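The two notification techniques can be contrasted with a toy model (the class and callback names are invented; real drivers work at the level of status registers and IRQ lines):

```python
class ToyNIC:
    def __init__(self):
        self.rx_queue = []
        self.on_interrupt = None     # handler the driver registers

    def packet_arrives(self, pkt):
        self.rx_queue.append(pkt)
        if self.on_interrupt:        # interrupt-driven: NIC alerts the CPU
            self.on_interrupt(pkt)

def poll(nic):
    """Polling: the CPU repeatedly checks whether anything is waiting."""
    return list(nic.rx_queue)

handled = []
nic = ToyNIC()
nic.on_interrupt = handled.append    # driver's interrupt handler
nic.packet_arrives(b"frame-1")
```

With polling the CPU burns cycles checking an often-empty queue; with interrupts it does nothing until the device signals, which is why interrupt-driven I/O dominates in practice.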

References:

http://en.wikipedia.org/wiki/Network_interface_controller

http://pluto.ksi.edu/~cyh/cis370/ebook/ch02c.htm

Digital Watermark

A digital watermark is a kind of marker covertly embedded in a noise-tolerant signal such as audio or image data, typically to identify ownership of the copyright in that signal. Digital watermarking is the process of hiding such information in a carrier signal.




In digital watermarking, the signal may be audio, pictures, video, text or 3D models, and a signal may carry several different watermarks at the same time. A digital watermark does not change the size of the carrier signal. It is a passive protection tool: it merely marks the data, neither degrading it nor controlling access to it.



Applications

Digital watermarking may be used for a wide range of applications, such as:

• Copyright protection

• Source tracking (different recipients get differently watermarked content)

• Broadcast monitoring (television news often contains watermarked video from international agencies)



Classification

Robust imperceptible watermarks are generally proposed as a tool for the protection of digital content, but creating them has proven quite challenging.



A digital watermark is called robust with respect to transformations if the embedded information may be detected reliably from the marked signal, even if degraded by any number of transformations.



A digital watermark is called perceptible if its presence in the marked signal is noticeable.



Digital watermarking techniques may also be classified based on the length of the embedded message and embedding method.



Watermarking for relational databases

Digital watermarking for relational databases emerged as a candidate solution to provide copyright protection, tamper detection, traitor tracing and maintaining integrity of relational data. Many watermarking techniques have been proposed to address these purposes.



References:

http://en.wikipedia.org/wiki/Digital_watermarking

Electronic Nose

An electronic nose is an instrument that detects odors and/or flavors, developed to mimic human olfaction. The instrument consists of three modules that fulfill the following tasks: headspace sampling, odor/flavor sensing, and pattern recognition.




Electronic noses include three major parts: a sample delivery system, a detection system, and a computing system.



• The sample delivery system enables the generation of the headspace (volatile compounds) of a sample. The system then injects this headspace into the detection system of the electronic nose. The sample delivery system is essential to guarantee constant operating conditions.

• The detection or sensory system, which consists of a sensor array/set, is the "reactive" part of the instrument. When in contact with volatile compounds, the sensors react, which means they experience a change of electrical properties.



The more commonly used sensors for electronic noses include

o metal–oxide–semiconductor

o conducting polymers

o quartz crystal microbalance

o surface acoustic wave



• The computing system works to combine the responses of all of the sensors and helps identify the odor/flavor. Commonly used data interpretation systems for the analysis of responses from the detection system include artificial neural network (ANN), fuzzy logic, pattern recognition modules, etc.

As a first step, an electronic nose needs to be trained with qualified samples to build a reference database. The instrument can then recognize new samples by comparing the signal pattern generated by a volatile compound to those contained in its database.
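The "compare against the reference database" step can be sketched as a nearest-neighbor match; the odor names and sensor values here are invented for illustration, and real systems use the ANN or fuzzy-logic methods mentioned above:

```python
import math

# Trained reference signatures: one response vector per odor,
# one element per sensor in the array (values are made up).
reference = {
    "coffee":  [0.9, 0.2, 0.4],
    "vanilla": [0.1, 0.8, 0.3],
}

def classify(sample):
    """Label a new sensor-array response by nearest Euclidean distance."""
    return min(reference,
               key=lambda odor: math.dist(reference[odor], sample))

label = classify([0.85, 0.25, 0.35])   # close to the coffee signature
```

A sample whose pattern falls far from every stored signature would need an "unknown" threshold in practice; this sketch always returns the closest match.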



Electronic nose instruments are used by research and development laboratories, quality control laboratories, and process and production departments for various purposes. They also have possible future applications in the fields of health and security.



References:

http://en.wikipedia.org/wiki/Electronic_nose

Femtocell

In telecommunications, a femtocell is a small, low-power cellular base station, typically designed for use in a home or small business. It connects to the service provider’s network via broadband (such as DSL or cable) and typically supports 2 to 4 active mobile phones in a residential setting and 8 to 16 active mobile phones in enterprise settings. A femtocell allows service providers to extend service coverage indoors, especially where access would otherwise be limited or unavailable.




Operating Mode

Femtocells are sold by the mobile network operator (MNO) and are typically the size of a residential gateway or smaller.



In most cases, the end-user must declare which mobile phone numbers are allowed to connect to the femtocell, usually via a web interface provided by the MNO. When these mobile phones come under coverage of the femtocell, they switch over from the macrocell (outdoor) to the femtocell automatically. When the user leaves the femtocell coverage area (whether in a call or not), the phone hands over seamlessly to the macro network.
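The declared-number check amounts to a whitelist lookup; the numbers and helper name below are illustrative, not an operator API:

```python
# Numbers the end-user declared through the MNO's web interface (made up).
allowed = {"+15551230001", "+15551230002"}

def admit(msisdn, whitelist):
    """Return True if this phone number may camp on the femtocell."""
    return msisdn in whitelist

household_phone = admit("+15551230001", allowed)   # declared: admitted
passerby_phone = admit("+15559990000", allowed)    # undeclared: rejected
```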



Femtocells require specific hardware, so existing WiFi or DSL routers cannot be upgraded to a femtocell. Also, once installed in a specific location, most femtocells have protection mechanisms so that a location change will be reported to the MNO.



Femtocells are either under development or commercially available for cdma2000, GSM, TD-SCDMA, WiMAX and LTE.



Benefits for users

• “5 bar” coverage when there is no existing signal or poor coverage e.g. rural areas

• Higher mobile data capacity, which is important if the end-user makes use of mobile data on his mobile phone

• Depending on the pricing policy of the MNO, special tariffs at home can be applied for calls placed under femtocell coverage

• For enterprise users, having femtocells instead of DECT phones enables them to have a single phone, and thus a single contact list, etc.

References:

http://en.wikipedia.org/wiki/Femtocell

GIS file format

A GIS file format is a standard of encoding geographical information into a file. GIS files are created mainly by government mapping agencies (such as the USGS or National Geospatial-Intelligence Agency) or by GIS software developers.




The data and metadata in a GIS file often include:

• Elevation data, either in raster or vector (e.g., contour lines) form

• Shape layers, usually expressed as line drawings for streets, postal zone boundaries, etc.

• Coordinate system descriptions

• Details describing the precise shape of the Earth assumed by the coordinates



Raster

A raster data type is any type of digital image represented by a reducible and enlargeable grid. Anyone familiar with digital photography will recognize the pixel, the smallest individual grid unit of a raster image, usually not identifiable as a discrete shape until the image is magnified to a very large scale.



The raster data type reflects a digitized abstraction of reality, dealt with by grids populated with tones or objects, quantities, conjoined or open boundaries, and map relief schemas. Aerial photos are one commonly used form of raster data, with one primary purpose in mind: to display a detailed image on a map area, or to render its identifiable objects by digitization. Additional raster data sets used by a GIS will contain information regarding elevation (a digital elevation model) or the reflectance of a particular wavelength of light (Landsat or other electromagnetic spectrum indicators).



The raster data type consists of rows and columns of cells, with each cell storing a single value. Raster data can be images (raster images) with each pixel (or cell) containing a color value. Additional values recorded for each cell may be a discrete value, such as land use; a continuous value, such as temperature; or a null value if no data is available. While a raster cell stores a single value, it can be extended by using raster bands to represent RGB (red, green, blue) colors, colormaps (a mapping between a thematic code and an RGB value), or an extended attribute table with one row for each unique cell value. The resolution of the raster data set is its cell width in ground units.
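A minimal raster can be modeled directly as rows and columns of cells; the temperature values and 30 m resolution below are invented for illustration:

```python
# A 2x3 raster of a continuous value (temperature, degrees C), with None
# standing in for the null value where no data is available.
raster = [
    [21.5, 22.0, None],
    [20.0, 21.0, 22.5],
]
cell_width_m = 30        # resolution: cell width in ground units

def value_at(row, col):
    """Look up the single value stored in one cell."""
    return raster[row][col]

# Cells with null values are typically excluded from analysis.
valid = [v for r in raster for v in r if v is not None]
mean_temp = sum(valid) / len(valid)
```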



Vector

In a GIS, geographical features are often expressed as vectors, by considering those features as geometrical shapes.



Points

Zero-dimensional points are used for geographical features that can best be expressed by a single point reference, in other words, by simple location. Examples include wells, peaks, features of interest, and trailheads. Points convey the least information of these file types. Points can also be used to represent areas when displayed at a small scale; for example, cities on a map of the world might be represented by points rather than polygons. No measurements are possible with point features.



Lines or polylines

One-dimensional lines or polylines are used for linear features such as rivers, roads, railroads, trails, and topographic lines. As with point features, linear features displayed at a small scale are represented as lines rather than as polygons. Distance can be measured along line features.



Polygons

Two-dimensional polygons are used for geographical features that cover a particular area of the earth's surface. Such features may include lakes, park boundaries, buildings, city boundaries, or land uses. Polygons convey the most information of the file types. Perimeter and area can be measured for polygon features.
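The measurements available for each geometry type can be illustrated with planar coordinates (real GIS software works in projected or geodetic coordinate systems); the shoelace formula below computes polygon area from its vertices:

```python
import math

def perimeter(poly):
    """Sum of the side lengths of a closed polygon."""
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def area(poly):
    """Polygon area via the shoelace formula."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))
    return abs(s) / 2

# A 4x3 rectangle as a polygon feature (illustrative map units).
square = [(0, 0), (4, 0), (4, 3), (0, 3)]
```

For a polyline, only `perimeter` without the closing edge (i.e. distance) would apply, and for a point neither measurement is defined, matching the hierarchy described above.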





Each of these geometries is linked to a row in a database that describes its attributes. For example, a database that describes lakes may contain a lake's depth, water quality, and pollution level. This information can be used to make a map describing a particular attribute of the dataset; for example, lakes could be colored depending on level of pollution. Different geometries can also be compared: the GIS could be used to identify all wells (point geometry) that are within one kilometer of a lake (polygon geometry) that has a high level of pollution.



Non-spatial data

Additional non-spatial data can also be stored along with the spatial data represented by the coordinates of a vector geometry or the position of a raster cell. In vector data, the additional data contains attributes of the feature. For example, a forest inventory polygon may also have an identifier value and information about tree species. In raster data the cell value can store attribute information, but it can also be used as an identifier that can relate to records in another table.



Software is currently being developed to support spatial and non-spatial decision-making, with the solutions to spatial problems being integrated with solutions to non-spatial problems. The end result with these flexible spatial decision-making support systems (FSDSSs) is expected to be that non-experts will be able to use GIS, along with spatial criteria, and simply integrate their non-spatial criteria to view solutions to multi-criteria problems. This system is intended to assist decision-making.



References:

http://en.wikipedia.org/wiki/GIS_file_formats



PSTN

The public switched telephone network (PSTN) is the network consisting of telephone lines, fiber-optic cables, microwave transmission links, cellular networks, communications satellites, and undersea telephone cables, all interconnected by switching centers. A single global address space for telephone numbers, based on the E.163 and E.164 standards, allows any telephone in the world to communicate with any other.




Originally a network of fixed-line analog telephone systems, the PSTN is now almost entirely digital in its core and includes mobile as well as fixed telephones.



Technology in the PSTN



• Network topology

The telephone exchanges are arranged into hierarchies, so that if a call cannot be handled in a local cluster, it is passed to one higher up for onward routing, reducing the number of connecting trunks required between operators over long distances and keeping local traffic separate.

In modern networks the cost of transmission and equipment is lower and, although hierarchies still exist, they are much flatter, with perhaps only two layers.



• Digital channels

Most automated telephone exchanges now use digital switching rather than mechanical or analog switching. However, analog two-wire circuits are still used to connect the last mile from the exchange to the telephone in the home (local loop). To carry a typical phone call from a calling party to a called party, the analog audio signal is digitized at an 8 kHz sample rate with 8-bit resolution using a special type of nonlinear pulse code modulation known as G.711. The call is then transmitted from one end to another via telephone exchanges.



The call is carried over the PSTN using a 64 kbit/s channel called Digital Signal 0 (DS0). A Digital Signal 1 (DS1) circuit carries 24 DS0s on a T-carrier (T1) line, or 32 DS0s (30 for calls, 2 for framing and signaling) on an E-carrier (E1) line. In modern networks, the multiplexing function is moved as close to the end user as possible.
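The channel arithmetic above follows directly from G.711's sampling parameters:

```python
# A DS0 is the product of G.711's sample rate and resolution.
sample_rate_hz = 8000          # 8 kHz sampling
bits_per_sample = 8            # 8-bit PCM
ds0_bps = sample_rate_hz * bits_per_sample      # 64 000 bit/s

# T1: 24 DS0s plus 8 kbit/s of framing overhead.
t1_bps = 24 * ds0_bps + 8000                    # 1 544 000 bit/s

# E1: 32 timeslots of 64 kbit/s (30 for calls, 2 for framing/signaling).
e1_bps = 32 * ds0_bps                           # 2 048 000 bit/s
```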



The following list includes a few of the popular custom calling features commonly found in the PSTN today:



• Call waiting

• Call forwarding

• Three-way calling (enables conference calling)

• Display calling party's directory number

• Call blocking

• Calling line ID blocking

• Automatic callback

• Call return



References:

http://fengnet.com/book/voip/ch01lev1sec3.html

http://en.wikipedia.org/wiki/Public_switched_telephone_network

Second Screen

Second screen, sometimes also referred to as "companion device" (or "companion apps" when referring to software applications), is a term that refers to an additional electronic device (e.g. tablet, smartphone) that allows a television audience to interact with the content they are consuming, such as TV shows, movies, music, or video games. Extra data is displayed on a portable device synchronized with the content being viewed on television.




Several studies show a clear tendency of the consumer to use a device while watching television. They show high use of tablet or smartphone when watching television, and indicate a high percentage of comments or posts on social networks being about the content that's being watched.



Based on these studies, many companies both in content production and advertising have adapted their delivery content to the lifestyle of the consumer in order to get maximum attention and thus profits. Applications are becoming a natural extension of television programming, both live and on demand.



Applications

Many applications in the "second screen" are designed to give the consumer another way of interactivity. They also give the media companies another way to sell advertising content. Some examples:

• Broadcast of the Masters golf tournament with a companion iPhone application (rating information and publicity)

• TV programs that broadcast live tweets and comments.

• Synchronization of audiovisual content via web advertising.

• Applications that extend the content information.

• Shows that add on their websites, content devoted exclusively to the second screen.

• Applications that synchronize the content being viewed to the portable device.

• Video game console playing with extra data, such as a map or strategy data that synchronize with the content being viewed to the portable device.

• TV discovery application with recommendation, EPG (live content), personalization.



Sports Broadcasting

Sports broadcasters, to stem the flight of the TV audience away from the main screen (as the television is now called) to the second screen, are offering alternative and enhanced content alongside the main program. The idea is to present content related to the main program, such as unseen moments, alternative information, soundtracks, and characters. New technologies allow the viewer to see different camera angles while watching the game.



References:

http://en.wikipedia.org/wiki/Second_screen

http://www.secondscreen.com/

MANET

A mobile ad-hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected wirelessly. Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. Each device must forward traffic unrelated to its own use, and therefore acts as a router.




The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic. Such networks may operate by themselves or may be connected to the larger Internet.



Implementation

MANETs are a kind of wireless ad hoc network that usually has a routable networking environment on top of a link-layer ad hoc network.

The growth of laptops and 802.11/Wi-Fi wireless networking has made MANETs a popular research topic since the mid-1990s. Many academic papers evaluate protocols and their abilities, assuming varying degrees of mobility within a bounded space, usually with all nodes within a few hops of each other. Different protocols are then evaluated based on measures such as the packet drop rate, the overhead introduced by the routing protocol, end-to-end packet delays, and network throughput.



Types of MANET

• Vehicular Ad-hoc Networks (VANETs) are used for communication among vehicles and between vehicles and roadside equipment.

• Internet-based mobile ad-hoc networks (iMANETs) are ad-hoc networks that link mobile nodes and fixed Internet-gateway nodes. In such networks, normal ad-hoc routing algorithms don't apply directly.



References:

http://en.wikipedia.org/wiki/MANET

Assisted GPS

Assisted GPS, generally abbreviated as A-GPS or aGPS, is a system that can under certain conditions improve the startup performance, or time-to-first-fix (TTFF), of a GPS satellite-based positioning system. It is used extensively with GPS-capable cellular phones to make the location of a cell phone available to emergency call dispatchers.




"Standalone" or "autonomous" GPS operation uses radio signals from satellites alone. In very poor signal conditions, for example in a city, these signals may suffer multipath propagation where signals bounce off buildings, or are weakened by passing through atmospheric conditions, walls, or tree cover. When first turned on in these conditions, some standalone GPS navigation devices may not be able to fix a position due to the fragmentary signal, rendering them unable to function until a clearer signal can be received continuously for a long enough period of time.



An assisted GPS system can address these problems by using data available from a network to locate and use the satellites in poor signal conditions. For billing purposes, network providers often count this as a data access, which can cost money depending on the plan.



Basic Concepts

Standalone GPS provides a first position fix in approximately 30–40 seconds. A standalone GPS system needs the orbital information of the satellites to calculate the current position. The data rate of the satellite signal is only 50 bit/s, so downloading orbital information such as the ephemeris and almanac directly from the satellites typically takes a long time, and if the satellite signals are lost during the acquisition of this information, it is discarded and the standalone system has to start from scratch. In A-GPS, the network operator deploys an A-GPS server, which downloads the orbital information from the satellites and stores it in a database. An A-GPS-capable device can connect to such a server and download this information using mobile-network radio bearers such as GSM, CDMA, WCDMA or LTE, or even other wireless radio bearers such as Wi-Fi. Since the data rate of these bearers is high, downloading orbital information takes much less time.
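The speed-up is easy to quantify. The full GPS navigation message spans 25 frames of 1,500 bits each; the 1 Mbit/s network bearer rate below is an assumption for illustration:

```python
# Full navigation message: 25 frames x 1500 bits.
nav_message_bits = 25 * 1500       # 37 500 bits

satellite_rate_bps = 50            # GPS navigation message data rate
network_rate_bps = 1_000_000       # assumed mobile-network bearer rate

t_satellite_s = nav_message_bits / satellite_rate_bps   # 750 s = 12.5 min
t_network_s = nav_message_bits / network_rate_bps       # well under 1 s
```

This is why a cold start can take many minutes in standalone mode, while an A-GPS device obtains the same data almost instantly over the network.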



AGPS has two modes of operation:



Mobile Station Assisted (MSA)

In MSA mode A-GPS operation, the A-GPS capable device receives acquisition assistance, reference time and other optional assistance data from the A-GPS server. With the help of the above data, the A-GPS device receives signals from the visible satellites and sends the measurements to the A-GPS server. The A-GPS server calculates the position and sends it back to the A-GPS device.



Mobile Station Based (MSB)

In MSB mode A-GPS operation, the A-GPS device receives ephemeris, reference location, reference time and other optional assistance data from the A-GPS server. With the help of the above data, the A-GPS device receives signals from the visible satellites and calculates the position.



Many mobile phones combine A-GPS and other location services including Wi-Fi Positioning System and cell-site multilateration and sometimes a hybrid positioning system.



References:

http://en.wikipedia.org/wiki/Assisted_GPS

Wireless ad hoc network

A wireless ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on pre-existing infrastructure; instead, each node participates in routing by forwarding data to other nodes, so the determination of which nodes forward data is made dynamically based on the network connectivity. In addition to classic routing, ad hoc networks can use flooding to forward data.




An ad hoc network typically refers to any set of networks where all devices have equal status on a network and are free to associate with any other ad hoc network devices in link range. Very often, ad hoc network refers to a mode of operation of IEEE 802.11 wireless networks.



Application

The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes can't be relied on, and may improve the scalability of networks compared to wireless managed networks.



Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of dynamic and adaptive routing protocols enables ad hoc networks to be formed quickly.



Wireless ad hoc networks can be further classified by their application:

• mobile ad hoc networks (MANET)

• wireless mesh networks (WMN)

• wireless sensor networks (WSN)



Technical requirements

An ad hoc network is made up of multiple “nodes” connected by “links”.



Links are influenced by the node's resources (e.g. transmitter power, computing power and memory) and by behavioral properties (e.g. reliability), as well as by link properties (e.g. length-of-link and signal loss, interference and noise). Since links can be connected or disconnected at any time, a functioning network must be able to cope with this dynamic restructuring, preferably in a way that is timely, efficient, reliable, robust and scalable.



The network must allow any two nodes to communicate, by relaying the information via other nodes. A “path” is a series of links that connects two nodes. Various routing methods use one or two paths between any two nodes; flooding methods use all or most of the available paths.
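Flooding, the alternative to path-based routing mentioned above, can be sketched as a traversal in which each node forwards a packet once to every neighbor; the four-node topology is illustrative:

```python
from collections import deque

# Illustrative topology: each node's list of link-range neighbors.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def flood(source):
    """Return the set of nodes a flooded packet reaches from source."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in links[node]:
            if neighbor not in seen:   # forward only packets not yet seen
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

reached = flood("A")
```

The "not yet seen" check is what keeps flooding from looping forever in a network whose links form cycles.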



References:

http://en.wikipedia.org/wiki/Wireless_ad-hoc_network

Remote Radio Head

A remote radio head is an operator radio control panel that connects to a remote radio transceiver via an electrical or wireless interface. When used to describe aircraft cockpit radio systems, this control panel is often called the radio head.




Current and future generations of wireless cellular systems feature heavy use of remote radio heads (RRHs) in the base stations. Instead of hosting a bulky base station controller close to the top of antenna towers, new wireless networks connect the base station controller and remote radio heads through lossless optical fibers. The interface protocol that enables such a distributed architecture is called the Common Public Radio Interface (CPRI). With this new architecture, RRHs offload intermediate frequency (IF) and radio frequency (RF) processing from the base station. Furthermore, the base station and RF antennas can be physically separated by a considerable distance, providing much-needed system deployment flexibility.



Typical advanced processing algorithms on RRHs include digital up-conversion and digital down-conversion (DUC and DDC), crest factor reduction (CFR), and digital pre-distortion (DPD). DUC interpolates base band data to a much higher sample rate via a cascade of interpolation filters. It further mixes the complex data channels with IF carrier signals so that RF modulation can be simplified. CFR reduces the peak-to-average power ratio of the data so it does not enter the non-linear region of the RF power amplifier. DPD estimates the distortion caused by the non-linear effect of the power amplifier and pre-compensates the data.
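CFR's goal can be shown with the crudest possible method, hard clipping; real CFR blocks use filtered clipping or peak windowing to control spectral regrowth, and the sample values here are invented:

```python
def papr(samples):
    """Peak-to-average power ratio of a real-valued sample stream."""
    peak = max(abs(s) ** 2 for s in samples)
    avg = sum(abs(s) ** 2 for s in samples) / len(samples)
    return peak / avg

def clip(samples, threshold):
    """Hard-limit samples to +/- threshold, trimming the peaks."""
    return [max(-threshold, min(threshold, s)) for s in samples]

signal = [0.1, -0.2, 0.9, 0.15, -0.1, 0.2]   # one large peak at 0.9
clipped = clip(signal, 0.4)
```

After clipping, the PAPR drops, so the signal stays within the linear region of the RF power amplifier, which is exactly the purpose of the CFR stage.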



More importantly, many wireless standards demand re-configurability in both the base station and the RRH. For example, the 3GPP Long Term Evolution (LTE) and WiMAX systems both feature scalable bandwidth. The RRH should be able to adjust, at run time, the bandwidth selection, the number of channels, and the incoming data rate, among many other things.



RRH system model

Typically, a base station connects to an RRH via optical cables. In the downlink direction, baseband data is transported to the RRH via CPRI links. The data is then up-converted to IF sample rates, preprocessed by CFR or DPD to mitigate non-linear effects of broadband power amplifiers, and eventually sent for radio transmission. A typical system is shown in Figure 1.





Figure 1: Block diagram of a typical RRH System



References:

http://en.wikipedia.org/wiki/Remote_radio_head

http://www.eetimes.com/design/programmable-logic/4212925/Designing-remote-radio-heads--RRHs--on-high-performance-FPGAs#39935



Jini

Jini, also called Apache River, is a network architecture for the construction of distributed systems in the form of modular co-operating services.




Jini technology is a service-oriented architecture that defines a programming model which exploits and extends Java technology to enable the construction of secure, distributed systems consisting of federations of well-behaved network services and clients. Jini technology can be used to build adaptive network systems that are scalable, evolvable and flexible as typically required in dynamic computing environments. Jini offers a number of powerful capabilities such as service discovery and mobile code. Jini is similar to Java Remote Method Invocation but more advanced.



The term Jini refers to a set of specifications and an implementation; the latter is referred to as the Jini Starter Kit. Both the specifications and the Starter Kit have been released under the Apache 2.0 license and have been offered to the Apache Software Foundation's Incubator.



Jini provides facilities for dealing with some of the fallacies of distributed computing, problems of system evolution, resilience, security and the dynamic assembly of service components. Code mobility is a core concept of the platform and provides many benefits including non-protocol dependence.



One of the goals of Jini is to shift the emphasis of computing away from the traditional file-system–oriented approach to a more network-oriented approach. Thus resources can often be used across a network as if they were available locally. Jini allows for advanced searching for services through a process of discovery of published services (making Jini akin to the service-oriented architecture concept).



There are three main parts to a Jini scenario. These are the client, the server, and the lookup service.



The service is the resource which is to be made available in the distributed environment. This can include physical devices (such as printers or disk drives) and software services (for example a database query or messaging service). The client is the entity which uses the service. Jini provides a mechanism for locating services on the network that conform to a particular (Java) interface. Once a service is located, the client can download an implementation of that interface, which it then uses to communicate with the service.



The three major components that make up a running Jini system are

1. The Jini Client—Anything that would like to make use of the Jini service

2. The Service Locator—Anything that acts as a Locator/Trader/Broker between the service and the client, and is used to find services in a distributed Jini system

3. The Jini Service—Any entity that can be used by a client program or another service (for example, a printer, a DVR, or a software entity like an EJB service)
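These three roles can be sketched language-agnostically (Jini itself defines them in Java, with downloadable proxies and leases); the in-process registry and the "Printer" interface name below are illustrative only:

```python
# The lookup service: maps interface names to registered service proxies.
registry = {}

def register(interface, proxy):
    """A service publishes a proxy under a known interface name."""
    registry.setdefault(interface, []).append(proxy)

def lookup(interface):
    """A client discovers all proxies conforming to an interface."""
    return registry.get(interface, [])

# The service registers itself with the lookup service.
register("Printer", lambda doc: f"printed {doc}")

# The client locates the service by interface and invokes the proxy.
printer = lookup("Printer")[0]
result = printer("report.pdf")
```

In real Jini the client downloads the proxy's code over the network and the registration expires unless renewed (leasing), which is how the system copes with services that disappear.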



References:

http://en.wikipedia.org/wiki/Jini

Open Shortest Path First

Open Shortest Path First (OSPF) is a link-state routing protocol for Internet Protocol (IP) networks. It uses a link state routing algorithm and falls into the group of interior routing protocols, operating within a single autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008).




OSPF is perhaps the most widely used interior gateway protocol (IGP) in large enterprise networks. IS-IS, another link-state dynamic routing protocol, is more common in large service provider networks. The most widely used exterior gateway protocol is the Border Gateway Protocol (BGP), the principal routing protocol between autonomous systems on the Internet.



OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets solely within a single routing domain (autonomous system). It gathers link state information from available routers and constructs a topology map of the network. The topology determines the routing table presented to the Internet Layer which makes routing decisions based solely on the destination IP address found in IP packets. OSPF was designed to support variable-length subnet masking (VLSM) or Classless Inter-Domain Routing (CIDR) addressing models.



OSPF detects changes in the topology, such as link failures, very quickly and converges on a new loop-free routing structure within seconds. It computes the shortest path tree for each route using a method based on Dijkstra's algorithm, a shortest path first algorithm.
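The shortest-path-first computation each router performs over its link-state database is Dijkstra's algorithm; the four-router topology and link costs below are illustrative:

```python
import heapq

def dijkstra(graph, source):
    """Return the lowest total cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for neighbor, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Link-state map: router -> [(neighbor, interface cost), ...]
graph = {
    "R1": [("R2", 10), ("R3", 5)],
    "R2": [("R1", 10), ("R4", 1)],
    "R3": [("R1", 5), ("R4", 20)],
    "R4": [("R2", 1), ("R3", 20)],
}
paths = dijkstra(graph, "R1")
```

Note that R1 reaches R4 at cost 11 via R2, not cost 25 via the directly cheaper neighbor R3, which is why OSPF needs the full topology rather than just next-hop rumors.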



The link-state information is maintained on each router as a link-state database (LSDB) which is a tree-image of the entire network topology. Identical copies of the LSDB are periodically updated through flooding on all OSPF routers.



The OSPF routing policies to construct a route table are governed by link cost factors (external metrics) associated with each routing interface. Cost factors may be the distance of a router (round-trip time), network throughput of a link, or link availability and reliability, expressed as simple unitless numbers. This provides a dynamic process of traffic load balancing between routes of equal cost.



An OSPF network may be structured, or subdivided, into routing areas to simplify administration and optimize traffic and resource utilization. Areas are identified by 32-bit numbers, expressed either simply in decimal, or often in octet-based dot-decimal notation, familiar from IPv4 address notation.



OSPF router Types



OSPF defines the following router types:

• Area border router (ABR)

• Autonomous system boundary router (ASBR)

• Internal router (IR)

• Backbone router (BR)



The router type is an attribute of an OSPF process. A given physical router may have one or more OSPF processes. For example, a router that is connected to more than one area, and which receives routes from a BGP process connected to another AS, is both an area border router and an autonomous system boundary router.



Each router has an identifier, customarily written in the dotted-decimal format (e.g., 1.2.3.4) of an IP address. This identifier must be established in every OSPF instance. If not explicitly configured, the highest logical IP address will be duplicated as the router identifier. However, since the router identifier is not an IP address, it does not have to be part of any routable subnet in the network, and, to avoid confusion, often isn't.



These router types should not be confused with the terms designated router (DR), or backup designated router (BDR), which are attributes of a router interface, not the router itself.

References:

http://en.wikipedia.org/wiki/Open_Shortest_Path_First

Peer to Peer (P2P)

A peer-to-peer (abbreviated to P2P) computer network is one in which each computer in the network can act as a client or server for the other computers in the network, allowing shared access to various resources such as files, peripherals, and sensors without the need for a central server. P2P networks can be set up within the home, a business, or over the Internet. Each network type requires all computers in the network to use the same or a compatible program to connect to each other and access files and other resources found on the other computer. P2P networks can be used for sharing content such as audio, video, data, or anything in digital format.




P2P is a distributed application architecture that partitions tasks or workloads among peers. Peers are equally privileged participants in the application, and each computer in the network is referred to as a node. The owner of each computer on a P2P network sets aside a portion of its resources, such as processing power, disk storage, or network bandwidth, to be made directly available to other network participants, without the need for central coordination by servers or stable hosts. In this model, peers are both suppliers and consumers of resources, in contrast to the traditional client–server model, where only servers supply (send) and only clients consume (receive). Emerging collaborative P2P systems are going beyond the era of peers doing similar things while sharing resources, and are looking for diverse peers that can bring unique resources and capabilities to a virtual community, empowering it to engage in greater tasks than can be accomplished by individual peers, yet which are beneficial to all the peers.
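The dual supplier/consumer role of a peer can be illustrated with a toy in-memory model. The class and method names below are invented for illustration and do not correspond to any real P2P protocol:

```python
class Peer:
    """Toy peer: each node acts as both a server (serving its shared files)
    and a client (fetching files from neighbors). No central server exists."""
    def __init__(self, name, shared_files=None):
        self.name = name
        self.shared = dict(shared_files or {})  # filename -> content
        self.neighbors = []                      # directly known peers

    def connect(self, other):
        # Symmetric link: no central directory tracks who knows whom.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def fetch(self, filename):
        """Act as a client: ask each neighbor (acting as a server) in turn."""
        if filename in self.shared:
            return self.shared[filename]
        for peer in self.neighbors:
            if filename in peer.shared:
                content = peer.shared[filename]
                self.shared[filename] = content  # cache it, so we can serve it too
                return content
        return None

a = Peer("alice", {"song.mp3": b"..."})
b = Peer("bob")
a.connect(b)
print(b.fetch("song.mp3") is not None)  # → True; bob now also serves song.mp3
```

After the fetch, bob holds a copy and can supply it to other peers, which is exactly how capacity grows with participation in a P2P network.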



Social and economic impact

The concept of P2P is increasingly evolving to an expanded usage as the relational dynamic active in distributed networks, i.e., not just computer-to-computer, but human-to-human. Yochai Benkler has coined the term commons-based peer production to denote collaborative projects such as free and open source software and Wikipedia. Associated with peer production are the concepts of:

• peer governance (referring to the manner in which peer production projects are managed)

• peer property (referring to the new type of licenses which recognize individual authorship but not exclusive property rights, such as the GNU General Public License and the Creative Commons licenses)

• peer distribution (or the manner in which products, particularly peer-produced products, are distributed)



Applications

There are numerous applications of peer-to-peer networks:

• Content delivery

• Exchange of physical goods, services, or space

• Networking

• Science

• Search

• Communications networks



A peer-to-peer system of nodes without central infrastructure



References:

http://en.wikipedia.org/wiki/Peer-to-peer

Media Gateway Control Protocol

Media Gateway Control Protocol, also known as MGCP, is one implementation of the Media Gateway Control Protocol architecture for controlling media gateways on Internet Protocol (IP) networks and the public switched telephone network (PSTN). The general base architecture and programming interface are described in RFC 2805, and the current MGCP definition is RFC 3435. It is a successor to the Simple Gateway Control Protocol (SGCP).




MGCP is a signaling and call control protocol used within Voice over IP (VoIP) systems that typically inter-operate with the public switched telephone network (PSTN). As such it implements a PSTN-over-IP model with the power of the network residing in a call control center (softswitch, similar to the central office of the PSTN) and the endpoints being "low-intelligence" devices, mostly simply executing control commands. The protocol represents a decomposition of other VoIP models, such as H.323, in which the media gateways (e.g., H.323's gatekeeper) have higher levels of signaling intelligence.



MGCP uses the Session Description Protocol (SDP) for specifying and negotiating the media streams to be transmitted in a call session and the Real-time Transport Protocol (RTP) for framing of the media streams.



Another implementation of the Media Gateway Control Protocol architecture is the H.248/Megaco protocol.



MGCP is a master/slave protocol that allows a call control device, referred to in MGCP as a Call Agent, to take control of a specific port on a media gateway. This has the advantage of centralized gateway administration and provides for highly scalable IP telephony solutions. The distributed system is composed of a Call Agent, at least one media gateway (MG) that performs the conversion of media signals between circuit-switched and packet-switched networks, and, when connected to the PSTN, at least one signaling gateway (SG) for conversion from TDM voice to Voice over IP.



MGCP assumes a call control architecture in which the intelligence resides in the Call Agent at the core, with limited intelligence at the edge (endpoints and media gateways). MGCP assumes that Call Agents synchronize with each other to send coherent commands and responses to the gateways under their control.
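MGCP commands are plain text. A sketch of how a Call Agent might assemble a CreateConnection (CRCX) command, following the message format shown in RFC 3435 (the gateway endpoint name below is hypothetical):

```python
def build_crcx(transaction_id, endpoint, call_id, mode="sendrecv"):
    """Assemble an MGCP CreateConnection (CRCX) command, as a Call Agent
    would send it to a media gateway endpoint (format per RFC 3435)."""
    lines = [
        f"CRCX {transaction_id} {endpoint} MGCP 1.0",
        f"C: {call_id}",    # call identifier, a hex string chosen by the Call Agent
        "L: p:10, a:PCMU",  # requested local options: 10 ms packets, G.711 mu-law
        f"M: {mode}",       # connection mode
    ]
    return "\r\n".join(lines) + "\r\n"

msg = build_crcx(1204, "aaln/1@rgw.example.net", "A3C47F21456789F0")
print(msg)
```

The master/slave character of the protocol is visible here: the gateway does not decide anything about the call; it simply executes the command and returns a response carrying the same transaction ID.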



Gateway Control Protocol Relationship





References:

http://en.wikipedia.org/wiki/Media_Gateway_Control_Protocol

Passive Optical Network

A passive optical network (PON) is a point-to-multipoint, fiber to the premises network architecture in which unpowered optical splitters are used to enable a single optical fiber to serve multiple premises, typically 16-128. A PON consists of an optical line terminal (OLT) at the service provider's central office and a number of optical network units (ONUs) near end users. A PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures. A passive optical network is a form of fiber-optic access network.




Downstream signals are broadcast to all premises sharing a single fiber. Encryption can prevent eavesdropping.



Upstream signals are combined using a multiple access protocol, usually time division multiple access (TDMA). The OLTs "range" the ONUs in order to provide time slot assignments for upstream communication.



Network elements

A PON takes advantage of wavelength division multiplexing (WDM), using one wavelength for downstream traffic and another for upstream traffic. It uses the 1490 nanometer (nm) wavelength for downstream traffic and 1310 nm wavelength for upstream traffic. 1550 nm is reserved for optional overlay services, typically RF (analog) video.



A PON consists of a central office node, called an optical line terminal (OLT), one or more user nodes, called optical network units (ONUs) or optical network terminals (ONTs), and the fibers and splitters between them, called the optical distribution network (ODN).



A PON is a shared network, in that the OLT sends a single stream of downstream traffic that is seen by all ONUs. Each ONU only reads the content of those packets that are addressed to it.
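The shared-downstream behavior can be modeled in a few lines. Every ONU sees the whole broadcast stream but keeps only the frames addressed to it (the ONU identifiers below are illustrative):

```python
def onu_receive(downstream_frames, my_id):
    """Model of the shared PON downstream: every ONU sees every frame the
    OLT broadcasts, but reads only those addressed to it."""
    return [payload for dest, payload in downstream_frames if dest == my_id]

# One broadcast stream, as seen by every ONU on the splitter:
broadcast = [("onu1", "frame-a"), ("onu2", "frame-b"), ("onu1", "frame-c")]
print(onu_receive(broadcast, "onu1"))  # → ['frame-a', 'frame-c']
```

This is also why downstream encryption matters: physically, every ONU receives every frame, and only the addressing convention (plus encryption) keeps traffic private.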



Passive optical components

The drivers behind the modern passive optical network are the optical components that enable Quality of Service (QoS).



Single-mode, passive optical components include branching devices such as Wavelength-Division Multiplexer/De-multiplexers–(WDMs), isolators, circulators, and filters. These components are used in interoffice, loop feeder, Fiber In The Loop (FITL), Hybrid Fiber-Coaxial Cable (HFC), Synchronous Optical Network (SONET), and Synchronous Digital Hierarchy (SDH) systems; and other telecommunications networks employing optical communications systems that utilize Optical Fiber Amplifiers (OFAs) and Dense Wavelength Division Multiplexer (DWDM) systems.



The broad variety of passive optical components applications include multichannel transmission, distribution, optical taps for monitoring, pump combiners for fiber amplifiers, bit-rate limiters, optical connects, route diversity, polarization diversity, interferometers etc.





Downstream traffic in active (top) vs. passive optical network



Applicable Standards:

non-zero dispersion-shifted fiber – used by PON for both upstream and downstream traffic on different wavelengths.



References:

http://en.wikipedia.org/wiki/Passive_optical_network

Mobile Virtual Private Network

A mobile virtual private network (mobile VPN or mVPN) provides mobile devices with access to network resources and software applications on their home network, when they connect via other wireless or wired networks.




Mobile VPNs are used in environments where workers need to keep application sessions open at all times, throughout the working day, as they connect via various wireless networks, encounter gaps in coverage, or suspend and resume their devices to preserve battery life. A conventional VPN cannot survive such events because the network tunnel is disrupted, causing applications to disconnect, time out, or fail, or even causing the computing device itself to crash.



Makers of mobile VPNs draw a distinction between remote access and mobile environments. A remote-access user typically establishes a connection from a fixed endpoint, launches applications that connect to corporate resources as needed, and then logs off. In a mobile environment, the endpoint changes constantly (for instance, as users roam between different cellular networks or Wi-Fi access points). A mobile VPN maintains a virtual connection to the application at all times as the endpoint changes, handling the necessary network logins in a manner transparent to the user.



Functions

The following are functions common to mobile VPNs

• Persistence – Open applications remain active, open and available when the wireless connection changes or is interrupted, a laptop goes into hibernation, or a handheld user suspends and resumes the device

• Roaming – Underlying virtual connection remains intact when the device switches to a different network; the mobile VPN handles the logins automatically

• Application compatibility – Software applications that run in an "always-connected" wired LAN environment run over the mobile VPN without modification

• Security – Enforces authentication of the user, the device, or both; as well as encryption of the data traffic in compliance with security standards such as FIPS 140-2

• Acceleration – Link optimization and data compression improve performance over wireless networks, especially on cellular networks where bandwidth may be constrained.

• Strong authentication – Enforces two-factor authentication or multi-factor authentication using some combination of a password, smart card, public key certificate or biometric device; required by some regulations, notably for access to CJIS systems in law enforcement
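The persistence and roaming functions above can be sketched as a toy state model: the application binds to a stable virtual address, while the underlying physical attachment changes or drops without ending the session. All names are illustrative, not any vendor's API:

```python
class MobileVpnSession:
    """Toy model of mobile VPN persistence and roaming."""
    def __init__(self, virtual_ip):
        self.virtual_ip = virtual_ip   # what applications see; never changes
        self.physical_network = None   # current Wi-Fi / cellular attachment
        self.app_state = "connected"   # the application session stays open

    def roam(self, network):
        # Re-authenticate to the VPN gateway over the new network,
        # transparently to the user and the application.
        self.physical_network = network

    def coverage_gap(self):
        # The tunnel is suspended, not torn down; apps keep their sockets.
        self.physical_network = None

s = MobileVpnSession("10.8.0.5")
s.roam("office-wifi")
s.coverage_gap()        # driving through a dead zone
s.roam("carrier-lte")   # resume on cellular
print(s.virtual_ip, s.app_state)  # → 10.8.0.5 connected
```

The key design point is that the application-facing address and session state are decoupled from the physical network, which is exactly what a conventional VPN lacks.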



Industries and applications

Mobile VPNs have found uses in a variety of industries, where they give mobile workers access to software applications:

• Public Safety

• Home Care

• Hospitals and Clinics

• Field Service

• Utilities

• Insurance



In telecommunications

In telecommunication, a mobile VPN is a solution that integrates all offices and employees in a common network that includes all mobile and desk phones. Using mVPNs the company has the following advantages:

• Direct connectivity – the corporate network becomes part of mobile operator's network through direct connection

• Private numbering plan – the communication is tailored to company organization

• Corporate Business Group – all offices and employees are part of one common group, that includes all mobile and desk phones

• Short dialing – a short number to access each employee

• Smart Divert – easy divert within company group

• Groups and subgroups – several sub-groups can be defined within the group, each with different charging as well as a separate numbering plan

• Call control – certain destinations can be allowed or barred on both mobile and desk phones.



References:

http://en.wikipedia.org/wiki/Mobile_virtual_private_network



e-UTRAN





e-UTRAN or eUTRAN is the air interface of 3GPP's Long Term Evolution (LTE) upgrade path for mobile networks. The name abbreviates evolved UMTS Terrestrial Radio Access Network; the air interface itself is known as Evolved Universal Terrestrial Radio Access (E-UTRA), the term used in early drafts of the 3GPP LTE specification.



It is a radio access network standard meant to be a replacement of the UMTS, HSDPA and HSUPA technologies specified in 3GPP releases 5 and beyond. Unlike HSPA, LTE's E-UTRA is an entirely new air interface system, unrelated to and incompatible with W-CDMA. It provides higher data rates, lower latency and is optimized for packet data. It uses OFDMA radio-access for the downlink and SC-FDMA on the uplink.



Rationale for E-UTRA

Although UMTS, with HSDPA and HSUPA and their evolutions, delivers high data transfer rates, wireless data usage is expected to continue increasing significantly over the coming years, due to the growing offering of, and demand for, services and content on the move, and the continued reduction of costs for the end user. This increase is expected to require not only faster networks and radio interfaces but also more cost-efficient ones than the evolution of the current standards can provide. Thus the 3GPP consortium set the requirements for a new radio interface (E-UTRAN) and a core network evolution (System Architecture Evolution, SAE) to fulfill this need. These improvements in performance allow wireless operators to offer quadruple-play services: voice, high-speed interactive applications including large data transfers, and feature-rich IPTV, with full mobility.



Starting with the 3GPP Release 8, e-UTRA is designed to provide a single evolution path for the GSM/EDGE, UMTS/HSPA, CDMA2000/EV-DO and TD-SCDMA radio interfaces, providing increases in data speeds, and spectral efficiency, and allowing the provision of more functionality.





EUTRAN architecture as part of a LTE and SAE network



Features:

• Peak download rates of 299.6 Mbit/s for 4x4 antennas, 150.8 Mbit/s for 2x2 antennas with 20 MHz of spectrum.

• Peak upload rates of 75.4 Mbit/s for every 20 MHz of spectrum.

• Low data-transfer latencies (sub-5 ms for small IP packets in optimal conditions) and lower latencies for handover and connection setup.

• Support for terminals moving at up to 350 km/h or 500 km/h depending on the frequency band.

Reference:

http://en.wikipedia.org/wiki/E-UTRA




Integrated Services Digital Network (ISDN)

Integrated Services Digital Network is a set of communications standards for simultaneous digital transmission of voice, video, data, and other network services over the traditional circuits of the public switched telephone network. It was first defined in 1988 by the CCITT (International Telegraph and Telephone Consultative Committee). The key feature of ISDN is that it integrates speech and data on the same lines, adding features that were not available in the classic telephone system.




ISDN is a circuit-switched telephone network system, which also provides access to packet-switched networks, designed to allow digital transmission of voice and data over ordinary telephone copper wires, resulting in potentially better voice quality than an analog phone can provide. It offers circuit-switched connections (for either voice or data) and packet-switched connections (for data) in increments of 64 kbit/s. In a videoconference, ISDN provides simultaneous voice, video, and text transmission between individual desktop videoconferencing systems and group videoconferencing systems.



There are two levels of service: the Basic Rate Interface (BRI), intended for the home and small enterprise, and the Primary Rate Interface (PRI), for larger users. Both rates include a number of B-channels and D-channels. Each B-channel carries data, voice, and other services. Each D-channel carries control and signaling information.



• Basic Rate Interface: The entry-level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s service delivered over a pair of standard telephone copper wires. The 144 kbit/s overall payload rate is divided into two 64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D' channel, or delta channel); the 128 kbit/s figure counts the bearer channels only. This configuration is sometimes referred to as 2B+D.



• Primary Rate Interface: The other ISDN access available is the Primary Rate Interface (PRI), which is carried over an E1 (2048 kbit/s) in most parts of the world. An E1 is 30 'B' channels of 64 kbit/s, one 'D' channel of 64 kbit/s and a timing and alarm channel of 64 kbit/s.
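The channel arithmetic for both interfaces checks out as follows:

```python
# BRI (2B+D): two 64 kbit/s bearer channels plus one 16 kbit/s D channel.
bri_bearer = 2 * 64            # usable data/voice capacity, kbit/s
bri_payload = bri_bearer + 16  # total payload including signaling, kbit/s
print(bri_bearer, bri_payload)  # → 128 144

# PRI over E1: 30 B channels, one 64 kbit/s D channel,
# plus one 64 kbit/s timing-and-alarm channel.
pri_channels = 30 * 64 + 64 + 64  # kbit/s
print(pri_channels)  # → 2048
```

The PRI total matches the E1 line rate of 2048 kbit/s, since all 32 time slots (30 B + D + framing/alarm) are accounted for.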



Reference:

http://en.wikipedia.org/wiki/Integrated_Services_Digital_Network

http://searchenterprisewan.techtarget.com/definition/ISDN

Network Management Overview

Network management is a mission-critical factor in successfully operating a network and the business. It ensures that all networking equipment and other resources are deployed effectively, increases the availability of the network and the quality of services, and ensures the security of information and of the network itself. In the case of a service provider, it also provides accurate accounting information for billing.


There are many different reference models, technologies, systems and tools covering the various functions of network management. Among the reference models, the best known is ISO's FCAPS: Fault, Configuration, Accounting, Performance and Security. ITU-T proposed a model called the Telecommunications Management Network (TMN). A newer one, proposed by the TeleManagement Forum, is TOM (Telecom Operations Map), or eTOM (enhanced Telecom Operations Map). The traditional model most widely deployed by service providers is OAM&P: Operation, Administration, Maintenance and Provisioning.

There are many network management technologies and protocols which address some of the network management functions. The most popular technology deployed in the TCP/IP data communication network is the Simple Network Management Protocol (SNMP) defined by IETF. Another popular protocol is the Common Management Information Protocol (CMIP) and Common Management Information Service (CMIS) defined by ISO.
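SNMP identifies every managed object by an OID, a dotted integer path in a global naming tree. A small sketch of that naming scheme, using a few standard MIB-II object identifiers (the lookup table is a tiny illustrative excerpt, not a real MIB parser):

```python
# A small excerpt of standard MIB-II object identifier assignments.
MIB2_NAMES = {
    "1.3.6.1.2.1.1.1": "sysDescr",
    "1.3.6.1.2.1.1.3": "sysUpTime",
    "1.3.6.1.2.1.2.2": "ifTable",
}

def describe(oid):
    """Strip the trailing instance index and look up the object name."""
    prefix, _, instance = oid.rpartition(".")
    name = MIB2_NAMES.get(prefix, "unknown")
    return f"{name}.{instance}"

# Scalar objects are addressed with instance 0:
print(describe("1.3.6.1.2.1.1.3.0"))  # → sysUpTime.0
```

A real SNMP manager resolves these names against compiled MIB files rather than a hand-written table, but the tree-structured naming works the same way.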

There are many types of systems available for various purposes of network management, which help network management professionals to manage and operate the network and services daily. However, there is no single solution available to address all the network management requirements. Each system may cover one or several functions.

A Typical Network Management Architecture





Reference:

http://www.networkdictionary.com/Telecom/Network-Management-Overview.php

Average revenue per user (ARPU)

Average revenue per user (sometimes average revenue per unit), usually abbreviated to ARPU, is a measure used primarily by consumer communications and networking companies, defined as total revenue divided by the number of subscribers. The term is used by companies that offer subscription services to clients, for example telephone carriers, Internet service providers, and hosts. It is a measure of the revenue generated by one customer unit (phone, pager, etc.) per unit time, typically per month or year. In mobile telephony, ARPU includes not only the revenues billed to the customer each month for usage, but also the revenue generated from incoming calls, payable within the regulatory interconnection regime.




There is a trend by telecommunications and internet companies and their suppliers to sell extra services to users and a lot of the promotion that is used by these companies talk of increased ARPU for these operators. It typically manifests in the form of value-added services such as entertainment being sold to customers especially in markets where the primary service offered to the customer, such as the telephony or Internet service, is sold at a commodity rate.



Method of calculation: To calculate the ARPU, a standard time period must be defined; most telecommunications carriers operate by the month. The total revenue generated by all units (paying subscribers or communications devices) during that period is determined, and that figure is divided by the number of units. Because the number of units can vary from day to day, the average number of units must be calculated or estimated for a given month to obtain the most accurate possible ARPU figure for that month.
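The calculation described above is straightforward; the subscriber counts and revenue figure below are hypothetical:

```python
def arpu(total_revenue, subscriber_counts):
    """ARPU for one period: total revenue divided by the average number of
    units over the period, since subscriber counts vary day to day."""
    avg_units = sum(subscriber_counts) / len(subscriber_counts)
    return total_revenue / avg_units

# Hypothetical month: 1.02M in revenue against ~100k subscribers.
subs = [99000, 100000, 101000]          # subscriber counts sampled over the month
print(round(arpu(1_020_000, subs), 2))  # → 10.2
```

Averaging the unit count, rather than taking the count on a single day, is what keeps the figure honest when churn or growth shifts the subscriber base mid-period.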



The ARPU can be broken down according to income-producing categories. For example, monthly or annual subscriber fees generate a steady revenue stream but do not take into account short-term changes in customer usage habits. The income generated by "excess minutes," roaming services or incoming calls can be highly variable. New, novel features may temporarily generate higher ARPU figures than established, proven functions. The ARPU can be calculated for each feature to identify sources of the greatest income per unit.



References:

http://en.wikipedia.org/wiki/Average_revenue_per_user

http://searchtelecom.techtarget.com/definition/average-revenue-per-user



VDI - Virtual desktop infrastructure

VDI (virtual desktop infrastructure) is a computing model that adds a layer of virtualization between the server and the desktop PCs. A VDI environment allows a company's IT staff to centrally manage thin-client machines, a mutually beneficial arrangement for both end users and IT administrators. VDI provides a seamless user experience and superior data security: because the desktop image is hosted in the data center, sensitive data stays in the corporate data center rather than on an end user's machine, which can be lost, stolen, or even destroyed. VDI thus reduces the risks inherent in every aspect of the user environment. At the same time, the end-user experience remains familiar: the desktop looks just like their desktop, and the thin-client machine performs just like the desktop PC they have grown comfortable with and accustomed to.






VDI is not one product, but rather a technology consisting of five separate components:

• Thin Client Computer

o Most leading thin client manufacturers are coming out with new devices geared toward VDI. The only difference between these devices and their standard thin client offerings is one or more built-in 3rd-party connection brokers. Some also offer local graphics acceleration, where MPEG1 and MPEG2 are rendered locally using the thin client's display adapter, while others offer VoIP soft-phone support. Although any computer could act as a thin client device, true thin client terminals are more often the choice for VDI, since companies don't want to continue managing a client OS.

• 3rd Party Connection Broker

o The Connection Broker is the brains of the architecture that determines which Remote Desktop Host (XP Pro or Vista) a user is assigned or connected to. The broker is often a full-blown management product allowing for the automatic deployment and provisioning of Remote Desktop Hosts.

• Virtualized Remote Desktop Host

o Single User Windows XP Pro, Windows Vista or Linux Client OS Hosts, Virtualized on VMware. Client computers connect to these hosts via remote display protocols like Microsoft RDP, Citrix ICA or NX.

• VMware Infrastructure 3 Server (VI3)

o VMware ESX Server software allows for hosting of hardware agnostic Virtual Machines. In the case of VDI, ESX is used to host many Virtual Machines of the Remote Desktop Host Operating Systems.

• VMware VirtualCenter

o Software component for managing ESX Server and libraries of Virtual Machines
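The connection broker's core job, assigning users to desktop hosts and reconnecting them to the same host, can be sketched as follows. This is a toy model; host names and behavior are invented for illustration and do not reflect any particular broker product:

```python
class ConnectionBroker:
    """Toy connection broker: maps users to virtual desktop hosts, reusing
    an existing assignment when the user reconnects."""
    def __init__(self, hosts):
        self.free = list(hosts)  # pool of provisioned desktop VMs
        self.assigned = {}       # user -> host

    def connect(self, user):
        if user in self.assigned:
            return self.assigned[user]  # reconnect: same desktop as before
        if not self.free:
            raise RuntimeError("no desktop hosts available")
        host = self.free.pop(0)
        self.assigned[user] = host
        return host

broker = ConnectionBroker(["vm-xp-01", "vm-xp-02"])
print(broker.connect("alice"))  # → vm-xp-01
print(broker.connect("alice"))  # → vm-xp-01 (same host on reconnect)
```

A production broker additionally handles authentication, load balancing, and on-demand provisioning of new desktop VMs, which is why the section above describes it as a full-blown management product.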



Reference:

http://en.wikipedia.org/wiki/Desktop_virtualization

http://searchvirtualdesktop.techtarget.com/feature/What-is-VDI-technology

http://www.virtualizationadmin.com/articles-tutorials/vdi-articles/general/virtual-desktop-infrastructure-overview.html