A Seminar Report On Content Delivery Networks (CDN)
#1

ABSTRACT
Content Delivery Networks (CDNs) first evolved in 1998 as a technique for improving web performance by replicating web content over several surrogate servers (mirrored servers) strategically placed at various locations to deal with flash crowds. A set of surrogate servers distributed around the world caches the origin server's content, while routers and other network elements direct content requests to the optimal location and the optimal surrogate server. Under a CDN, the single client-server communication is replaced by two communication flows: one between the client and the surrogate server, and another between the surrogate and the origin server. The key advantages of Content Delivery Networks are that they improve content delivery quality, speed, and reliability, reduce the load on the origin server, and bypass traffic jams over the web.
INTRODUCTION TO CDN
Content Delivery Networks (CDNs) first evolved in 1998 as a technique for improving web performance by replicating web content over several surrogate servers (mirrored servers) strategically placed at various locations to deal with flash crowds. A set of surrogate servers distributed around the world caches the origin server's content, while routers and other network elements direct content requests to the optimal location and the optimal surrogate server. Under a CDN, the single client-server communication is replaced by two communication flows: one between the client and the surrogate server, and another between the surrogate and the origin server. The figure below shows an overview of a Content Delivery Network. Akamai, Limelight, and Mirror Image are some examples of Content Delivery Networks: Akamai is the CDN of the website discovery.com, Limelight specializes in the live delivery of video, audio, and games, and Mirror Image focuses on online content and application delivery.
The important benefits of Content Delivery Networks are:

• Improved content delivery quality, speed, and reliability
• Reduced load on the origin server
• Bypassing of traffic jams over the web

ORGANIZATION OF CDN
Two approaches are used for building a CDN:
1. Overlay approach
2. Network approach
Overlay approach:
In the overlay approach, application-specific servers and caches at several places in the network handle the distribution of specific content types. Other than providing basic network connectivity and guaranteed QoS for specific requests/traffic, the core network components such as routers and switches play no active role in content delivery. Most commercial CDN providers, such as Akamai and Limelight Networks, follow the overlay approach for CDN organization. These CDN providers replicate content to cache servers worldwide. When content requests are received from end users, they are redirected to the nearest CDN server, thus improving web site response time. Since the CDN provider need not control the underlying network elements, management is simplified in the overlay approach, and it opens opportunities for new services.
Network approach:
In the network approach, the network components, including routers and switches, are equipped with code for identifying specific application types and for forwarding requests based on predefined policies. Examples of this approach include devices that redirect content requests to local caches or switch traffic to specific servers optimized to serve specific content types. Some CDNs use both the network and overlay approaches for CDN organization. In such a case, a network element can act at the front end of a server farm and redirect each content request to a nearby application-specific surrogate server.

ISSUES IN CONTENT DELIVERY NETWORKS
The major issues related to CDNs are:
1. Surrogate placement
2. Content selection and delivery
3. Content outsourcing
They are explained below.
Surrogate Placement
Choosing the best location for each surrogate server is important for any CDN infrastructure. Determining the best network locations for CDN surrogate servers is known as the web server replica placement problem. Three main approaches are used for selecting the locations of CDN servers:
1. Center placement
2. Hot spot
3. Topology-informed
Center placement:
Theoretical approaches such as the minimum k-center problem and k-Hierarchically well-Separated Trees (k-HST) model the server placement problem as the center placement problem, which is defined as follows: for the placement of a given number of centers, minimize the maximum distance between a node and its nearest center. The k-HST algorithm solves the server placement problem using graph theory. In this approach, the network is represented as a graph G(V,E), where V is the set of nodes and E ⊆ V × V is the set of links. The algorithm consists of two phases. In the first phase, a node is arbitrarily selected from the complete graph (the parent partition), and all the nodes within a random radius of this node form a new partition (a child partition). The radius of the child partition is a factor of k smaller than the diameter of the parent partition. This process continues until each node is in a partition of its own. Thus the graph is recursively partitioned, and a tree of partitions is obtained, with the root being the entire network and the leaves being individual nodes of the network. In the second phase, a virtual node is assigned to each partition at each level. Each virtual node in a parent partition becomes the parent of the virtual nodes in its child partitions, and together the virtual nodes form a tree. Afterwards, a greedy strategy is applied to find the number of centers needed for the resulting k-HST tree when the maximum center-node distance is bounded by D.
The minimum k-center problem can be described as follows: (1) Given a graph G(V,E) with all its edges arranged in non-decreasing order of edge cost c, that is, c(e1) ≤ c(e2) ≤ ... ≤ c(em), construct the set of square graphs G²1, G²2, ..., G²m, where Gi = (V, {e1, ..., ei}) and the square graph G²i contains the nodes V and an edge (u,v) wherever u and v are at most two hops apart in Gi. (2) Compute a maximal independent set Mi for each G²i. An independent set of G²i is a set of nodes of Gi that are at least three hops apart in Gi, and a maximal independent set M is an independent set V′ such that all nodes in V − V′ are at most one hop away from nodes in V′. (3) Find the smallest i such that |Mi| ≤ K, and call it j. Then Mj is the set of K centers.
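The greedy flavor of center placement can be made concrete with the classic farthest-point heuristic for k-center, which is a 2-approximation of the optimal placement radius. The sketch below is illustrative rather than the exact k-HST procedure; the distance matrix and node names are invented.

```python
# Greedy farthest-point heuristic for the k-center placement problem.
# Illustrative sketch: picks the first center arbitrarily, then repeatedly
# adds the node farthest from all chosen centers. Gives a 2-approximation
# of the optimal maximum client-to-center distance.

def greedy_k_center(dist, k):
    """dist[u][v]: symmetric hop/latency distance; k: number of surrogates."""
    nodes = list(dist.keys())
    centers = [nodes[0]]                      # arbitrary first center
    while len(centers) < k:
        # pick the node whose nearest already-chosen center is farthest away
        far = max(nodes, key=lambda v: min(dist[v][c] for c in centers))
        centers.append(far)
    radius = max(min(dist[v][c] for c in centers) for v in nodes)
    return centers, radius

# Tiny example with made-up latencies (milliseconds):
dist = {
    "A": {"A": 0, "B": 10, "C": 40, "D": 55},
    "B": {"A": 10, "B": 0, "C": 35, "D": 50},
    "C": {"A": 40, "B": 35, "C": 0, "D": 15},
    "D": {"A": 55, "B": 50, "C": 15, "D": 0},
}
print(greedy_k_center(dist, 2))   # (['A', 'D'], 15)
```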
Hot spot:
The hot spot algorithm places replicas near the clients that generate the greatest load. It sorts the N potential sites according to the amount of traffic generated in their surroundings and places replicas at the top M sites that generate the maximum traffic.
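As a minimal sketch of this heuristic, the code below ranks candidate sites by nearby request load and picks the busiest M; the traffic figures are invented.

```python
# Hot-spot placement sketch: rank candidate sites by the request load
# observed in their vicinity and replicate at the busiest M sites.
# The traffic numbers are made up for illustration.

def hot_spot_placement(site_traffic, m):
    """site_traffic: {site: requests/sec nearby}; m: replicas to place."""
    ranked = sorted(site_traffic, key=site_traffic.get, reverse=True)
    return ranked[:m]

site_traffic = {"london": 900, "tokyo": 1400, "chicago": 600, "sydney": 300}
print(hot_spot_placement(site_traffic, 2))   # ['tokyo', 'london']
```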
Topology-informed:
In this strategy, servers are placed on candidate hosts in descending order of out-degree (the number of other nodes connected to a node). The assumption here is that nodes with higher out-degrees can reach more nodes with smaller latency.
For surrogate server placement, CDN administrators also determine the optimal number of surrogate servers using the single-ISP or multi-ISP approach. In the single-ISP approach, a CDN provider typically deploys at least 40 surrogate servers around the network edge to support content delivery. The policy in a single-ISP approach is to put one or two surrogates in each major city within the ISP's coverage, and the ISP equips the surrogates with large caches. An ISP with a global network can thus have extensive geographical coverage without relying on other ISPs. The drawback of this approach is that the surrogates may be placed far from the clients of the CDN provider. In the multi-ISP approach, the CDN provider places numerous surrogate servers at as many global ISP Points of Presence (POPs) as possible. It overcomes the problems of the single-ISP approach: surrogates are placed close to the users, so content is delivered reliably and in a timely manner from the requesting client's ISP. Large CDN providers such as Akamai have more than 25,000 servers. Other than the cost and complexity of setup, the main disadvantage of the multi-ISP approach is that each surrogate server receives fewer (or no) content requests, which may result in idle resources and poor CDN performance. Estimates of the performance of these two approaches show that the single-ISP approach works better for sites with low-to-medium traffic volumes, while the multi-ISP approach is better for high-traffic sites.
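Returning to the topology-informed strategy itself, the ranking step amounts to sorting candidate hosts by out-degree. The sketch below is a minimal illustration; the adjacency lists are invented.

```python
# Topology-informed placement sketch: pick surrogate hosts in descending
# order of out-degree, assuming well-connected nodes reach more clients
# with lower latency. The adjacency lists are illustrative.

def topology_informed_placement(adjacency, m):
    """adjacency: {node: set of neighbor nodes}; m: surrogates to place."""
    by_degree = sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True)
    return by_degree[:m]

adjacency = {
    "r1": {"r2", "r3", "r4", "r5"},
    "r2": {"r1", "r3"},
    "r3": {"r1", "r2", "r4"},
    "r4": {"r1", "r3"},
    "r5": {"r1"},
}
print(topology_informed_placement(adjacency, 2))   # ['r1', 'r3']
```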
Content Selection and Delivery
The efficiency of content delivery lies in the right selection of the content to be delivered to the end users. An appropriate content selection approach can help reduce client download time and server load. Content can be delivered to customers in two ways:


1. Full-site content selection and delivery
2. Partial-site content selection and delivery
Full-site content selection and delivery:
In this approach, surrogate servers perform entire replication in order to deliver the total content of a site to the end users. The content provider configures its DNS in such a way that all client requests for the web site are resolved by a CDN server, which then delivers all of the content. The main advantage of this approach is its simplicity. However, it is not feasible in practice: the sizes of web objects grow day by day, so there is a chance of insufficient storage space on the CDN servers, and because web objects are not static, updating such a huge collection of objects is unmanageable.
Partial-site content selection and delivery:
In partial-site content selection and delivery, surrogate servers perform partial replication of the web objects. Only embedded objects such as web images are delivered from the CDN servers. (An object created with one application and embedded into a document created by another application is called an embedded object.) Thus the base HTML page is retrieved from the origin server, while embedded objects are retrieved from CDN cache servers. Partial-site delivery requires selecting which web objects, out of a large collection, to outsource. There are mainly three methods:
1. Empirical-based
2. Popularity-based
3. Cluster-based
Empirical-based:
In this approach, the web site administrator selects the content to be replicated to the CDN servers, using heuristics to make the empirical decision. The main drawback of this approach lies in the uncertainty of choosing the right heuristics.
Popularity-based:
In this approach, the most popular objects are replicated to the surrogates. This approach is time consuming, and reliable object request statistics are not guaranteed, because the popularity of each object varies considerably. Moreover, such statistics are not available for newly introduced content.
Cluster-based:
In a cluster-based approach, web content is grouped based on either correlation or access frequency and is replicated in units of content clusters. A simple popularity-driven selection is sketched below.
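The sketch illustrates the popularity-driven flavor of these selection methods: it outsources any object whose share of total requests crosses a threshold. The access log and the 5% threshold are assumptions for the example.

```python
# Popularity-based selection sketch: outsource to the CDN only those
# embedded objects whose request share exceeds a threshold. The access
# log and the 5% threshold are illustrative assumptions.
from collections import Counter

def select_popular_objects(access_log, threshold=0.05):
    """access_log: iterable of requested object URLs."""
    counts = Counter(access_log)
    total = sum(counts.values())
    return {obj for obj, n in counts.items() if n / total >= threshold}

log = ["/img/logo.png"] * 50 + ["/img/banner.jpg"] * 30 + ["/img/rare.gif"] * 2
print(select_popular_objects(log))   # {'/img/logo.png', '/img/banner.jpg'}
```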
Content Outsourcing
Choosing an efficient content outsourcing technique is as important as the placement of surrogate servers and the selection of content for delivery. Content outsourcing is performed using:
1. Cooperative push-based
2. Non-cooperative pull-based
3. Cooperative pull-based
Cooperative push-based:
The cooperative push-based approach depends on the pre-fetching of content to the surrogates. Content is pushed to surrogate servers from the origin, and surrogate servers cooperate to reduce replication and update costs. In this scheme, the CDN maintains a mapping between content and surrogate servers; each request is directed to the closest surrogate server, and if that server cannot handle the request, the request is directed to the origin server. In this approach, a heuristic algorithm is suitable for making replication decisions among the cooperating servers.
Non-cooperative pull-based:
In this approach, client requests are directed to their closest surrogate servers. If there is a cache miss, the surrogate server pulls content from the origin server. Most CDN providers, such as Akamai and Mirror Image, use this approach. Its drawback is that an optimal server is not always chosen to serve a content request. Many CDNs use this approach since the cooperative push-based approach is still at the experimental stage.
Cooperative pull-based:
In this approach, surrogate servers cooperate with each other to get the requested content in the case of a cache miss. Client requests are directed to the closest surrogate server through DNS redirection. Using a distributed index, the surrogate server finds nearby copies of the requested content and stores it in its cache.
In the context of content outsourcing, it is crucial to determine in which surrogate servers the outsourced content should be replicated. Several works in the literature demonstrate the effectiveness of different replication strategies for outsourced content. Kangasharju et al. [54] have used four heuristics, namely random, popularity, greedy-single, and greedy-global, for replication of outsourced content. Tse [94] has presented a set of greedy approaches where placement is performed by balancing the loads and sizes of the surrogate servers. Pallis et al. [72] have presented a self-tuning, parameterless algorithm called lat-cdn for optimally placing outsourced content on a CDN's surrogate servers. This algorithm uses an object's latency to make replication decisions, where an object's latency is defined as the delay between a request for a web object and receiving the object in its entirety. An improvement of the lat-cdn algorithm is il2p [70], which places the outsourced objects on surrogate servers with respect to both the latency and the load of the objects.
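To make the pull-based flow concrete, the sketch below shows a surrogate that serves from its local cache on a hit and pulls from the origin on a miss; the origin-fetch function and class names are illustrative.

```python
# Non-cooperative pull-based outsourcing sketch: a surrogate serves from
# its local cache and pulls from the origin only on a cache miss.
# fetch_from_origin() is a stand-in for an HTTP request to the origin.

class Surrogate:
    def __init__(self, fetch_from_origin):
        self.cache = {}                       # url -> content
        self.fetch_from_origin = fetch_from_origin

    def get(self, url):
        if url in self.cache:                 # cache hit: serve locally
            return self.cache[url]
        content = self.fetch_from_origin(url) # cache miss: pull from origin
        self.cache[url] = content             # store for future requests
        return content

edge = Surrogate(lambda url: f"<content of {url}>")
edge.get("/video/intro.mp4")   # miss: pulled from the origin and cached
edge.get("/video/intro.mp4")   # hit: served from the surrogate cache
```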



INTERACTION PROTOCOLS
Several protocols are used in a CDN for interaction between network elements:
1. Network Element Control Protocol (NECP)
2. Web Cache Control Protocol (WCCP)
3. Cache Array Routing Protocol (CARP)
4. Internet Cache Protocol (ICP)
5. Hyper Text Caching Protocol (HTCP)
Network Element Control Protocol (NECP)
NECP is a protocol used for signaling between servers and the network elements that forward traffic to them. It is a lightweight protocol (a protocol is called lightweight if it is designed with low complexity in order to reduce overhead, e.g. by using fixed-length headers). The network elements consist of a range of devices, including content-aware switches and load-balancing routers. NECP does not impose any load-balancing policy; rather, it provides methods for network elements to learn about server capabilities and availability, and hints as to which flows can and cannot be served, so that the network elements can make load-balancing decisions.
NECP uses the Transmission Control Protocol (TCP). When a server is initialized, it establishes a TCP connection to the network elements using a well-known port number. Messages can then be sent bi-directionally between the server and the network elements. Most messages consist of a request followed by a reply or acknowledgement. Receiving a positive acknowledgement implies the recording of some state in a peer system, which can be assumed to remain in that peer until it expires or the peer crashes. Application-level KEEPALIVE messages are used to detect a crashed peer in such communications. When a node detects that its peer has crashed, it assumes that all the state in that peer needs to be reinstalled after the peer is revived.

Web Cache Control Protocol (WCCP)
The Web Cache Control Protocol (WCCP) specifies the interaction between one or more routers and one or more web caches. It runs between a router functioning as a redirecting network element and interception proxies. The purpose of this interaction is to establish and maintain the transparent redirection of selected types of traffic flowing through a group of routers. The selected traffic is redirected to a group of web caches in order to increase resource utilization and to minimize response time. WCCP allows one or more proxies to register with a single router to receive redirected traffic. This traffic includes user requests to view pages and graphics on World Wide Web (WWW) servers, whether internal or external to the network, and the replies to those requests. The protocol allows one of the proxies, the designated proxy, to dictate to the router how redirected traffic is distributed across the caching proxy array. WCCP provides the means to negotiate the specific method used to distribute load among the web caches, as well as methods to transport traffic between router and cache.

Cache Array Routing Protocol (CARP)
The Cache Array Routing Protocol (CARP) [96] is a distributed caching protocol based on a known list of loosely coupled proxy servers and a hash function for dividing the URL space among those proxies. An HTTP client implementing CARP can route requests to any member of the proxy array. The proxy array membership table is defined as a plain ASCII text file retrieved from an Array Configuration URL. The hash function and the routing algorithm of CARP take a member proxy defined in the proxy array membership table and make an on-the-fly determination of which proxy array member should be the proper container for a cached version of the resource pointed to by a URL. Since requests are sorted among the proxies, duplication of cache content is eliminated and global cache hit rates are improved.
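CARP's hash-based division of the URL space can be illustrated with rendezvous (highest-random-weight) hashing, which its combined URL-and-proxy hash resembles: each client scores every (URL, proxy) pair and routes to the highest-scoring member, so all clients agree on the same mapping. This is an illustrative sketch, not the exact CARP hash function; the proxy names are made up.

```python
# CARP-style routing sketch using rendezvous (highest-random-weight)
# hashing: each (URL, proxy) pair gets a deterministic score, and the
# request is routed to the highest-scoring proxy. Every client computes
# the same mapping, so each URL is cached on exactly one array member.
import hashlib

def route(url, proxies):
    def score(proxy):
        digest = hashlib.md5((proxy + url).encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(proxies, key=score)

proxies = ["cache1.example.net", "cache2.example.net", "cache3.example.net"]
print(route("http://origin.example.com/img/logo.png", proxies))
```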
Internet Cache Protocol (ICP)
The Internet Cache Protocol (ICP) is a lightweight message format used for inter-cache communication. Caches exchange ICP queries and replies to gather information for selecting the most appropriate location from which to retrieve an object. Besides functioning as an object location protocol, ICP messages can also be used for cache selection. ICP is a widely deployed protocol: although web caches use HTTP for the transfer of object data, most caching proxy implementations support ICP in some form. It is used in a caching proxy mesh to locate specific web objects in neighboring caches: one cache sends an ICP query to its neighbors, and the neighbors respond with ICP replies indicating a HIT or a MISS. Failure to receive a reply from the neighbors within a short period of time implies that the network path is either congested or broken. Usually, ICP is implemented on top of the User Datagram Protocol (UDP) in order to provide important features to web caching applications. Since UDP is an unreliable, connectionless transport protocol, an estimate of network congestion and availability can be calculated from ICP loss; this loss measurement, together with the round-trip time, provides a way to balance load among caches.
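For a concrete feel of the message format, the sketch below builds an ICP v2 query datagram following the RFC 2186 field layout (a 20-byte header, then, for queries, a 4-byte requester address and a null-terminated URL). The request number and the neighbor cache address are placeholders; ICP conventionally runs on UDP port 3130.

```python
# Sketch of an ICP v2 query (field layout per RFC 2186): a 20-byte header
# followed, for queries, by a 4-byte requester host address and a
# null-terminated URL. The neighbor address below is a placeholder.
import socket, struct

ICP_OP_QUERY = 1   # opcodes: 1=QUERY, 2=HIT, 3=MISS (RFC 2186)

def build_icp_query(request_number, url):
    payload = struct.pack("!I", 0) + url.encode() + b"\x00"
    header = struct.pack(
        "!BBHIIII",
        ICP_OP_QUERY,        # opcode
        2,                   # ICP version
        20 + len(payload),   # total message length
        request_number,      # matches the query to its reply
        0, 0, 0,             # options, option data, sender host address
    )
    return header + payload

msg = build_icp_query(42, "http://origin.example.com/index.html")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("neighbor-cache.example.net", 3130))  # placeholder neighbor
```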
Hyper Text Caching Protocol (HTCP)
The Hyper Text Caching Protocol (HTCP) is a protocol for discovering HTTP caches and cached data, managing sets of HTTP caches, and monitoring cache activity. HTCP is compatible with HTTP 1.0, in contrast with ICP, which was designed for HTTP 0.9. HTCP also expands the domain of cache management to include monitoring a remote cache's additions and deletions, requesting immediate deletions, and sending hints about web objects, such as third-party locations of cacheable objects or the measured uncacheability or unavailability of web objects. HTCP messages may be sent over UDP or TCP. HTCP agents must not be isolated from network failures and delays; an HTCP agent should be prepared to act in useful ways in the absence of a response or in the case of lost or damaged responses.
CONTENT/SERVICE TYPES
CDN providers host third-party content for fast delivery of any digital content, including static content, dynamic content, streaming media (e.g. audio, real-time video), and different content services (e.g. directory services, e-commerce services, and file transfer services). The sources of content are large enterprises, web service providers, media companies, and news broadcasters. Variation in the content and services delivered requires a CDN to adopt application-specific characteristics, architectures, and technologies. For this reason, some CDNs are dedicated to delivering particular content and/or services. Here, we analyze the characteristics of the content/service types to reveal their nature.

Static content refers to content for which the frequency of change is low. It does not change depending on user requests. It includes static HTML pages, embedded images, executables, PDF documents, software patches, and audio and/or video files. All CDN providers support this type of content delivery. This type of content can be cached easily, and its freshness can be maintained using traditional caching technologies.

Dynamic content refers to content that is personalized for the user or created on demand by the execution of some application process. It changes frequently depending on user requests and includes animations, scripts, and DHTML. Due to its frequently changing nature, dynamic content is usually considered uncacheable.

Streaming media can be live or on-demand. Live media delivery is used for live events such as sports, concerts, channel, and/or news broadcasts. In this case, content is delivered instantly from the encoder to the media server, and then on to the media client. In the case of on-demand delivery, the content is encoded and then stored as streaming media files on the media servers. The content is available upon request from the media clients. On-demand media content can include audio and/or video on demand, movie files, and music clips. Streaming servers employ specialized protocols for delivery of content across the IP network.

A CDN can offer its network resources to be used as a service distribution channel, thus allowing value-added service providers to offer their applications as an Internet infrastructure service. When the edge servers host the software of value-added services for content delivery, they may behave like transcoding proxy servers, remote callout servers, or surrogate servers [64]. These servers also demonstrate the capability for processing and special hosting of value-added Internet infrastructure services. Services provided by CDNs can be directory, web storage, file transfer, and e-commerce services. Directory services are provided by the CDN for accessing database servers: user queries for certain data are directed to the database servers, and the results of frequent queries are cached at the edge servers of the CDN. Web storage services provided by the CDN are meant for storing content at the edge servers and are essentially based on the same techniques used for static content delivery. File transfer services facilitate the worldwide distribution of software, virus definitions, movies on demand, and highly detailed medical images. All this content is static by nature. Web services technologies are adopted by a CDN for its maintenance and delivery. E-commerce is highly popular for business transactions over the web. Shopping carts for e-commerce services can be stored and maintained at the edge servers of the CDN, and online transactions (e.g. third-party verification, credit card transactions) can be performed at the edge of the CDN. To facilitate this service, CDN edge servers should be enabled with dynamic content caching for e-commerce sites.
CACHE ORGANIZATION AND MANAGEMENT
Content management is essential for CDN performance, which mainly depends on the cache organization followed by the CDN. Cache organization includes the caching technique used and the frequency of cache updates to ensure the freshness, availability, and reliability of the content.
Caching Techniques
Four caching techniques are used in CDNs:
1. Query-based
2. Digest-based
3. Directory-based
4. Hashing-based
Query-based:
In the query-based scheme, when a cache miss occurs, the CDN server broadcasts a query to the other cooperating CDN servers. The problems with this scheme are the significant query traffic and the delay, because a CDN server has to wait for the last miss reply from all the cooperating surrogates before concluding that none of its peers has the requested content. Because of these drawbacks, the query-based scheme suffers from implementation overhead.
Digest-based:
In the digest-based scheme, each CDN server maintains a digest of the content held by the other cooperating surrogates. The cooperating surrogates are informed about any update of the content by the updating CDN server. By checking the content digest, a CDN server can decide to route a content request to a particular surrogate. The main drawback is the update traffic overhead, because frequent exchanges of updates are needed to make sure that the cooperating surrogates have correct information about each other.
Directory-based:
The directory-based scheme is a centralized version of the digest-based scheme: a centralized server keeps the content information of all the cooperating surrogates inside a cluster. Each CDN server notifies the directory server only when local updates occur, and queries the directory server whenever there is a local cache miss. The drawbacks of this scheme come from its centralized approach: if the directory server fails, overall cache management becomes impossible, and the single directory server receives update and query traffic from all cooperating surrogates.
Hashing-based:
In the hashing-based scheme, the cooperating CDN servers maintain the same hashing function. A designated CDN server holds a content item based on the content's URL, the IP addresses of the CDN servers, and the hashing function. All requests for that particular content are directed to that designated server. The hashing-based scheme is more efficient than the other schemes, since it has the smallest implementation overhead and the highest content-sharing efficiency.
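A minimal sketch of the hashing-based scheme, under assumed server names: every cooperating server applies the same hash to a content URL, so all of them independently agree on the single designated holder without exchanging messages. A production deployment would more likely use consistent hashing, so that adding or removing a server remaps only a fraction of the URLs.

```python
# Hashing-based cache cooperation sketch: every cooperating server applies
# the same hash function to the content URL, so all of them independently
# agree on the one designated server for that content -- no queries,
# digests, or directories needed.
import hashlib

SERVERS = ["surrogate-a", "surrogate-b", "surrogate-c"]  # shared, ordered list

def designated_server(url):
    h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]   # simple modulo; consistent hashing
                                       # would reduce remapping on changes

print(designated_server("http://origin.example.com/a.jpg"))
print(designated_server("http://origin.example.com/b.jpg"))
```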
Cache Updating Methods
To ensure the consistency and freshness of the content at the replicas, CDNs deploy different cache update techniques. The common methods used for updating caches are:
1. Periodic update
2. Update propagation
3. On-demand update
4. Invalidation
Periodic update:
In periodic update, the caches are updated in a regular fashion: at each interval, the origin server updates the caches of the surrogate servers. This approach suffers from significant levels of unnecessary traffic generated by the updates at each interval.
Update propagation:
Update propagation is triggered by a change in the content; it performs active content pushing to the cache servers. In this mechanism, an updated version of a document is delivered to all caches whenever a change is made to the document at the origin server. For frequently changing content, this approach generates excess update traffic.
On-demand update:
On-demand update is a cache update mechanism where the latest copy of a document is propagated to the surrogate cache server based on a prior request for that content. This approach follows an assume-nothing structure: content is not updated unless it is requested. The disadvantage of this approach is the back-and-forth traffic between the cache and the origin server needed to ensure that the delivered content is the latest.
Invalidation:
In this method, an invalidation message is sent to all surrogate caches when a document is changed at the origin server. The surrogate caches are blocked from serving the document while it is being changed, and each cache needs to fetch an updated version of the document individually later. The drawback of this approach is that delayed fetching of content by the caches may lead to inefficiency in managing consistency among the cached contents.
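The invalidation flow can be sketched in a few lines: the origin broadcasts an invalidation for a changed document, each surrogate drops its copy, and a later request triggers an individual re-fetch. The class and function names are illustrative.

```python
# Cache invalidation sketch: the origin broadcasts an invalidation when a
# document changes; each surrogate drops its copy and re-fetches the
# document individually on the next request. Names are illustrative.

class InvalidatingSurrogate:
    def __init__(self, fetch_from_origin):
        self.cache = {}
        self.fetch_from_origin = fetch_from_origin

    def invalidate(self, url):
        self.cache.pop(url, None)             # drop the stale copy

    def get(self, url):
        if url not in self.cache:             # invalidated or never cached
            self.cache[url] = self.fetch_from_origin(url)
        return self.cache[url]

surrogates = [InvalidatingSurrogate(lambda u: f"v2 of {u}") for _ in range(3)]
for s in surrogates:                          # origin-side broadcast on change
    s.invalidate("/news/index.html")
surrogates[0].get("/news/index.html")         # re-fetched only where requested
```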
PERFORMANCE MEASUREMENT
Performance measurement of a CDN is done to measure its ability to serve customers with the desired content and/or service. Typically, content providers use five key metrics to evaluate the performance of a CDN:

Cache hit ratio: It is defined as the ratio of the number of documents served from the cache to the total number of documents requested. A high hit rate reflects that the CDN is using an effective caching technique to manage its caches.

Reserved bandwidth: It is a measure of the bandwidth used by the origin server. It is measured in bytes retrieved from the origin server.

Latency: It refers to the user-perceived response time. Reduced latency indicates that less bandwidth is reserved at the origin server.

Surrogate server utilization: It refers to the fraction of time during which the surrogate servers remain busy. This metric is used by administrators to calculate CPU load, the number of requests served, and storage I/O usage.

Reliability: Packet-loss measurements are used to determine the reliability of a CDN. High reliability indicates that the CDN incurs little packet loss and is always available to its clients.

Performance measurement can be accomplished based on internal measures as well as from the customer perspective. A CDN provider's own performance testing can be misleading, since it may perform well for a particular web site and/or content but poorly for others. To ensure reliable performance measurement, a CDN's performance should be measured by an independent third party.
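As a small illustration of the first two metrics, the sketch below computes the cache hit ratio and mean latency from a hypothetical surrogate request log; the log fields and values are assumptions.

```python
# Metric computation sketch over a hypothetical surrogate request log:
# each entry records whether the request was a cache hit and its
# user-perceived latency in milliseconds.

requests = [
    {"hit": True,  "latency_ms": 12},
    {"hit": True,  "latency_ms": 15},
    {"hit": False, "latency_ms": 180},   # miss: fetched from the origin
    {"hit": True,  "latency_ms": 11},
]

hit_ratio = sum(r["hit"] for r in requests) / len(requests)
mean_latency = sum(r["latency_ms"] for r in requests) / len(requests)
print(f"cache hit ratio: {hit_ratio:.2f}")        # 0.75
print(f"mean latency:    {mean_latency:.1f} ms")  # 54.5 ms
```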
The performance measurement taxonomy, comprising internal measurement and external measurement, is described below.
Internal measurement
CDN servers can be equipped with the ability to collect statistics in order to obtain an end-to-end measurement of their performance. In addition, probes can be deployed throughout the network, and the information collected by the probes can be correlated with the cache and server logs to measure end-to-end performance.
External measurement
In addition to internal performance measurement, external measurement of performance by an independent third party informs CDN customers about verified and guaranteed performance. This process is efficient, since independent performance-measuring companies maintain benchmarking networks of strategically located measurement computers connected through major Internet backbones in several cities. These computers measure how a particular web site performs from the end user's perspective, considering service performance metrics in critical areas.
Network statistics acquisition for performance measurement
For internal or external performance measurement, different network statistics acquisition techniques are deployed based on several parameters. Such techniques may involve network probing, traffic monitoring, and feedback from surrogates. Typical parameters in the network statistics acquisition process include geographical proximity, network proximity, latency, server load, and server performance as a whole.

Network probing is a measurement technique where the possible requesting entities are probed in order to determine one or more metrics from each surrogate or set of surrogates. Network probing can be used for P2P-based cooperative CDNs, where the surrogate servers are not controlled by a single CDN provider. Such probing techniques are sometimes unsuitable and limited for several reasons. Probing introduces additional network latency, which may be significant for small web requests. Moreover, performing several probes on an entity often triggers intrusion-detection alerts, resulting in abuse complaints. Probing may also yield inaccurate metrics, as ICMP traffic can be dropped or reprioritized due to concerns about Distributed Denial of Service attacks.

Traffic monitoring is a measurement technique where the traffic between the client and the surrogate is monitored to determine the actual performance metrics. Once the client connects, the actual performance of the transfer is measured, and this data is then fed back into the request-routing system. An example of such monitoring is to watch the packet loss from a client to a surrogate, or the user-perceived response time (latency), by observing TCP behaviour. Latency is the simplest and most widely used distance metric; it can be estimated by monitoring the packets that travel along the route between the client and the surrogate.

Performance measurement through simulation
Other than using internal and external performance measurement, researchers use simulation tools to measure a CDN's performance. Some researchers also experiment with their CDN policies on real platforms such as PlanetLab. CDN simulators implemented in software are valuable tools for researchers to develop, test, and diagnose a CDN's performance, since accessing real CDN traces and logs is not easy due to the proprietary nature of commercial CDNs. Such a simulation process is economical because no dedicated hardware is involved in carrying out the experiments. Moreover, it is flexible, because it is possible to simulate a link with any bandwidth and propagation delay, and a router with any queue size and queue management technique. A simulated network environment is free of uncontrollable factors, such as unwanted external traffic, which researchers may experience while running experiments in a real network. Hence, simulation results are reproducible and easy to analyze. A wide range of network simulators is available for simulating a CDN to measure its performance, and there are also some specific CDN simulation systems that give the research community and CDN developers a (closely) realistic way to measure performance and experiment with their policies. However, the results obtained from a simulation may be misleading if the CDN simulation system does not take into account several critical factors, such as the bottlenecks likely to occur in a network and the number of traversed nodes, considering the TCP/IP network infrastructure.
Mapping of the Taxonomy to Representative CDNs
In this section, we provide the categorization and mapping of our taxonomy to a few representative CDNs. We also present the perceived insights and a critical evaluation of the existing systems while classifying them. Our analysis of the CDNs based on the taxonomy also examines the validity and applicability of the taxonomy.
REQUEST ROUTING IN CDN
A request-routing system is responsible for routing client requests to an appropriate surrogate server for the delivery of content. It consists of a collection of network elements that support request routing for a single CDN. It directs client requests to the replica server closest to the client; however, the closest server may not be the best surrogate server for servicing the client's request. Hence, a request-routing system uses a set of metrics, such as network proximity, client-perceived latency, distance, and replica server load, in an attempt to direct users to the closest surrogate that can best serve the request. The content selection and delivery technique (i.e. full-site or partial-site) used by a CDN has a direct impact on the design of its request-routing system. If the full-site approach is used, the request-routing system directs the client requests to the surrogate servers, as they hold all of the outsourced content. If the partial-site approach is used, the request-routing system is designed so that, on receiving the client request, the origin server delivers the base content while the surrogate servers deliver the embedded objects. The request-routing system in a CDN has two parts: the deployment of a request-routing algorithm and the use of a request-routing mechanism. A request-routing algorithm is invoked on receiving a client request; it specifies how to select an edge server in response to that request. A request-routing mechanism, on the other hand, is a way to inform the client about the selection: it first invokes the request-routing algorithm and then informs the client of the selection result.
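A request-routing algorithm of the kind described here can be sketched as a weighted score over the metrics named above (latency, load, and hop-count proximity). The candidate surrogates, metric values, and weights are all assumptions for illustration.

```python
# Request-routing algorithm sketch: rank candidate surrogates by a
# weighted combination of the metrics named above (lower is better).
# Surrogate data and metric weights are illustrative assumptions.

SURROGATES = [
    {"name": "edge-eu", "rtt_ms": 20,  "load": 0.80, "hops": 6},
    {"name": "edge-us", "rtt_ms": 90,  "load": 0.20, "hops": 11},
    {"name": "edge-as", "rtt_ms": 140, "load": 0.10, "hops": 14},
]
WEIGHTS = {"rtt_ms": 1.0, "load": 50.0, "hops": 2.0}   # tuning assumption

def select_surrogate(surrogates):
    def score(s):
        return sum(WEIGHTS[m] * s[m] for m in WEIGHTS)
    return min(surrogates, key=score)   # best (lowest) combined score

print(select_surrogate(SURROGATES)["name"])   # 'edge-eu' under these weights
```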
CONCLUSION
Content Delivery Networks are still at an early stage of development, and their future evolution remains an open issue. It is essential to understand the existing practices in the CDN framework in order to propose or predict the evolutionary steps. The challenge is to strike a delicate balance between costs and customer satisfaction. In this framework, caching-related practices, content personalization processes, and data mining techniques seem to offer an effective roadmap for the further evolution of CDNs.


REFERENCES
1. Pallis, G. and Vakali, A. Insight and Perspective for Content Delivery Networks. Communications of the ACM 49, 1 (January 2006), 101-106.
2. Buyya, R., Pathan, M. and Vakali, A. Content Delivery Networks.
3. Held, G. A Practical Guide to Content Delivery Networks.
4. http://en.wikipedia.org/wiki/Content_delivery_network
5. http://cs.mu.oz.au/apathan/CDNs.htm
#2

otlix.com
Otlix - Content Delivery Network - CDN
Otlix™ has been dedicated to optimizing the use of media since 2006, providing Content Delivery Network (CDN) services and promotion and advertising solutions for traditional media companies, advertising networks, movie studios, and social media.
Our web-based service, OtRider™, is a Rich Internet Application (RIA) that provides a single, integrated front-end control console for managing all of the CDN services. OtRider™ includes, inter alia, critical functions that simplify tracking processes, enable cross-channel analytics and statistics and essential reporting, and streamline integration with technology components, whilst ensuring data security and compliance.
OtRider™ is the simplest way to give you the ability to hold the reins.
In 2009, Otlix™ led digital campaigns for brands in a variety of formats, including ad content, Internet banners with rich media, and streamed video.
Otlix™ core systems serve all major markets worldwide; this allows customers to deploy global operations with a guaranteed service level and integrated metrics.
Our loyal customer base is our key to success. We are ready to listen to our customers and will do anything to satisfy the needs of each customer, regardless of the size or type of their business.
At Otlix™, we welcome and encourage your feedback, and we thrive on your accomplishments.


http://otlix.com

--------------------------------------------------------------------------------------------

Our Technology
CDN-Overview

As we all know, fast-growing companies and organizations are nowadays moving to the web in order to reach their customers and provide them with fast digital media content.
Major companies are streaming live or on-demand video and making large files available to view, browse, and download for local and global audiences.
We are the address for supporting your digital media strategy needs.

Content Delivery Network Services
A new, powerful service, Essential Delivery, provides a fast, accurate, and stable content engine that makes dreams come true. See it to believe it.
Our CDN was designed and built to meet and transform all of your delivery needs, especially for the largest media and entertainment companies.
A CDN helps users to resolve distributed storage problems, and assists with load balancing, redirection of network requests, and content management all at once.
It is designed to deliver website content at the "edge" closest to the users' environment, enabling them to acquire their required content close at hand, relieving Internet network congestion, and improving response times for website visits by adding a new layer of global load-balancing architecture to the Internet.
Technically, it solves the fundamental problem of slow responses to users' requests caused by insufficient bandwidth, massive access volumes, and unevenly distributed network nodes.
http://otlix.com/our_technology.php?page=cdn
---------------------------------------------------------------------------------------------------
OtRider™
OtRider™ is a Rich Internet Application (RIA) that provides a single, integrated front-end control console for managing all of the CDN services. It includes critical functions that simplify tracking processes, enable cross-channel statistics and analytics, optimize loading and essential reporting, and streamline integration with technology components, while ensuring that data stays secure, stable, and compliant.
OtRider™ provides the right tools, streamlines the process, and enhances your view. OtRider™ gives you a clear and customizable road map: an intuitive guide to getting your company ready to produce powerful analytics.
OtRider™ was initially built mainly for large agencies with major databases. Along the way, we have learnt how to optimize and improve DNS processes and cut them from six or seven clicks down to one or two.
We added new startup innovations that will literally shave hours out of your workweek.
Start thinking what to do with all the extra time you're saving.
http://otlix.com/our_technology.php?page=otRider
---------------------------------------------------------------------------------------------------
Global Load Balancing Architecture
Otlix™ Global Load Balancing Architecture mainly uses routing algorithms to route users directly to the POP that will serve them the fastest.
The routing algorithms take many factors into account, including the number of network hops to the end user, the geographic location of the user in the world, Internet congestion along the route to the end user, and how many different networks a user's traffic would travel through to access the content.
Otlix™ CDN uses multi-level DNS to triangulate the position of a user and determine the best serving location, so no matter where that user's DNS server is located, the user is located and served accordingly.
Our GLBA helps both you and us tap into the public and private inter-network peering opportunities available to us at these exchange points. Our clients are able to take full advantage of the Otlix™ CDN architecture, with complete global coverage and local POPs.
Presence at key exchange points around the world ensures that their services are optimized to be the most cost-effective.
http://otlix.com/our_technology.php?page=global_load_balancing_architecture
---------------------------------------------------------------------------------------------------
Otlix Implement Acceleration
As we all know, even the best-designed websites will lose existing customers, along with potential ones, if the surfing experience is not fast, compelling, and satisfying.
Even if your edge servers are optimized and your website is designed well, inherent Internet latency and congestion can cause delays and other performance problems.
These problems are magnified for non-cacheable content, due to a greater dependency on the edge servers and the distance between an end user and the edge destination server.
Otlix™ Implement Acceleration solves these problems by optimizing the middle mile, enabling a stable, consistent end-user experience.
OtRider™ provides proven technology that reduces the number of data round trips necessary to complete a stable web request, accelerating performance and improving the end-user experience.

Our Customer Satisfaction
With extremely fast page loading times using the Otlix™ engine and dramatically improved end-user operation, you will enjoy a more satisfying and comfortable online working environment.

Otlix Control
In order to reduce lag, Otlix™ uses a minimal infrastructure footprint and dynamic traffic routing capabilities that enable maximum reach and efficiency without compromising quality or reducing performance.

Measurement Improving
We can improve web page response speeds, plus ensure reliability and security through distributed processing technology.
Even at times of unexpected traffic surges or temporary network interruptions, the Otlix™ engine will deliver the content anywhere in the world.

Rapid Web Page Loading
The Otlix™ creative engine enhances existing web page loading. Its speed is accomplished through accelerated processes that split delivery into separate streams for video streaming, Internet banners, ad content, and graphic images.

Secure and Failure Solving
Processing and loading high levels of traffic using the Otlix™ global CDN infrastructure delivers more stable, uninterrupted processing.
Otlix™ automatically provides a complete, fast detour route for service when there is a problem with a particular node.


OtRider™ is a fundamental smart tool to have.
http://otlix.com/our_technology.php?page=otlix_implement_acceleration
---------------------------------------------------------------------------------------------------
Essential Delivery Services
We are facing the age of Web 2.0 hosting services: with the growth of rich media, the high-capacity transfer of content is becoming hard to maintain; nevertheless, optimum service quality is a critical competitive factor.
Otlix™ delivers extremely fast, stable traffic-balancing technology and smooth file synchronization, including Essential Delivery Services, as well as extensive service know-how, to help businesses effectively and efficiently manage and deliver content. It is more than just simple content delivery.
One of the most critical issues is that content downloading can prevent a large number of users from downloading a variety of digital content at the same time.
Otlix™ solves this and ensures ultra-fast downloads of large files, even during traffic peaks and surges, in a stable and fast way.

Delivering Large Files
Every professional weight lifter should be able to endure heavy lifting. When it comes to rich media and high-volume data trafficking, we here at Otlix™ have been through all the lifting workouts for you. Now all that is left for us to do is to impress you.

• Orientated to the delivery of large files - over 4 GB
• Authentication and Digital Rights Management (DRM) for content security
• Download service using several protocols (HTTP/FTP, etc.)
• Optimized quality of service - sessions are automatically added to guarantee a complete and smooth flow at your rate of speed

Content Stability With SSL
Entering an SSL-secured site ensures stability and ease of mind through the use of SSL authentication certificates.


SSL caching provides service-level processing in which the encryption operations of SSL authentication between the user and the server are handled by a separate caching service.
This combines the primary security of the SSL caching server with a secondary benefit: certified content with improved performance.
With SSL caching, maintenance costs fall: the need for high-priced equipment and specialized professional staff is reduced, and flexible SSL scaling options help cope with traffic increases.
Your peace of mind is a must.
http://otlix.com/our_technology.php?page=essential_delivery_services
--------------------------------------------------------------------------------------------------
Storage
Our storage infrastructure is a high-performance, high-capacity, high-availability storage platform for all of your content, accessible via FTP and a web interface, and designed to be redundant and scalable. Additionally, Otlix™ can deploy custom storage solutions at all of our POPs throughout the globe.
Otlix™ will work with you to find the most efficient and cost-effective storage strategy. A number of factors go into deciding how best to store content; the intersection of price and availability will determine the right decision for you.

Otlix Edge
For the most highly demanded content, the best, fastest, and most efficient delivery option is Otlix™ Edge Storage.
We have architected our CDN to allow nearly instant propagation of your files to each of our POPs worldwide. Each file is automatically stored and ready to be delivered, viewed, and downloaded. If demand lessens, the file can be moved to origin storage, where it remains available.

Otlix Source
You can maintain the readiness of your files, while saving on your monthly bill. You can also set parameters around each file to ensure that it lives on the edge when in high demand, and moves back to the origin during slower times.

http://otlix.com/our_technology.php?page=storage
----------------------------------------------------------------------------------
Products & Services
Measurement, Placing and Reporting
Imagine a state in which you know where your investment bears the most fruit: which of your commercials takes your business in the best direction, and the ability to know which direction that is.
We believe that since it is your resources, such as content and bandwidth, that are on the line, you should have all the monitoring, tracking, and analyzing tools your business needs.
We here at Otlix™ provide those tools and features in order to make your life easier. Our tools provide complete reporting flexibility, with access to new reports how and when you may need them. Reporting and analyzing should enable you to lower costs, analyze trends, predict usage patterns, and help you improve the performance of your content delivery.
This is the time for you to know which geese are laying the golden eggs.

Analytics and Statistics
We would like to take this opportunity to fill you in on the diverse services and software solutions we offer. Our statistical software service, led by our innovative OtRider™ Rich Internet Application (RIA), provides customers with daily statistical reports via a personal, password-protected account.

Here is a glance at our technical abilities:
• A smart system feature that calculates the number of clicks and views web-wide
• Data mining and display over desired time windows, from a 30-minute window to a yearly report
• Data of your best periods can be gathered and calculated with the possibility to choose specific days, weeks or months for the report
• The system results are displayed by a graph which will present the data in different ways
• Campaign sorting by varied criteria
• Elaborated data with regards to any specific given file
• A full disclosure on the best websites and/or "geographical" location for your content


Rotation and Randomness
You are given the chance to construct a farmhouse or a skyscraper at the most sought-after location, for the same amount of money. What would you choose?
Otlix™ gives you the ability to get the most out of the advertising space you possess. This ability is powered by our OtRider™ Rich Internet Application (RIA). The procedure, or the "Smart System" feature that takes place, is one in which your most popular files are selected and weaved together into a single link.
This single page link connects those files and represents them randomly.
Instead of acquiring several advertisement spaces, the page link which displays the content randomly is adjacent to only one advertisement area.
In doing so, maximum space usage and exposure are guaranteed, as well as costs efficiency.

Real Time Reporting
OtRider™ Rich Internet Application (RIA) will let you see the full picture in one place, available in the online reporting interface; reports can be re-run as needed.

• Gone are the days of manually combining data
• See the full integrated picture in your favorite analytics system
• Transfer data from OtRider™ Rich Internet Application (RIA) database
• All in a secured way which is being monitored 24/7
• Monitoring real time traffic/user connection statistics and disk utilization
http://otlix.com/services.php?page=measurement_placing_and_reporting
------------------------------------------------------------------------------------------------
Content Management
The ability to quickly and effectively manage your content is the key to a successful CDN solution.
OtRider™ Rich Internet Application (RIA) helps you to possess the power of absolute control over all content material in real time.
It is a full-featured, browser-based content management solution.
OtRider™ Rich Internet Application (RIA) follows our customer-focused usability principles by presenting a PC-style browsing environment. The folder arrangement system familiar from standard computers is fully integrated into the software.
A delivery system is available for HTTP (progressive download) content, which lives in folders, and creating a folder is easy and instantaneous. Once a new folder is created on the CDN, it is immediately ready to store content. OtRider™ Rich Internet Application (RIA) supports standard file system behavior, including renaming, file/folder deletion, and drag-and-drop operations.

Management Features
From today on, you will find that your closet is much bigger when the clothes are folded. You will stop looking for your car keys.
Arrangement and organization are key factors in success, and together, we will move forward.
• Upload, track, organize, move and delete content
• View content uploading and propagation in real time
• Safeguard content with customized access protection

A key performance differentiator of the OtRider™ Rich Internet Application (RIA) CDN is our content propagation rate. Once you have uploaded content to one of our ingest locations, file delivery to the other POPs is accelerated as it propagates over our backbone, making it available to your customers almost instantly.


http://otlix.com/services.php?page=content_management

------------------------------------------------------------------------------------------------
Newsletters Services
Circulating mail messages is a basic web selling practice. By identifying and categorizing potential customers, efficient mailing lists can be drafted in order to reach maximum exposure where it is most effective. Identifying and monitoring the circulation lists can easily be done using the OtRider™ Rich Internet Application (RIA) Newsletter Service.

Features include:
• A fixed message template can be sent to everyone who has been added to the service.
• A client can create a registration page through which Internet surfers can join the service.
• The system creates a meticulous report of newly joined and existing subscribers.
• A monthly pre-designed e-mail is sent to a specific mailing list.

http://otlix.com/services.php?page=newsletter_services