detecting malicious packet loss
#1

I need more details about this paper, regarding the RED algorithm and how it is used to detect malicious packet loss.
Reply
#2
Detecting Malicious Packet Losses
In this article, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. We consider the attack in which a router selectively drops packets destined for some victim. Modern networks routinely drop packets when the load temporarily exceeds a router's buffering capacity. Previous methods depend on the assumption that too many dropped packets imply malicious intent. This post describes a compromised router detection protocol that dynamically infers,
based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur.

INFERRING CONGESTIVE LOSS:
There are three approaches for addressing the issue of whether the absence of a given packet should be seen as malicious or benign:
-Static Threshold. Low rates of packet loss are assumed to be congestive, while rates above some predefined
threshold are deemed malicious.
-Traffic modeling. Packet loss rates are predicted as a function of traffic parameters, losses beyond the prediction
are deemed malicious.
-Traffic measurement. Individual packet losses are predicted as a function of measured traffic load and router
buffer capacity. Deviations from these predictions are deemed malicious.
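The traffic measurement approach can be illustrated with a toy drop-tail queue simulation (my own sketch, not the paper's actual estimator): given a measured arrival pattern, a forwarding rate, and a buffer size, it predicts how many drops congestion alone would cause; observed losses beyond that prediction would be treated as suspicious.

```java
// Hypothetical sketch of the "traffic measurement" approach: predict how many
// packets a drop-tail FIFO with a given buffer would lose under a measured
// arrival pattern. Observed losses beyond this prediction would be suspect.
public class CongestivePrediction {
    // arrivals[t] = packets arriving in time slot t; serviceRate = packets the
    // router can forward per slot; bufferSize = queue capacity in packets.
    public static int predictedDrops(int[] arrivals, int serviceRate, int bufferSize) {
        int queue = 0, drops = 0;
        for (int a : arrivals) {
            queue += a;                               // enqueue this slot's arrivals
            if (queue > bufferSize) {                 // buffer overflow -> congestive drops
                drops += queue - bufferSize;
                queue = bufferSize;
            }
            queue = Math.max(0, queue - serviceRate); // forward up to serviceRate packets
        }
        return drops;
    }

    public static void main(String[] args) {
        int[] arrivals = {5, 9, 2, 0, 7};
        // Predicted congestive losses for this load, rate 4, buffer 6:
        System.out.println(predictedDrops(arrivals, 4, 6));
    }
}
```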

SYSTEM MODEL
Network Model:
We consider a network to consist of individual homogeneous routers interconnected via directional point-to-point links. We assume that packets are forwarded in a hop-by-hop fashion, based on a local forwarding table within a network. This overall model is consistent with the typical construction of large enterprise IP networks or the internal structure of single ISP backbone networks.

Threat Model:
We focus here on data plane attacks. A router can be traffic faulty by maliciously dropping packets and protocol faulty by not following the rules of the detection protocol. Attackers can compromise one or more routers in a network.

For more details, refer to this PDF:
Reply
#3
I want UML diagrams for Detecting Malicious Packet Losses.
Reply
#4
Hi everybody...

Can anyone send IEEE 2009 projects on "Networking"? Please...

Nayeem
Reply
#5
[attachment=8828]
Detecting Malicious Packet Losses
Abstract

In this paper, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action because normal network congestion can produce the same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.
INTRODUCTION
THE Internet is not a safe place. Unsecured hosts can expect to be compromised within minutes of connecting to the Internet and even well-protected hosts may be crippled with denial-of-service (DoS) attacks. However, while such threats to host systems are widely understood, it is less well appreciated that the network infrastructure itself is subject to constant attack as well. Indeed, through combinations of social engineering and weak passwords, attackers have seized control over thousands of Internet routers [1], [2]. Even more troubling is Mike Lynn's controversial presentation at the 2005 Black Hat Briefings, which demonstrated how Cisco routers can be compromised via simple software vulnerabilities. Once a router has been compromised in such a fashion, an attacker may interpose on the traffic stream and manipulate it maliciously to attack others—selectively dropping, modifying, or rerouting packets.
Several researchers have developed distributed protocols to detect such traffic manipulations, typically by validating that traffic transmitted by one router is received unmodified by another [3], [4]. However, all of these schemes—including our own—struggle in interpreting the absence of traffic. While a packet that has been modified in transit represents clear evidence of tampering, a missing packet is inherently ambiguous: it may have been explicitly blocked by a compromised router or it may have been dropped benignly due to network congestion. In fact, modern routers routinely drop packets due to bursts in traffic that exceed their buffering capacities, and the widely used Transmission Control Protocol (TCP) is designed to cause such losses as part of its normal congestion control behavior.
Thus, existing traffic validation systems must inevitably produce false positives for benign events and/or produce false negatives by failing to report real malicious packet dropping. In this paper, we develop a compromised router detection protocol that dynamically infers the precise number of congestive packet losses that will occur. Once the congestion ambiguity is removed, subsequent packet losses can be safely attributed to malicious actions. We believe our protocol is the first to automatically predict congestion in a systematic manner and that it is necessary for making any such network fault detection practical. In the remainder of this paper, we briefly survey the related background material, evaluate options for inferring congestion, and then present the assumptions, specification, and a formal description of a protocol that achieves these goals. We have evaluated our protocol in a small experimental network and demonstrate that it is capable of accurately resolving extremely small and fine-grained attacks.
2 BACKGROUND
There are inherently two threats posed by a compromised router. The attacker may subvert the network control plane (e.g., by manipulating the routing protocol into false route updates) or may subvert the network data plane and forward individual packets incorrectly. The first set of attacks have seen the widest interest and the most activity—largely due to their catastrophic potential. By violating the routing protocol itself, an attacker may cause large portions of the network to become inoperable. Thus, there have been a variety of efforts to impart authenticity and consistency guarantees on route update messages with varying levels of cost and protection [5], [6], [7], [8], [9], [10]. We do not consider this class of attacks in this paper. Instead, we have focused on the less well-appreciated threat of an attacker subverting the packet forwarding process on a compromised router. Such an attack presents a wide set of opportunities including DoS, surveillance, man-in-the-middle attacks, replay and insertion attacks, and so on. Moreover, most of these attacks can be trivially implemented via the existing command shell languages in commodity routers.
The earliest work on fault-tolerant forwarding is due to Perlman [11], who developed a robust routing system based on source routing, digitally signed route-setup packets, and reserved buffers. While groundbreaking, Perlman's work required significant commitments of router resources and high levels of network participation to detect anomalies. Since then, a variety of researchers have proposed lighter weight protocols for actively probing the network to test whether packets are forwarded in a manner consistent with the advertised global topology [5], [12], [13].
Conversely, the 1997 WATCHERS system detects disruptive routers passively via a distributed monitoring algorithm that detects deviations from a "conservation of flow" invariant [14], [3]. However, work on WATCHERS was abandoned, in part due to limitations in its distributed detection protocol, its overhead, and the problem of ambiguity stemming from congestion [15]. Finally, our own work broke the problem into three pieces: a traffic validation mechanism, a distributed detection protocol, and a rerouting countermeasure. In [16] and [4], we focused on the detection protocol, provided a formal framework for evaluating the accuracy and precision of any such protocol, and described several practical protocols that allow scalable implementations. However, we also assumed that the problem of congestion ambiguity could be solved, without providing a solution. This paper presents a protocol that removes this assumption.
3 INFERRING CONGESTIVE LOSS
In building a traffic validation protocol, it is necessary to explicitly resolve the ambiguity around packet losses. Should the absence of a given packet be seen as malicious or benign? In practice, there are three approaches for addressing this issue:
Static threshold.
Low rates of packet loss are assumed to be congestive, while rates above some predefined threshold are deemed malicious.
Traffic modeling.
Packet loss rates are predicted as a function of traffic parameters and losses beyond the prediction are deemed malicious.
Traffic measurement.
Individual packet losses are predicted as a function of measured traffic load and router buffer capacity. Deviations from these predictions are deemed malicious.
Most traffic validation protocols, including WATCHERS [3], Secure Traceroute [12], and our own work described in [4], analyze aggregate traffic over some period of time in order to amortize monitoring overhead over many packets. For example, one validation protocol described in [4] maintains packet counters in each router to detect if traffic flow is not conserved from source to destination. When a packet arrives at router r and is forwarded to a destination that will traverse a path segment ending at router x, r increments an outbound counter associated with router x. Conversely, when a packet arrives at router r, via a path segment beginning with router x, it increments its inbound counter associated with router x. Periodically, router x sends a copy of its outbound counters to the associated routers for validation. Then, a given router r can compare the number of packets that x claims to have sent to r with the number of packets it counts as being received from x, and it can detect the number of packet losses.
Thus, over some time window, a router simply knows that out of m packets sent, n were successfully received. To address congestion ambiguity, all of these systems employ a predefined threshold: if more than this number is dropped in a time interval, then one assumes that some router is compromised. However, this heuristic is fundamentally flawed: how does one choose the threshold?
In order to avoid false positives, the threshold must be large enough to include the maximum number of possible congestive legitimate packet losses over a measurement interval. Thus, any compromised router can drop that many packets without being detected. Unfortunately, given the nature of the dominant TCP, even small numbers of losses can have significant impacts.
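The counter scheme described above can be sketched roughly as follows (class and method names are my own illustration, not the paper's protocol specification): each router keeps per-segment outbound and inbound counts, and comparing a neighbor's claimed outbound count with the local inbound count bounds the losses on that segment.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of counter-based flow-conservation validation.
// Each router tracks packets sent toward / received from each path segment
// endpoint; the difference between a claimed send count and the local
// receive count is the number of losses observed on that segment.
public class FlowConservation {
    private final Map<String, Integer> outbound = new HashMap<>();
    private final Map<String, Integer> inbound = new HashMap<>();

    public void sentToward(String segmentEnd) {      // forwarded a packet whose segment ends at x
        outbound.merge(segmentEnd, 1, Integer::sum);
    }

    public void receivedFrom(String segmentStart) {  // received a packet whose segment began at x
        inbound.merge(segmentStart, 1, Integer::sum);
    }

    public int sentCount(String segmentEnd) {        // reported periodically for validation
        return outbound.getOrDefault(segmentEnd, 0);
    }

    // Compare x's claimed outbound count against our inbound count for x.
    public int observedLosses(String x, int claimedSentByX) {
        return claimedSentByX - inbound.getOrDefault(x, 0);
    }

    public static void main(String[] args) {
        FlowConservation r = new FlowConservation();
        r.receivedFrom("x");
        r.receivedFrom("x");
        // x claims 5 packets sent; r saw 2, so 3 losses on the segment:
        System.out.println(r.observedLosses("x", 5));
    }
}
```

In the paper's setting, this observed loss count is then checked against the number of congestive losses the protocol predicts, rather than against a static threshold.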
Subtle attackers can selectively target the traffic flows of a single victim and within these flows only drop those packets that cause the most harm. For example, losing a TCP SYN packet used in connection establishment has a disproportionate impact on a host because the retransmission time-out must necessarily be very long (typically 3 seconds or more). Other seemingly minor attacks that cause TCP time-outs can have similar effects—a class of attacks well described in [17]. All things considered, it is clear that the static threshold mechanism is inadequate since it allows an attacker to mount vigorous attacks without being detected.
Instead of using a static threshold, if the probability of congestive losses can be modeled, then one could resolve ambiguities by comparing measured loss rates to the rates predicted by the model. One approach for doing this is to predict congestion analytically as a function of individual traffic flow parameters, since TCP explicitly responds to congestion. Indeed, the behavior of TCP has been exhaustively studied
Reply
#6
[attachment=9970]
Detecting Malicious Packet Losses
Abstract

In this paper, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action because normal network congestion can produce the same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions.
Existing System
1. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent.
2. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks.
Limitation of Existing System
1. Thus, existing traffic validation systems must inevitably produce false positives for benign events and/or produce false negatives by failing to report real malicious packet dropping.
2. Previous work has approached this issue using a static user-defined threshold, which is fundamentally limiting.
Proposed System
1. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur.
2. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions.
Advantage
Finally, our own work broke the problem into three pieces:
1. A traffic validation mechanism
2. A distributed detection protocol
3. And a rerouting countermeasure.
Hardware Requirements:
PROCESSOR : PENTIUM IV 2.6 GHz
RAM : 512 MB
MONITOR : 15”
HARD DISK : 20 GB
CDDRIVE : 52X
KEYBOARD : STANDARD 102 KEYS
MOUSE : 3 BUTTONS
Software Requirements:

FRONT END : JAVA, SWING
TOOLS USED : JFRAME BUILDER
OPERATING SYSTEM: WINDOWS XP
Reply
#7
[attachment=10010]
Detecting Malicious Packet Losses
Abstract:

We consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action because normal network congestion can produce the same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks.
Algorithm / Technique used:
RED Algorithm.

Algorithm Description:
RED monitors the average queue size avg, based on an exponentially weighted moving average: avg ← (1 − w) · avg + w · q, where q is the actual queue size and w is the weight of a low-pass filter. RED uses three more parameters: a minimum threshold min_th, a maximum threshold max_th, and a maximum dropping probability max_p. Using these, RED dynamically computes a dropping probability in two steps for each packet it receives. First, it computes an interim probability p_b = max_p · (avg − min_th) / (max_th − min_th). Further, the RED algorithm tracks count, the number of packets since the last dropped packet. The final dropping probability, p = p_b / (1 − count · p_b), is specified to increase slowly as count increases.
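As a rough sketch of RED's two-step probability computation (parameter names are mine; real implementations add refinements such as idle-time handling of the average), the per-packet drop decision might look like:

```java
// Sketch of RED's dropping-probability computation. Parameter and field
// names are illustrative; this is not a full queue implementation.
public class Red {
    private final double w, minTh, maxTh, maxP;
    private double avg = 0.0;   // exponentially weighted moving average of queue size
    private int count = 0;      // packets since the last dropped packet

    public Red(double w, double minTh, double maxTh, double maxP) {
        this.w = w; this.minTh = minTh; this.maxTh = maxTh; this.maxP = maxP;
    }

    // Decide whether to drop an arriving packet, given instantaneous queue size q.
    public boolean onPacket(double q) {
        avg = (1 - w) * avg + w * q;                  // update the moving average
        if (avg < minTh) { count++; return false; }   // below min threshold: never drop
        if (avg >= maxTh) { count = 0; return true; } // above max threshold: always drop
        double pb = maxP * (avg - minTh) / (maxTh - minTh);  // interim probability
        if (count * pb >= 1) { count = 0; return true; }     // formula saturates: force drop
        double p = pb / (1 - count * pb);             // grows slowly as count increases
        if (Math.random() < p) { count = 0; return true; }
        count++;
        return false;
    }

    public static void main(String[] args) {
        Red red = new Red(0.002, 5, 15, 0.1);
        System.out.println(red.onPacket(10)); // average is still well below minTh
    }
}
```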
Existing System:
Network routers occupy a unique role in modern distributed systems. They are responsible for cooperatively shuttling packets amongst themselves in order to provide the illusion of a network with universal point-to-point connectivity. However, this illusion is shattered - as are implicit assumptions of availability, confidentiality, or integrity - when network routers are subverted to act in a malicious fashion. By manipulating, diverting, or dropping packets arriving at a compromised router, an attacker can trivially mount denial-of-service, surveillance, or man-in-the-middle attacks on end host systems. Consequently, Internet routers have become a choice target for would-be attackers and thousands have been subverted to these ends. In this paper, we specify this problem of detecting routers with incorrect packet forwarding behavior and we explore the design space of protocols that implement such a detector. We further present a concrete protocol that is likely inexpensive enough for practical implementation at scale. Finally, we present a prototype system, called Fatih, that implements this approach on a PC router and describe our experiences with it. We show that Fatih is able to detect and isolate a range of malicious router actions with acceptable overhead and complexity. We believe our work is an important step in being able to tolerate attacks on key network infrastructure components
Proposed System:
We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur.
Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.
Modules:
1. Network Module
2. Threat Model
3. Traffic Validation
4. Random Early Detection(RED)
5. Distributed Detection
Module Description:
1. Network Module
Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client does not share any of its resources; clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.
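A minimal, generic illustration of this client-server pattern in Java (my own sketch, not code from the project): the server listens on a port and echoes a request, and the client initiates the session.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Generic client-server illustration: a server thread awaits one request
// and echoes it back; the client connects, sends a line, and reads the reply.
public class EchoDemo {
    public static String roundTrip(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0);   // port 0 = any free port
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("echo: " + in.readLine()); // server replies to the request
            } catch (IOException ignored) {}
        });
        t.start();

        String reply;
        try (Socket client = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            out.println(msg);                        // client initiates the session
            reply = in.readLine();
        }
        t.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```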
2. Threat Model
Our threat model focuses solely on data plane attacks (control plane attacks can be addressed by other protocols with appropriate threat models); moreover, for simplicity, we examine only attacks that involve packet dropping.
However, our approach is easily extended to address other attacks, such as packet modification or reordering, similar to our previous work. Finally, as in our previous work, the protocol we develop validates traffic whose source and sink routers are uncompromised. A router can be traffic faulty by maliciously dropping packets and protocol faulty by not following the rules of the detection protocol. We say that a compromised router r is traffic faulty with respect to a path segment π during a time interval t if π contains r and, during t, r maliciously drops or misroutes packets that flow through π. A router can drop packets without being faulty, as long as the packets are dropped because the corresponding output interface is congested. A compromised router r can also behave in an arbitrarily malicious way in terms of executing the protocol we present, in which case we indicate r as protocol faulty. A protocol faulty router can send control messages with arbitrarily faulty information, or it can simply not send some or all of them. A faulty router is one that is traffic faulty, protocol faulty, or both.
Reply
#8
[attachment=11085]
Chapter 1
INTRODUCTION
1.1 INTRODUCTION TO PROJECT

The Internet is not a safe place. Unsecured hosts can expect to be compromised within minutes of connecting to the Internet and even well-protected hosts may be crippled with denial-of-service (DoS) attacks. However, while such threats to host systems are widely understood, it is less well appreciated that the network infrastructure itself is subject to constant attack as well. Indeed, through combinations of social engineering and weak passwords, attackers have seized control over thousands of Internet routers. Even more troubling is Mike Lynn’s controversial presentation at the 2005 Black Hat Briefings, which demonstrated how Cisco routers can be compromised via simple software vulnerabilities. Once a router has been compromised in such a fashion, an attacker may interpose on the traffic stream and manipulate it maliciously to attack others—selectively dropping, modifying, or rerouting packets.
Several researchers have developed distributed protocols to detect such traffic manipulations, typically by validating that traffic transmitted by one router is received unmodified by another. However, all of these schemes—including our own—struggle in interpreting the absence of traffic. While a packet that has been modified in transit represents clear evidence of tampering, a missing packet is inherently ambiguous: it may have been explicitly blocked by a compromised router or it may have been dropped benignly due to network congestion. In fact, modern routers routinely drop packets due to bursts in traffic that exceed their buffering capacities, and the widely used Transmission Control Protocol (TCP) is designed to cause such losses as part of its normal congestion control behavior. Thus, existing traffic validation systems must inevitably produce false positives for benign events and/or produce false negatives by failing to report real malicious packet dropping.
In this project, we develop a compromised router detection protocol that dynamically infers the precise number of congestive packet losses that will occur. Once the congestion ambiguity is removed, subsequent packet losses can be safely attributed to malicious actions. We believe our protocol is the first to automatically predict congestion in a systematic manner and that it is necessary for making any such network fault detection practical. In the remainder of this paper, we briefly survey the related background material, evaluate options for inferring congestion, and then present the assumptions, specification, and a formal description of a protocol that achieves these goals. We have evaluated our protocol in a small experimental network and demonstrate that it is capable of accurately resolving extremely small and fine-grained attacks.
1.2 PROJECT OVERVIEW
We consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action because normal network congestion can produce the same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create
unnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.
Chapter 2
SYSTEM ANALYSIS
2.1 Existing System

The earliest work on fault-tolerant forwarding is due to Perlman, who developed a robust routing system based on source routing, digitally signed route-setup packets, and reserved buffers. While groundbreaking, Perlman’s work required significant commitments of router resources and high levels of network participation to detect anomalies. Since then, a variety of researchers have proposed lighter weight protocols for actively probing the network to test whether packets are forwarded in a manner consistent with the advertised global topology. Conversely, the 1997 WATCHERS system detects disruptive routers passively via a distributed monitoring algorithm that detects deviations from a “conservation of flow” invariant. However, work on WATCHERS was abandoned, in part due to limitations in its distributed detection protocol, its overhead, and the problem of ambiguity stemming from congestion.
2.2 Proposed System
We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur.
Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.
2.3 OVERVIEW OF MODULES
The project is divided into 5 modules. They are:
1. GUI Design
2. Protocol Utilization
3. Packet Transmission Details
4. CLP Calculation using Bayesian Theorem
5. Identifying Normal Packet using Threshold value.
Module 1: GUI Design
Using swing concepts in Java, we allocate area for the statistical characterization for the transmitted packets. So that the design is consistent and efficient for the user to interact with the software.
Module 2: Protocol Utilization
Packets are transmitted through the use of networking protocols, and ports are distinguished by different colors. I consider two types of protocols, TCP and UDP. Here I use the acknowledged TCP protocol to transmit the packets.
Module 3: PTD (Packet Transmission Details)
Packet scores are generated based on the protocol values, the size of the packet and the destination.
Module 4: CLP Calculation using Bayesian Theorem
The score of each packet is generated and the probability calculation is performed. The attribute value of each packet is compared with the baseline profile value, and the packets are sorted.
Module 5: Identifying NP using Threshold Value
In this module, the threshold value is calculated and compared with the attribute value of each packet. Packets are discarded based on this comparison with the threshold value.
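Since the document does not give the actual formulas for modules 4 and 5, here is a purely illustrative sketch under assumed Poisson loss models and made-up parameters: Bayes' theorem scores how likely an observed loss count is to be malicious, and a threshold on that score classifies the interval.

```java
// Illustrative sketch (my own; the loss models and parameters are invented)
// of modules 4 and 5: a Bayesian score for an observed loss count, followed
// by a threshold comparison to classify the measurement interval.
public class LossClassifier {
    // Poisson pmf, used here as a toy likelihood model for loss counts.
    static double poisson(double mean, int k) {
        double p = Math.exp(-mean);
        for (int i = 1; i <= k; i++) p *= mean / i;
        return p;
    }

    // Posterior probability that `losses` came from the malicious model.
    public static double posteriorMalicious(int losses, double congestiveMean,
                                            double maliciousMean, double priorMalicious) {
        double pm = poisson(maliciousMean, losses) * priorMalicious;
        double pc = poisson(congestiveMean, losses) * (1 - priorMalicious);
        return pm / (pm + pc);
    }

    public static boolean flag(int losses, double threshold) {
        // Toy parameters: congestion typically loses ~2 packets per interval,
        // an attack ~20; malicious intervals assumed rare (prior 1%).
        return posteriorMalicious(losses, 2.0, 20.0, 0.01) > threshold;
    }

    public static void main(String[] args) {
        System.out.println(flag(25, 0.5)); // a large loss count scores as malicious
    }
}
```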
Chapter 3
ALGORITHM/TECHNIQUES USED
3.1 RED Algorithm Description

RED monitors the average queue size avg, based on an exponentially weighted moving average: avg ← (1 − w) · avg + w · q, where q is the actual queue size and w is the weight of a low-pass filter. RED uses three more parameters: a minimum threshold min_th, a maximum threshold max_th, and a maximum dropping probability max_p. Using these, RED dynamically computes a dropping probability in two steps for each packet it receives. First, it computes an interim probability p_b = max_p · (avg − min_th) / (max_th − min_th). Further, the RED algorithm tracks count, the number of packets since the last dropped packet. The final dropping probability, p = p_b / (1 − count · p_b), is specified to increase slowly as count increases.
3.2 Methodology And Specifications
There are inherently two threats posed by a compromised router. The attacker may subvert the network control plane (e.g., by manipulating the routing protocol into false route updates) or may subvert the network data plane and forward individual packets incorrectly. The first set of attacks have seen the widest interest and the most activity—largely due to their catastrophic potential. By violating the routing protocol itself, an attacker may cause large portions of the network to become inoperable. Thus, there have been a variety of efforts to impart authenticity and consistency guarantees on route update messages with varying levels of cost and protection. We do not consider this class of attacks in this paper. Instead, we have focused on the less well-appreciated threat of an attacker subverting the packet forwarding process on a compromised router. Such an attack presents a wide set of opportunities including DoS, surveillance, man-in-the-middle attacks, replay and insertion attacks, and so on. Moreover, most of these attacks can be trivially implemented via the existing command shell languages in commodity routers. The earliest work on fault-tolerant forwarding is due to Perlman, who developed a robust routing system based on source routing, digitally signed route-setup packets, and reserved buffers.
While groundbreaking, Perlman’s work required significant commitments of router resources and high levels of network participation to detect anomalies. Since then, a variety of researchers have proposed lighter weight protocols for actively probing the network to test whether packets are forwarded in a manner consistent with the advertised global topology. Conversely, the 1997 WATCHERS system detects disruptive routers passively via a distributed monitoring algorithm that detects deviations from a “conservation of flow” invariant. However, work on WATCHERS was abandoned, in part due to limitations in its distributed detection protocol, its overhead, and the problem of ambiguity stemming from congestion. Finally, our own work broke the problem into three pieces: a traffic validation mechanism, a distributed detection protocol, and a rerouting countermeasure. In our earlier work, we focused on the detection protocol, provided a formal framework for evaluating the accuracy and precision of any such protocol, and described several practical protocols that allow scalable implementations.
However, we also assumed that the problem of congestion ambiguity could be solved, without providing a solution. This paper presents a protocol that removes this assumption.
INFERRING CONGESTIVE LOSS
In building a traffic validation protocol, it is necessary to explicitly resolve the ambiguity around packet losses. Should the absence of a given packet be seen as malicious or benign? In practice,
Reply
#9
In this paper, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream ofpackets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for somevictim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action because normal network congestion canproduce the same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering capacities.Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets implymalicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly createunnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised routerdetection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet lossesthat will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. Wehave tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.
1 INTRODUCTION
THE Internet is not a safe place. Unsecured hosts can expect to be compromised within minutes of connecting to the Internet, and even well-protected hosts may be crippled with denial-of-service (DoS) attacks. However, while such threats to host systems are widely understood, it is less well appreciated that the network infrastructure itself is subject to constant attack as well. Indeed, through combinations of social engineering and weak passwords, attackers have seized control over thousands of Internet routers [1], [2]. Even more troubling is Mike Lynn's controversial presentation at the 2005 Black Hat Briefings, which demonstrated how Cisco routers can be compromised via simple software vulnerabilities. Once a router has been compromised in such a fashion, an attacker may interpose on the traffic stream and manipulate it maliciously to attack others by selectively dropping, modifying, or rerouting packets. Several researchers have developed distributed protocols to detect such traffic manipulations, typically by validating that traffic transmitted by one router is received unmodified by another [3], [4]. However, all of these schemes, including our own, struggle in interpreting the absence of traffic. While a packet that has been modified in transit represents clear evidence of tampering, a missing packet is inherently ambiguous: it may have been explicitly blocked by a compromised router, or it may have been dropped benignly due to network congestion. In fact, modern routers routinely drop packets due to bursts in traffic that exceed their buffering capacities, and the widely used Transmission Control Protocol (TCP) is designed to cause such losses as part of its normal congestion control behavior.
Thus, existing traffic validation systems must inevitably produce false positives for benign events and/or produce false negatives by failing to report real malicious packet dropping. In this paper, we develop a compromised router detection protocol that dynamically infers the precise number of congestive packet losses that will occur. Once the congestion ambiguity is removed, subsequent packet losses can be safely attributed to malicious actions. We believe our protocol is the first to automatically predict congestion in a systematic manner and that it is necessary for making any such network fault detection practical. In the remainder of this paper, we briefly survey the related background material, evaluate options for inferring congestion, and then present the assumptions, specification, and a formal description of a protocol that achieves these goals. We have evaluated our protocol in a small experimental network and demonstrate that it is capable of accurately resolving extremely small and fine-grained attacks.
2 BACKGROUND
There are inherently two threats posed by a compromised router. The attacker may subvert the network control plane (e.g., by manipulating the routing protocol into false route updates) or may subvert the network data plane and forward individual packets incorrectly. The first set of attacks has seen the widest interest and the most activity, largely due to their catastrophic potential. By violating the routing protocol itself, an attacker may cause large portions of the network to become inoperable. Thus, there have been a variety of efforts to impart authenticity and consistency guarantees on route update messages with varying levels of cost and protection [5], [6], [7], [8], [9], [10]. We do not consider this class of attacks in this paper. Instead, we have focused on the less well-appreciated threat of an attacker subverting the packet forwarding process on a compromised router. Such an attack presents a wide set of opportunities, including DoS, surveillance, man-in-the-middle attacks, replay and insertion attacks, and so on. Moreover, most of these attacks can be trivially implemented via the existing command shell languages in commodity routers. The earliest work on fault-tolerant forwarding is due to Perlman [11], who developed a robust routing system based on source routing, digitally signed route-setup packets, and reserved buffers. While groundbreaking, Perlman's work required significant commitments of router resources and high levels of network participation to detect anomalies. Since then, a variety of researchers have proposed lighter weight protocols for actively probing the network to test whether packets are forwarded in a manner consistent with the advertised global topology [5], [12], [13].
Conversely, the 1997 WATCHERS system detects disruptive routers passively via a distributed monitoring algorithm that detects deviations from a "conservation of flow" invariant [14], [3]. However, work on WATCHERS was abandoned, in part due to limitations in its distributed detection protocol, its overhead, and the problem of ambiguity stemming from congestion [15]. Finally, our own work broke the problem into three pieces: a traffic validation mechanism, a distributed detection protocol, and a rerouting countermeasure. In [16] and [4], we focused on the detection protocol, provided a formal framework for evaluating the accuracy and precision of any such protocol, and described several practical protocols that allow scalable implementations. However, we also assumed that the problem of congestion ambiguity could be solved, without providing a solution. This paper presents a protocol that removes this assumption.
3 INFERRING CONGESTIVE LOSS
In building a traffic validation protocol, it is necessary to explicitly resolve the ambiguity around packet losses. Should the absence of a given packet be seen as malicious or benign? In practice, there are three approaches for addressing this issue:
Static Threshold. Low rates of packet loss are assumed to be congestive, while rates above some predefined threshold are deemed malicious.
Traffic modeling. Packet loss rates are predicted as a function of traffic parameters, and losses beyond the prediction are deemed malicious.
Traffic measurement. Individual packet losses are predicted as a function of measured traffic load and router buffer capacity. Deviations from these predictions are deemed malicious.

Most traffic validation protocols, including WATCHERS [3], Secure Traceroute [12], and our own work described in [4], analyze aggregate traffic over some period of time in order to amortize monitoring overhead over many packets. For example, one validation protocol described in [4] maintains packet counters in each router to detect if traffic flow is not conserved from source to destination. When a packet arrives at router r and is forwarded to a destination that will traverse a path segment ending at router x, r increments an outbound counter associated with router x. Conversely, when a packet arrives at router r via a path segment beginning with router x, it increments its inbound counter associated with router x. Periodically, router x sends a copy of its outbound counters to the associated routers for validation. Then, a given router r can compare the number of packets that x claims to have sent to r with the number of packets it counts as being received from x, and it can detect the number of packet losses.

Thus, over some time window, a router simply knows that out of m packets sent, n were successfully received. To address congestion ambiguity, all of these systems employ a predefined threshold: if more than this number is dropped in a time interval, then one assumes that some router is compromised. However, this heuristic is fundamentally flawed: how does one choose the threshold? In order to avoid false positives, the threshold must be large enough to include the maximum number of possible congestive legitimate packet losses over a measurement interval. Thus, any compromised router can drop that many packets without being detected. Unfortunately, given the nature of the dominant TCP, even small numbers of losses can have significant impacts.
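The counter scheme described above can be sketched in a few lines. This is a toy illustration only; the class and function names (`Router`, `validate`) and the simple per-neighbor counters are assumptions made for clarity, not the paper's actual implementation:

```python
from collections import defaultdict

class Router:
    """Toy model of the per-neighbor counters from the validation protocol in [4]."""
    def __init__(self, name):
        self.name = name
        self.outbound = defaultdict(int)  # packets sent toward a segment ending at router x
        self.inbound = defaultdict(int)   # packets received via a segment beginning at x

    def forward(self, toward):
        # Count a packet forwarded toward the path segment ending at `toward`.
        self.outbound[toward] += 1

    def receive(self, via):
        # Count a packet received via the path segment beginning at `via`.
        self.inbound[via] += 1

def validate(sender, receiver, threshold):
    """Compare the sender's claimed outbound count with the receiver's inbound
    count. Losses above `threshold` are flagged -- this is exactly the static
    threshold heuristic the paper argues is unsound."""
    sent = sender.outbound[receiver.name]
    got = receiver.inbound[sender.name]
    losses = sent - got
    return losses, losses > threshold

# Example: x claims 1000 packets sent; r counted 990 received.
x, r = Router("x"), Router("r")
for _ in range(1000):
    x.forward("r")
for _ in range(990):
    r.receive("x")
losses, flagged = validate(x, r, threshold=20)  # 10 losses, below threshold
```

Note how the 10 missing packets go undetected here: any threshold large enough to tolerate worst-case congestion also gives an attacker that much room to drop packets silently.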
Subtle attackers can selectively target the traffic flows of a single victim and, within these flows, drop only those packets that cause the most harm. For example, losing a TCP SYN packet used in connection establishment has a disproportionate impact on a host because the retransmission time-out must necessarily be very long (typically 3 seconds or more). Other seemingly minor attacks that cause TCP time-outs can have similar effects, a class of attacks well described in [17]. All things considered, it is clear that the static threshold mechanism is inadequate since it allows an attacker to mount vigorous attacks without being detected. Instead of using a static threshold, if the probability of congestive losses can be modeled, then one could resolve ambiguities by comparing measured loss rates to the rates predicted by the model. One approach for doing this is to predict congestion analytically as a function of individual traffic flow parameters, since TCP explicitly responds to congestion. Indeed, the behavior of TCP has been exhaustively studied
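The "traffic measurement" alternative can be contrasted with the static threshold using a toy drop-tail queue. Everything here is an illustrative assumption (the function names, the fixed service rate, the simplified queue dynamics); the paper's actual protocol derives its prediction from measured traffic rates and buffer sizes, not from this simplification:

```python
def predicted_congestive_drops(arrivals, service_rate, buffer_size):
    """Toy drop-tail queue: given per-interval arrival counts, a fixed
    per-interval service rate, and a finite buffer, count how many packets
    even a correctly functioning router would be forced to drop."""
    queued = 0
    drops = 0
    for a in arrivals:
        queued += a
        if queued > buffer_size:          # buffer overflow: tail drops
            drops += queued - buffer_size
            queued = buffer_size
        queued = max(0, queued - service_rate)  # drain the queue
    return drops

def is_suspicious(observed_losses, arrivals, service_rate, buffer_size):
    # Only losses beyond what congestion alone can explain are attributed
    # to malicious behavior, removing the congestion ambiguity.
    return observed_losses > predicted_congestive_drops(
        arrivals, service_rate, buffer_size)

# Example: a burst of 20 arrivals into a 10-packet buffer forces 10 drops.
expected = predicted_congestive_drops([5, 5, 20, 0], service_rate=10, buffer_size=10)
```

With this style of check, 10 observed losses during the burst are benign, while 15 losses in the same interval exceed what congestion can explain and are flagged, regardless of how small the excess is in absolute terms.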

Reference: http://studentbank.in/report-detecting-m...z1P0356Skh
Reply
#10
I need this project; I'm presently working on it.
If anybody has this project, please mail me whatever you have about it; it would help me a lot.

Thanks very much in advance.
My id: sasidhar527[at]gmail.com
Reply
#11
To get information about the topic "detecting malicious packet losses" (full report, PPT) and related topics, refer to the page links below:

http://studentbank.in/report-detecting-m...ses--11712

http://studentbank.in/report-detecting-m...acket-loss

http://studentbank.in/report-detecting-m...ket-losses

http://studentbank.in/report-detecting-m...ses--20238

http://studentbank.in/report-detecting-m...oss?page=2

http://studentbank.in/report-detecting-m...?pid=29436

Reply
#12
I need the UML and dataflow diagrams; please send them as soon as possible.
Reply
