05-05-2011, 11:54 AM
Abstract
In a bandwidth-flooding attack, compromised sources send high-volume traffic to the target with the purpose of causing congestion in its tail circuit and disrupting its legitimate communications. In this paper, we present Active Internet Traffic Filtering (AITF), a network-layer defense mechanism against such attacks. AITF enables a receiver to contact misbehaving sources and ask them to stop sending it traffic; each source that has been asked to stop is policed by its own Internet service provider (ISP), which ensures its compliance. An ISP that hosts misbehaving sources either supports AITF (and accepts to police its misbehaving clients), or risks losing all access to the complaining receiver—this is a strong incentive to cooperate, especially when the receiver is a popular public-access site. We show that AITF preserves a significant fraction of a receiver's bandwidth in the face of bandwidth flooding, and does so at a per-client cost that is already affordable for today's ISPs; this per-client cost is not expected to increase, as long as botnet-size growth does not outpace Moore's law. We also show that even the first two networks that deploy AITF can maintain their connectivity to each other in the face of bandwidth flooding. We conclude that the network layer of the Internet can provide an effective, scalable, and incrementally deployable solution against bandwidth-flooding attacks.

Index Terms—Denial-of-service defenses, network-level security and protection, traffic filtering.
I. INTRODUCTION
IN A DISTRIBUTED bandwidth-flooding attack, a large number of compromised sources send high-volume traffic to the target in order to create congestion and packet loss in its tail circuit; as a result, the target's communication to legitimate sources deteriorates. It has been shown that such attacks can exploit the behavior of legitimate TCP sources (which back off in the face of packet loss) to dramatically reduce their throughput or, in the case of long-lived flows, drive it to zero [1].

Real-life reports complement such analysis: The first well-documented incident we are aware of is the 2001 attack against the Gibson Research Corporation (GRC) web site. To block the flood, GRC analyzed the undesired traffic, determined its sources, and asked their Internet service provider (ISP) to manually install filters that blocked traffic from these sources; in the meantime, their site was unreachable for more than 30 hours [2]. More recent attacks are less well documented (the victims are increasingly unwilling to reveal the details), but hint that botnet sizes have increased beyond thousands of sources, while undesired traffic is harder to identify—an article on a 2003 attack against an online betting site reports that the undesired traffic came from more than 20,000 sources, its rate ranged from 1.5 to 3 Gbps, and it was aimed at routers, DNS servers, mail servers, and web sites [3]. Despite the magnitude of the problem and the indications that it is getting worse, no effective solution has been deployed yet.

There are two basic steps in stopping a bandwidth-flooding attack: 1) identifying undesired traffic and 2) blocking it; this paper addresses the latter. To prevent undesired traffic from causing legitimate-traffic loss, it must be blocked before entering the target's tail circuit, for example, inside the target's ISP.
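The throughput collapse mentioned above can be illustrated with the well-known steady-state TCP throughput approximation, throughput ≈ MSS / (RTT · sqrt(2p/3)), where p is the loss rate. This is a back-of-the-envelope sketch with hypothetical flow parameters, not a result from the paper:

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state throughput of a long-lived TCP flow,
    using the standard model: throughput ~ MSS / (RTT * sqrt(2p/3)).
    Returns bits per second."""
    return 8 * mss_bytes / (rtt_s * sqrt(2 * loss_rate / 3))

# Hypothetical long-lived flow: 1460-byte segments, 100 ms RTT.
# As flooding drives up the loss rate, throughput falls sharply.
for p in (0.001, 0.01, 0.1):
    print(f"loss rate {p:>5}: {tcp_throughput_bps(1460, 0.1, p) / 1e6:6.2f} Mbps")
```

Because throughput scales as 1/sqrt(p), a hundredfold increase in loss rate cuts a flow's throughput by a factor of ten, which is why even moderate congestion in the tail circuit severely degrades legitimate communication.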
The first solution that comes to mind is to automate the approach followed by GRC: one can imagine an ISP service, in which a flooding target sends filtering requests to its ISP, and, in response, the ISP installs wire-speed filters (i.e., filters that do not affect packet-forwarding performance) in its routers to satisfy these requests; each filtering request specifies traffic from one undesired-traffic source to the target.

The problem with this approach is that it requires more resources than ISPs can afford: Wire-speed filters in routers are a scarce resource, and this is not expected to change in the near future. Modern hardware routers forward packets at high rates that allow only a few lookups per forwarded packet; to reduce the number of per-packet lookups, router manufacturers store filters—as well as any state that must be looked up per packet, e.g., the router's forwarding table—in TCAM (ternary content-addressable memory), which allows for parallel accesses. However, because of its special features, TCAM is more expensive and consumes more space and power [4] than conventional memory; as a result, a router linecard or supervisor-engine card typically supports a single TCAM chip with tens of thousands of entries. For example, at the time of writing, the Catalyst 4500, a mid-range switch, provides a 64,000-entry TCAM to be shared among all its interfaces (from 48 to 384 100-Mbps interfaces); the Cisco 12000, a high-end router used at the Internet core, provides 20,000 entries that operate at line speed per linecard (each linecard has up to four 1-Gbps interfaces). So, depending on how an ISP connects its clients to its network, each client can typically claim from a few hundred to a few thousand filters—not enough to block the attacks observed today and not nearly enough to block the attacks expected in the near future.
Download full report
http://infoscience.epfl.ch/record/128395...fTon09.pdf