Design and Implementation of a Network Monitoring Tool
#1



The extensive use of computers and networks for exchanging information has also contributed to the growth and spread of crime committed through them. Law enforcement agencies need to keep up with emerging trends in these areas for crime detection and prevention. Among the several needs of such agencies is the need to monitor, detect, and analyze undesirable network traffic. However, monitoring, detecting, and analyzing this traffic may conflict with the goal of maintaining the privacy of the individuals whose network communications are being monitored.

In this thesis, we discuss the design and implementation of the basic framework of a network monitoring tool - PickPacket - that can address the conflicting issues of network monitoring and privacy through its judicious use. PickPacket comprises three components - a packet filter, post-processing applications, and a GUI for providing a detailed analysis of the collected data. The packet filter selects packets based on the IP addresses, transport layer port numbers, and application layer content present in them. The implementation of application layer protocol filters for Telnet and SMTP and of a text string filter is discussed in this report. We also describe the design and implementation of the post-processing applications.
#2
[attachment=15193]
Introduction
The use of computers has increased rapidly in the last few decades. Coupled with this has been the exponential growth of the Internet. Computers can now exchange large volumes of information, which has resulted in an ever-increasing need for effective tools that can monitor the network.
Such monitoring tools help network administrators in evaluating and diagnosing performance problems with servers, the network wire, hubs, and applications. Since machines cannot distinguish personalities and content, networks can also be used for communication and exchange of information pertaining to unlawful activity. This is why law enforcement agencies have shown increased interest in network monitoring tools. It is felt that careful and judicious monitoring of data flowing across the network can help detect and prevent crime. Such monitoring tools, therefore, have an important role in intelligence gathering. Companies that want to safeguard their recent developments and research from falling into the hands of their competitors also resort to intelligence gathering. Thus there is a pressing need to monitor, detect, and analyze undesirable network traffic.
However, monitoring, detecting, and analyzing this traffic may be opposed to the goal of maintaining the privacy of the individuals whose network communications are being monitored. This thesis describes PickPacket - a network monitoring tool - that can address the conflicting issues of network monitoring and privacy through its judicious use. The tool was developed as part of a research project sponsored by the Ministry of Communications and Information Technology, New Delhi. The basic framework for this tool has also been discussed in Reference [23].
1.1 Sniffers
Network monitoring tools are also called sniffers. Network sniffers are named after a product called the Sniffer Network Analyzer, introduced in 1988 by Network General Corporation (now Network Associates Incorporated), which also trademarked the word "sniffer". However, the word continues to be in popular use for lack of other convenient synonyms.
Several tools exist that can monitor network traffic. Usually such tools put the network card of a computer into promiscuous mode, which enables the computer to listen to the entire traffic on that section of the network. An additional level of filtering can then be applied based on the IP-related header data present in each packet. Usually such filtering specifies simple criteria for the IP addresses and ports present in the packet. Filtered packets are written to disk, and post-capture analysis is done on them to gather the required information.
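Header-based filtering of this kind can be sketched in a few lines. The function below is illustrative only (the name and interface are ours, not from any particular tool); it assumes a raw IPv4 packet with no link-layer header, carrying TCP or UDP, and checks it against an optional source address and destination port.

```python
import struct

def matches(packet: bytes, src_ip: str = None, dst_port: int = None) -> bool:
    """Check a raw IPv4 packet against simple header criteria.

    Returns True if the packet should be kept for post-capture analysis.
    """
    ihl = (packet[0] & 0x0F) * 4  # IPv4 header length in bytes
    src = ".".join(str(b) for b in packet[12:16])  # source address field
    if src_ip is not None and src != src_ip:
        return False
    if dst_port is not None:
        # In both TCP and UDP the destination port is the second
        # 16-bit field after the IP header.
        (port,) = struct.unpack("!H", packet[ihl + 2:ihl + 4])
        if port != dst_port:
            return False
    return True
```

A real filter would also validate the IP version, header checksum, and protocol field before trusting these offsets.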
However, this simplistic model of packet sniffing and filtering has its drawbacks. First, as only a minimal amount of filtering is carried out on received packets, the amount of data for post-processing becomes enormous. Second, no filtering is done on the basis of the content of the packet payload. Third, as all the data is dumped to disk, the privacy of innocent individuals who may be communicating while the network is monitored may be violated. This motivates the design and implementation of PickPacket.
1.2 PickPacket
The purpose of PickPacket, like that of the simple filter discussed above, is to monitor network traffic and to copy only selected packets for further analysis. However, the scope and complexity of the criteria that can be specified for selecting packets is greatly increased. The criteria can be specified at several layers of the protocol stack: the network layer (IP addresses), the transport layer (port numbers), and the application layer (application-dependent criteria such as file names, e-mail IDs, URLs, text string searches, etc.). The filtering component of this tool does not inject any packets onto the network. Once packets have been selected based on these criteria, they are dumped to permanent storage. A special provision has been made in the tool for two modes of capturing packets, depending on the granularity with which data has to be captured: the "PEN" mode and the "FULL" mode of operation. In the first mode, it is only established that a packet matching a particular user-specified criterion was encountered, and the minimal information required for detailed investigation is captured. In the second mode, the data of such a packet is also captured. Judicious use of these features can help protect the privacy of innocent users.
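The privacy-relevant difference between the two modes can be illustrated with a small sketch. The record layout and names here are our own invention, not PickPacket's actual on-disk format: "PEN" mode keeps only minimal connection metadata, while "FULL" mode additionally stores the packet data.

```python
def record(packet_meta: dict, payload: bytes, mode: str = "PEN") -> dict:
    """Build a capture record at one of two granularities.

    PEN mode: only evidence that a matching packet was seen (endpoints,
    timestamp). FULL mode: the packet content is stored as well.
    """
    entry = {
        "src": packet_meta["src"],
        "dst": packet_meta["dst"],
        "time": packet_meta["time"],
    }
    if mode == "FULL":
        entry["data"] = payload  # full content, for detailed investigation
    return entry
```

Running an investigation in PEN mode first, and switching to FULL mode only for endpoints already under suspicion, is the kind of judicious use the text describes.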
The packets dumped to disk are analyzed off-line. Post-dump analysis makes available to the investigator separate files for different connections. The tool provides a summary of all the connections and also provides an interface to view the recorded traffic. This interface extensively uses existing software to render the captured data to the investigator; for instance, e-mail may be rendered through Outlook via the interface provided. A GUI for generating the rules input to the filter is also provided.
1.3 Organization of the Report
This thesis focuses in detail on filtering data packets belonging to applications based on the File Transfer Protocol (FTP) [38] and the Hypertext Transfer Protocol (HTTP) [17]. Chapter 2 and Chapter 3 provide the background needed to understand sniffers and PickPacket in general. Chapter 2 discusses sniffers in greater detail. Chapter 3 describes the high-level design of PickPacket. Chapter 4 discusses the design and implementation details of filtering based on FTP, and Chapter 5 discusses the same for HTTP. The rest of the thesis describes testing strategies. The final chapter concludes the thesis with suggestions for further work.
#3
[attachment=15210]
Introduction
With the increase in the use of computer networks for information exchange, regulation and control of the data transferred on these networks is required to secure the intellectual property of an organization. Thus, highly customizable network monitoring tools that capture data transmitted on the network are being designed. Some of these tools also analyze the collected data and provide valuable information to the user.
Network monitoring tools perform their task by sniffing packets from the network and filtering them on the basis of user-specified rules. Tools that provide the facility of specifying simple rules for filtering packets are called packet filters. Tools that filter packets based on complex rules and perform post-capture analysis of the collected data are termed network monitoring tools. The following section describes packet sniffers. Later in this chapter, we focus on different packet filtering mechanisms and network monitoring tools. This leads to the motivation for designing and developing a new network monitoring tool - PickPacket.
1.1 Packet Sniffers
The basic goal of network monitoring is to read packets from the network and analyze their contents. At the lowest level, this requires that the network interface be able to read all packets, irrespective of their destination. This can be ensured by properly configuring the interface. This activity of monitoring packets on the network is known as packet sniffing. Packet sniffers are simple monitoring tools that can only dump the network traffic to storage media.
1.2 Packet Filters
The amount of information that flows on a network is generally quite high, with packets corresponding to different protocols, and even a simple analysis of this data takes time. This time can be reduced considerably by allowing the user to specify rules for capturing packets selectively. For example, the user should be allowed to specify rules that capture all packets with a given IP address. This would reduce the amount of space required for saving the packets and also lessen the time required for analysis. Packet filters provide the facility of specifying such simple rules. These rules comprise values corresponding to various fields present in the protocol headers of a packet. If the protocol headers of a packet contain these values, the packet is saved; otherwise it is dropped by the filter. We discuss below some existing packet filters.
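The save-or-drop decision just described reduces to matching header fields against a rule set. As a sketch (the rule representation is our own, not that of any particular filter), a packet whose already-parsed headers satisfy every field/value pair of at least one rule is kept:

```python
def keep(headers: dict, rules: list) -> bool:
    """Return True if some rule's field/value pairs all match the
    packet's protocol headers; otherwise the packet is dropped."""
    return any(
        all(headers.get(field) == value for field, value in rule.items())
        for rule in rules
    )
```

Disjunction across rules with conjunction inside each rule is the usual shape of such filter specifications.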
The CMU/Stanford Packet Filter (CSPF) [9] was the first UNIX-based, kernel-resident, protocol-independent packet demultiplexer. It provides individual user processes great flexibility in selecting which packets they will receive. It eventually evolved into the Network Interface Tap (NIT) [10] under SunOS 3, and later into the Berkeley Packet Filter (BPF) [8]. Sun implemented NIT to capture packets and etherfind to print packet headers. Although these packet filters are implemented inside the kernel, they provide an architecture over which user-level network monitoring tools can be built.
In 1993, a new packet filtering mechanism, the Berkeley Packet Filter (BPF) [8], was developed by Steven McCanne and Van Jacobson. BPF essentially comprises two components: filter code and an interpreter that executes the filter code over each packet read from the network. The filter code targets a hypothetical machine consisting of an accumulator, an index register, a scratch memory store, and a program counter. It is written in an assembly-like language and includes operations such as load, store, branch, and return, along with some register transfer functions. BPF offers substantial performance improvements over other packet filtering mechanisms for the following two reasons:
1. There are two approaches to filtering packets: a boolean expression tree (used by CSPF) and a directed acyclic control flow graph, or CFG (first used by NNstat [17] and then by BPF). The two models are computationally equivalent. However, in implementation the tree model maps naturally into code for a stack machine, while the CFG model maps naturally into code for a register machine. Since most machines are register based, the CFG approach leads to a more efficient implementation.
2. When a packet arrives at the network interface, the network interface driver saves it in its buffer and then copies it to the system protocol stack. In the case of BPF, however, the driver, after saving the packet in its buffer, calls BPF, which operates on the stored packet and decides whether it is to be accepted. No copy of the packet is made for this filtering process. This gives BPF a great performance advantage over filtering mechanisms (e.g., NIT [10]) that copy the packet before filtering it.
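The interpreter model is easy to picture with a toy version. The opcode names and encoding below are invented for illustration and are not the real BPF instruction set; what is faithful is the shape of the machine: an accumulator, a program counter, and a return value giving the number of bytes to accept (0 meaning drop).

```python
def run_filter(program, packet: bytes) -> int:
    """Interpret a tiny BPF-like program over one packet.

    A is the accumulator, pc the program counter. Invented opcodes:
      ("ld_byte", off)    A <- packet[off]
      ("jeq", (v, skip))  if A == v, skip the next `skip` instructions
      ("ret", n)          accept n bytes of the packet (0 = drop)
    """
    A, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ld_byte":
            A = packet[arg]
        elif op == "jeq":
            if A == arg[0]:
                pc += arg[1]
        elif op == "ret":
            return arg
        pc += 1
    return 0

# "Accept TCP": load the IPv4 protocol byte (offset 9) and compare with 6.
ACCEPT_TCP = [
    ("ld_byte", 9),
    ("jeq", (6, 1)),   # TCP? jump over the drop
    ("ret", 0),        # not TCP: drop
    ("ret", 65535),    # TCP: accept whole packet
]
```

Because the interpreter only reads the packet buffer, it can run directly on the driver's copy, which is exactly the zero-copy advantage described in point 2 above.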
The BSD socket interface is a de facto standard for writing network-based applications. Thus the Linux operating system came up with the Linux Socket Filter (LSF) [16]. LSF is an in-kernel packet filter derived from BPF. It provides the user with a packet filtering facility on BSD sockets. Among other packet filters, tcpdump [6] is probably the most popular packet capturing tool in the UNIX community. It is based on BPF and has the packet capturing and filtering facilities implemented in a separate library, pcap [5]. A wide range of network monitoring tools integrate the pcap library.
#4

[attachment=15212]
Introduction
Most organizations today depend on networked computer systems as an essential infrastructure for doing business. Billions of dollars are transferred around the world over computer networks. Increased connectivity and the use of the Internet have exposed organizations to subversion. The loss to an organization due to the unavailability of critical computing resources or the theft of intellectual property because of malicious actions can be significant. It has therefore become critical to protect an organization's information systems and communication networks from malicious attacks and unauthorized access.
An Intrusion Detection System (IDS) is used to inspect data for malicious or anomalous activities and detect attacks or unauthorized use of systems, networks, and related resources. It seeks to increase the security and hence the availability, integrity, and confidentiality of computer systems by providing information that would enable the system administrator to take necessary actions.
There are broadly two types of Intrusion Detection Systems: host-based IDS and network-based IDS. A host-based IDS uses system and audit logs as its data source, while a network-based IDS uses network traffic. A host-based Intrusion Detection System consists of an agent on a host which identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, etc.), and other host activities. In a network-based Intrusion Detection System, sensors are placed at strategic points within the network to capture all network traffic flows and analyze the content of individual packets for malicious activities such as denial-of-service attacks, buffer overflow attacks, etc. Each approach has its respective strengths and weaknesses, and some attacks can be detected only by one or the other. For example, certain types of encryption present challenges to network-based IDS. Depending on where the encryption resides within the protocol stack, it may leave a network-based system blind to attacks which a host-based IDS can detect. Similarly, some attacks can only be detected by examining packet headers for signs of malicious and suspicious activity. A host-based IDS does not see packet headers, so it cannot detect these types of attacks, while a network-based IDS can.
The two main techniques used by Intrusion Detection Systems for detecting attacks are Misuse Detection and Anomaly Detection. In a Misuse Detection system, also known as a signature-based system, well-known attacks are represented by signatures. A signature is a pattern of activity which corresponds to the intrusion it represents. The IDS identifies intrusions by looking for these patterns in the data being analyzed. The accuracy of such a system depends on its signature database. Misuse Detection systems cannot detect novel attacks, nor even slight variations of known attacks.
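Misuse detection in its simplest form is pattern lookup. The sketch below uses two invented, purely illustrative signatures (real systems like Snort use far richer rule languages with protocol fields, offsets, and regular expressions):

```python
# Illustrative signature database: name -> byte pattern. These entries
# are examples only, not drawn from any real signature set.
SIGNATURES = {
    "php-cgi probe": b"/cgi-bin/php",
    "shellshock": b"() { :;};",
}

def match_signatures(payload: bytes) -> list:
    """Report every known signature whose pattern occurs in the data.

    Anything not in SIGNATURES goes unreported - the 'cannot detect
    novel attacks' limitation of misuse detection."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]
```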
An anomaly-based IDS examines ongoing traffic, activity, transactions, or behavior for anomalies on networks or systems that may indicate attack. The underlying principle is the notion that attack behavior differs enough from normal user behavior that it can be detected by cataloging and identifying the differences involved. By creating baselines of normal behavior, anomaly-based IDS systems can observe when current behavior deviates statistically from the norm. This capability theoretically gives an anomaly-based IDS the ability to detect new attacks for which signatures have not been created. The disadvantage of this approach is that there is no clear-cut method for defining normal behavior, so such an IDS can report an intrusion even when the activity is legitimate.
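A minimal statistical model of "deviating from the norm" is a standard-deviation test against a learned baseline. This sketch assumes the monitored quantity is a single numeric metric (say, connections per minute); real anomaly detectors build much richer behavioral profiles:

```python
import statistics

def is_anomalous(baseline, observation, threshold=3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    away from the mean of the baseline measurements."""
    mean = statistics.fmean(baseline)
    sd = statistics.pstdev(baseline)
    if sd == 0:
        # Perfectly constant baseline: any change at all is anomalous.
        return observation != mean
    return abs(observation - mean) / sd > threshold
```

A legitimate but unusual burst of activity will trip this test just as an attack would, which is precisely the false-positive weakness noted above.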
Intrusion Detection Systems trigger thousands of alarms per day [7]. Evaluating intrusion detection alarms and conceiving an appropriate response is a challenging task. From a security administrator's point of view, it is important to reduce the redundancy of alarms, intelligently integrate and correlate security alerts, construct attack scenarios (defined as sequences of related attack steps), and present high-level aggregated information. Correlating alerts to identify an attack scenario can also help forensic analysis, response and recovery, and even prediction of forthcoming attacks. One of the current areas of research in Intrusion Detection Systems is developing methodologies that give the system administrator a succinct, high-level view of attempted intrusions. Various approaches have been developed to correlate and aggregate alerts.
1.1 Problem Statement and Approach
Traditional Intrusion Detection Systems focus on low-level attacks or anomalies and raise alerts. In situations where there are intensive attacks, the number of alerts becomes unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. Moreover, most alerts are not isolated; they are usually steps in multi-stage attacks which try to compromise the security of a system.
The approaches to correlating and aggregating alerts fall into three broad categories: Alert Fusion, Attack Scenario Construction, and Alert Clustering. In Alert Fusion, alerts that are generated by different IDSs in response to a single attack are identified and merged. The aim of Attack Scenario Construction is to identify multi-step attacks that represent a sequence of actions performed by the same attacker. Alert Clustering groups alerts into clusters based on similarities between alert attributes and constructs a generalized attribute from each cluster, where 'similarity' can be defined in various ways.
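The simplest notion of similarity for Alert Clustering is exact agreement on a chosen set of attributes. The attribute names below are hypothetical; graded similarity measures (address prefixes, time windows, attack-type taxonomies) refine this considerably:

```python
def cluster_alerts(alerts, keys=("src_ip", "attack_type")):
    """Group alerts whose chosen attributes are identical.

    Returns a dict mapping each attribute tuple to the list of alerts
    in that cluster - the degenerate, exact-match case of similarity-
    based clustering."""
    clusters = {}
    for alert in alerts:
        signature = tuple(alert[k] for k in keys)
        clusters.setdefault(signature, []).append(alert)
    return clusters
```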
In this thesis, we describe the design and implementation of an Attack Scenario Construction scheme and Automated Report Generation for Sachet, a distributed, real-time, network-based intrusion detection system with centralized control, developed at IIT Kanpur [14, 11]. The Sachet IDS employs both misuse detection and anomaly detection. The architecture of the Sachet IDS is explained in Chapter 3.
The aim of Attack Scenario Construction is to identify logical relations among low-level alerts, correlate them, and provide the system administrator with a condensed view of reported security issues known as attack scenarios. Most intrusions are not isolated, but related as different stages of a series of attacks, with the early stages preparing for the later ones. For example, attackers need to know what vulnerable services are running on a host before they can take advantage of these services. Thus, they typically scan for vulnerable services before they break into the system. As another example, in Distributed Denial of Service (DDoS) attacks, the attacker has to install the DDoS daemon programs on vulnerable hosts before he instructs the daemons to launch an attack against another system. Therefore, in a series of attacks, one or more previous attacks usually prepare for the following attacks, and the success of the previous steps affects the success of the following ones. In other words, there are often logical steps or strategies behind a series of attacks.
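The "previous attacks prepare for following ones" idea can be captured by a prepares-for relation between alert types. The relation and the alert schema below are hypothetical illustrations, not Sachet's actual correlation rules: time-ordered alerts on the same target are linked whenever the earlier type may prepare for the later one.

```python
# Hypothetical prepares-for relation, following the scan -> exploit ->
# daemon-install -> DDoS progression described in the text.
PREPARES_FOR = {
    "portscan": {"exploit"},
    "exploit": {"ddos-daemon-install"},
    "ddos-daemon-install": {"ddos-attack"},
}

def build_scenarios(alerts):
    """Link time-ordered alerts into candidate attack-scenario edges.

    An edge (a, b) means alert a may have prepared for alert b: a is
    earlier, both concern the same target, and a's type can prepare
    for b's type."""
    alerts = sorted(alerts, key=lambda a: a["time"])
    edges = []
    for i, early in enumerate(alerts):
        for late in alerts[i + 1:]:
            if (late["type"] in PREPARES_FOR.get(early["type"], set())
                    and early["target"] == late["target"]):
                edges.append((early["id"], late["id"]))
    return edges
```

Chaining such edges yields the condensed multi-step view of an intrusion that a flat alert log cannot provide.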