Text categorization is the task of assigning predefined categories to natural language text. With the widely used "bag of words" representation, previous research usually assigns a word values such as whether the word appears in the document concerned or how frequently it appears. Although these values are useful for text categorization, they do not fully express the abundant information contained in the document. This paper explores the effect of other types of values, which express the distribution of a word in the document. These novel values assigned to a word are called distributional features; they include the compactness of the appearances of the word and the position of its first appearance. The proposed distributional features are exploited by a tfidf-style equation, and different features are combined using ensemble learning techniques. Experiments show that the distributional features are useful for text categorization. In contrast to using the traditional term frequency values alone, including the distributional features requires only a little additional cost, while the categorization performance can be significantly improved. Further analysis shows that the distributional features are especially useful when documents are long and the writing style is casual.
07-10-2010, 10:52 AM
[u]Text Categorization[/u]
Foundations of Statistical Natural Language Processing
Task Description
Goal: Given a classification scheme, the system decides which class(es) a document belongs to.
A mapping from the document space to the classification scheme.
1-to-1 / 1-to-many
To build the mapping:
Observe known samples already classified in the scheme,
Summarize their features and create rules/formulas,
Decide the classes of new documents according to the rules.
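The three steps above (observe samples, summarize features, classify new documents) can be sketched with a toy bag-of-words classifier. This is only an illustration in Python, not the project's actual .NET implementation; the sample documents, labels, and the nearest-centroid scoring rule are all made up for the example:

```python
from collections import Counter

def bow(text):
    # bag-of-words representation: word -> occurrence count
    return Counter(text.lower().split())

def train(samples):
    # samples: list of (text, label); summarize each class
    # by summing the word counts of its known documents
    centroids = {}
    for text, label in samples:
        centroids.setdefault(label, Counter()).update(bow(text))
    return centroids

def classify(text, centroids):
    # decide the class of a new document: score it against each
    # class profile by weighted word overlap, take the best class
    doc = bow(text)
    def score(c):
        return sum(doc[w] * c[w] for w in doc)
    return max(centroids, key=lambda label: score(centroids[label]))

samples = [
    ("the match ended with a late goal", "sports"),
    ("the team won the final game", "sports"),
    ("stocks fell as markets opened", "finance"),
    ("the bank raised interest rates", "finance"),
]
model = train(samples)
print(classify("the game ended in a draw", model))  # -> sports
```

Real systems replace the raw counts with tfidf weights and a stronger learner, but the document-to-class mapping has this same shape.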
Hi friends,
I'm Rafi, doing my final year in Computer Science and Engineering. I need a ppt or pdf file on Distributional Features for Text Categorization.
Distributional Features for Text Categorization
Abstract
Text categorization is the task of assigning predefined categories to natural language text. With the widely used "bag-of-words" representation, previous research usually assigns a word values that express whether the word appears in the document concerned or how frequently it appears. Although these values are useful for text categorization, they do not fully capture the abundant information contained in the document. This project explores the effect of other types of values, which express the distribution of a word in the document. These novel values assigned to a word are called distributional features; they include the compactness of the appearances of the word and the position of its first appearance. The proposed distributional features are exploited by a tfidf-style equation, and different features are combined using ensemble learning techniques. We conclude that the distributional features are useful for text categorization, especially when they are combined with term frequency or with each other.
Existing system:
The existing system assigns a word with values that express whether the word appears in the document concerned or how frequently it appears. Another system uses statistical phrases: sequences of words that occur contiguously in text in a statistically interesting way, usually called n-grams.
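An n-gram in this sense is just every contiguous run of n words in the text. A minimal sketch (illustrative only; the example sentence is made up):

```python
def ngrams(tokens, n):
    # all contiguous word sequences of length n ("statistical phrases")
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "text categorization assigns predefined categories".split()
print(ngrams(tokens, 2))
# [('text', 'categorization'), ('categorization', 'assigns'),
#  ('assigns', 'predefined'), ('predefined', 'categories')]
```

A real n-gram feature set would then keep only the statistically interesting sequences, e.g. those whose observed frequency is much higher than chance.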
Existing system disadvantages:
• The existing features are not enough for fully capturing the information contained in a document.
• The performance of the system is comparatively slow.
Proposed system:
The proposed distributional features are exploited by a tfidf-style equation, and different features are combined using ensemble learning techniques. The extraction of the distributional features is efficiently implemented using the inverted index constructed for the corpus. Using such an index, for a given word-document pair we can obtain not only the frequency of the word but also the positions where it appears. With the position information and the length of the document, the distribution of the word is constructed and the distributional features are computed.
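Given the position list from the inverted index, both features described above can be computed in one pass. The exact normalization formulas are not stated in this writeup, so the ones below are assumptions chosen only to show the idea (early first appearance and tightly clustered occurrences both score closer to 1):

```python
def distributional_features(positions, doc_length):
    # positions: sorted 0-based token indices where the word occurs
    #            (taken from the positional inverted index)
    # doc_length: number of tokens in the document
    # First-appearance feature: earlier is better (assumed normalization).
    first_app = 1.0 - positions[0] / doc_length
    # Compactness: mean absolute distance of the occurrences from their
    # centroid, normalized by document length; a smaller spread means the
    # word's appearances are more compact (assumed normalization).
    centroid = sum(positions) / len(positions)
    spread = sum(abs(p - centroid) for p in positions) / len(positions)
    compactness = 1.0 - spread / doc_length
    return first_app, compactness

# word occurring early, in one tight cluster
print(distributional_features([2, 3, 4], 100))    # -> (0.98, ~0.993)
# same frequency, but scattered across the document
print(distributional_features([5, 50, 95], 100))  # -> (0.95, 0.7)
```

Note that both example words have term frequency 3, so frequency alone cannot tell them apart; the distributional features can, which is exactly the extra information this project adds on top of tf.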
Proposed system advantages
• Distributional features for text categorization require only a little additional cost.
• Combining traditional term frequency with the distributional features improves the performance of the system.
• The effect of the distributional features is most obvious when the documents are long and the writing style is informal.
Software Requirements:-
Operating System: Windows XP
Platform: Visual Studio .NET 2008
Database: SQL Server 2005
Languages: ASP.NET, C#.NET
Hardware Requirements:-
Hard Disk: 40 GB
Monitor: 15" color, with VGA card support
RAM: minimum 512 MB
Processor: Pentium IV and above (or equivalent)
Processor speed: minimum 1.4 GHz