Computer Science Seminar Abstract And Report 5

Brain Computer Interface

Introduction
The brain-computer interface is a staple of science fiction writing. In its earliest incarnations, no mechanism was thought necessary, as the technology seemed so far-fetched that no explanation was likely. As more became known about the brain, however, the possibility has become more real and the science fiction more technically sophisticated. Recently, the cyberpunk movement has adopted the idea of 'jacking in': sliding 'biosoft' chips into slots implanted in the skull (Gibson, W. 1984).

Although such biosofts are still science fiction, there have been several recent steps toward interfacing the brain and computers. Chief among these are techniques for stimulating and recording from areas of the brain with permanently implanted electrodes and using conscious control of EEG to control computers.

Some preliminary work is being done on synapsing neurons onto silicon transistors and on growing neurons into neural networks on top of computer chips. The most advanced work in designing a brain-computer interface has stemmed from the evolution of traditional electrodes. There are essentially two main problems: stimulating the brain (input) and recording from the brain (output).

Traditionally, both input and output were handled by electrodes pulled from metal wires and glass tubing. Using conventional electrodes, multi-unit recordings can be constructed from multibarrelled pipettes. In addition to being fragile and bulky, the electrodes in these arrays are often too far apart, as most fine neural processes are only 0.1 to 2 µm apart.

Pickard describes a new type of electrode which circumvents many of the problems listed above. These printed-circuit microelectrodes (PCMs) are manufactured in the same manner as computer chips. The design of a chip is photoreduced to produce an image on a photosensitive glass plate, which is then used as a mask covering a UV-sensitive glass or plastic film.
A PCM has three essential elements:

1) the tissue terminals;
2) a circuit board controlling or reading from the terminals; and
3) an input/output controller-interpreter, such as a computer.
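
To make that division of labour concrete, the following is a minimal, purely illustrative Python sketch of the three-layer split: a stand-in for the tissue terminals and circuit board that yields blocks of multichannel voltage samples, and a stand-in for the controller-interpreter that reduces them to spike events. The channel count, sampling rate, and spike threshold are assumptions for illustration, not values from Pickard's design.

```python
# Illustrative only: a toy model of the three PCM layers described above.
# Channel count, sampling rate, and threshold are assumptions, not
# parameters from Pickard's design.
import numpy as np

N_CHANNELS = 16          # hypothetical number of tissue terminals
SAMPLE_RATE_HZ = 20_000  # hypothetical sampling rate of the circuit board
THRESHOLD_UV = 40.0      # hypothetical spike-detection threshold (microvolts)

def read_terminals(n_samples: int) -> np.ndarray:
    """Stand-in for the tissue terminals + circuit board: returns one block
    of multichannel voltage samples (here, simulated noise with a few spikes)."""
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 10.0, size=(N_CHANNELS, n_samples))
    spike_times = rng.integers(0, n_samples, size=(N_CHANNELS, 3))
    for ch, times in enumerate(spike_times):
        data[ch, times] += 80.0  # inject artificial spikes
    return data

def detect_spikes(block: np.ndarray) -> dict[int, np.ndarray]:
    """Stand-in for the I/O controller-interpreter: reduces raw samples to
    per-channel spike timestamps by simple thresholding."""
    return {ch: np.flatnonzero(block[ch] > THRESHOLD_UV) for ch in range(block.shape[0])}

if __name__ == "__main__":
    block = read_terminals(n_samples=SAMPLE_RATE_HZ)   # one second of data
    spikes = detect_spikes(block)
    for ch, idx in spikes.items():
        print(f"channel {ch:2d}: {len(idx)} spike(s) at sample indices {idx[:5]}")
```
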
Mesotechnology

Introduction
Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle, hence the technology spans a range of length scales as opposed to nanotechnology which is concerned only with the smallest atomic scales.

Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections, where many theories have been successfully applied. In the field of physics, for example, Quantum Mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian Mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual, while sociologists study the behavior of large societal groups; but what happens when only three people are interacting? This is the mesoscale.
Bio-inspired computing

Introduction
Bio-inspired computing is a field of study that loosely knits together subfields related to the topics of connectionism, social behaviour and emergence. It is often closely related to the field of artificial intelligence, as many of its pursuits can be linked to machine learning. It relies heavily on the fields of biology, computer science and mathematics. Briefly put, it is the use of computers to model nature, and simultaneously the study of nature to improve the usage of computers. Biologically-inspired computing is a major subset of natural computation.

One way in which bio-inspired computing differs from artificial intelligence (AI) is in how it takes a more evolutionary approach to learning, as opposed to what could be described as the creationist methods used in traditional AI.

In traditional AI, intelligence is often programmed from above: the programmer is the creator, and makes something and imbues it with its intelligence. Bio-inspired computing, on the other hand, takes a more bottom-up, decentralised approach; bio-inspired techniques often involve the method of specifying a set of simple rules, a set of simple organisms which adhere to those rules, and a method of iteratively applying those rules. After several generations of rule application it is usually the case that some forms of complex behaviour arise.
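
A classic (not source-specific) example of that bottom-up principle is Conway's Game of Life: a handful of rules, a grid of simple "organisms", and repeated application of the rules produce surprisingly complex behaviour. The Python sketch below is a minimal illustration of the idea.

```python
# Conway's Game of Life: complex behaviour emerging from simple rules
# applied iteratively to simple "organisms" on a toroidal grid.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Apply the rules once: a live cell survives with 2-3 live neighbours,
    a dead cell becomes live with exactly 3 live neighbours."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    grid = (rng.random((20, 20)) < 0.3).astype(int)   # random initial population
    for generation in range(50):                      # iterate the simple rules
        grid = step(grid)
    print("live cells after 50 generations:", int(grid.sum()))
```
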
Anomaly Detection

Introduction
Network intrusion detection systems often rely on matching patterns that are gleaned from known attacks. While this method is reliable and rarely produces false alarms, it has the obvious disadvantage that it cannot detect novel attacks. An alternative approach is to learn a model of normal traffic and report deviations, but these anomaly models are typically restricted to modeling IP addresses and ports, and do not include the application payload where many attacks occur. We describe a novel approach to anomaly detection.

We extract a set of attributes from each event (IP packet or TCP connection), including strings in the payload, and induce a set of conditional rules which have a very low probability of being violated in a nonstationary model of the normal network traffic in the training data. In the 1999 DARPA intrusion detection evaluation data set, we detect about 60% of 190 attacks at a false alarm rate of 10 per day (100 total). We believe that anomaly detection can work because most attacks exploit software or configuration errors that escaped field testing, and so are only exposed under unusual conditions.
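
As a deliberately simplified stand-in for the approach described above (the real system induces richer conditional rules over a nonstationary traffic model), the sketch below learns only "attribute X takes values from set S" rules from attack-free training events, and flags test events whose attributes take unseen values, weighting tightly constrained attributes more heavily. All attribute names and values are invented for illustration.

```python
# Simplified value-set anomaly rules: learn allowed values per attribute
# from training traffic, then score events by unseen attribute values.
from collections import defaultdict

def train(events: list[dict]) -> dict[str, set]:
    """Record the set of values observed for each attribute during training."""
    allowed = defaultdict(set)
    for event in events:
        for attr, value in event.items():
            allowed[attr].add(value)
    return dict(allowed)

def anomaly_score(event: dict, allowed: dict[str, set]) -> float:
    """Add a weight for every attribute whose value was never seen in training;
    attributes with few distinct training values (tighter rules) weigh more."""
    score = 0.0
    for attr, value in event.items():
        seen = allowed.get(attr, set())
        if value not in seen:
            score += 1.0 / (1 + len(seen))
    return score

if __name__ == "__main__":
    training = [
        {"dst_port": 80, "flags": "SYN", "payload_head": "GET /"},
        {"dst_port": 80, "flags": "SYN", "payload_head": "POST "},
        {"dst_port": 25, "flags": "SYN", "payload_head": "HELO "},
    ]
    rules = train(training)
    test = {"dst_port": 80, "flags": "SYN", "payload_head": "\x90\x90\x90\x90"}
    print("anomaly score:", anomaly_score(test, rules))
```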

Though our rule learning techniques are applied to network intrusion detection, they are general enough for detecting anomalies in other applications.
Automated Authentication of Identity Documents

Introduction
Identity documents (IDs), such as passports and drivers' licenses, are relied upon to deter fraud and stop terrorism. A multitude of document types and increased expertise in forgery make human inspection of such documents inconsistent and error prone. New-generation reader/authenticator technology can assist in the ID screening process. Such devices can read the information on the ID, authenticate it, and provide an overall security risk analysis. This talk will discuss how image processing and pattern recognition technology were used in the implementation of one such commercial device, the AssureTec i-Dentify reader. The reader is based on a high-resolution color CCD camera which automatically captures a presented ID under a variety of light sources (visible, UV, IR, and others) in a few seconds.

Automated processing of IDs involves a number of interesting technical challenges which will be discussed: sensing the presence of a document in the reader viewing area; cropping the document and extracting its size; identifying the document type by rapid comparison to a known document library; locating, extracting, and image processing of data fields of various types (text, photo, symbols, barcodes); processing text fields with appropriate OCR engines; cross-checking data from different parts of a document for consistency; checking for the presence of security features (e.g., UV patterns); and providing an overall risk assessment that the document is falsified.
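
To show how those stages fit together, here is a hypothetical pipeline sketch in Python. None of the function names come from the AssureTec i-Dentify product; they simply mirror the processing stages listed above, with trivial placeholder bodies so the flow is runnable.

```python
# Hypothetical ID-processing pipeline: stage names mirror the abstract above,
# bodies are placeholders standing in for real image processing / OCR engines.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    doc_type: str = "unknown"
    fields: dict = field(default_factory=dict)
    checks: dict = field(default_factory=dict)
    risk: float = 0.0

def process_document(image) -> Assessment:
    result = Assessment()
    cropped = crop_document(image)                    # sense/crop the document, get its size
    result.doc_type = classify_document(cropped)      # compare against a known document library
    result.fields = extract_fields(cropped)           # locate text/photo/barcode fields, run OCR
    result.checks["consistency"] = cross_check(result.fields)   # cross-check data fields
    result.checks["uv_pattern"] = has_uv_pattern(image)          # security-feature check
    result.risk = 0.0 if all(result.checks.values()) else 0.8   # toy risk aggregation
    return result

# Placeholder stages.
def crop_document(image): return image
def classify_document(image): return "drivers_license"
def extract_fields(image): return {"name": "JANE DOE", "dob": "1990-01-01"}
def cross_check(fields): return bool(fields)
def has_uv_pattern(image): return True

if __name__ == "__main__":
    print(process_document(image=b"raw-camera-frame"))
```
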
A live demonstration of the AssureTec i-Dentify reader will be given.
Beyond Power: Making Bioinformatics Tools User-centric

Introduction
Bioinformatics tools and software frameworks are becoming increasingly important in the analysis of genomic and biological data. To understand how to construct and design effective tools for biologists, both their interaction behavior and task flow need to be captured. In this presentation, we will begin by modeling the biologist, including experiences with current tools, and demonstrate how this information can be transformed into UI design patterns.

We will then integrate these results into an iterative pattern-oriented design process, inspired by traditional user-centered design methodologies. We will demonstrate how the interplay of task models and patterns can lead to user-centric bioinformatics tools, with the objective of making them more usable. An empirical study carried out with the NCBI (National Center for Biotechnology Information) site will be discussed, including future avenues of integrating task- and process-based approaches.
IMAX

Introduction
The IMAX (Image Maximum) system has its roots in Canada, at Expo 67 in Montreal, where multi-screen films were the hit of the fair. A small group of Canadian filmmakers, Graeme Ferguson, Roman Kroitor and Robert Kerr, decided to design a new system using a single, powerful projector rather than the cumbersome multiple projectors used at that time. The result was the IMAX motion picture projection system, which would revolutionize giant-screen cinema, delivering images on a screen four times the size of conventional movie screens.
Autonomic Computing

Introduction
IBM's proposed solution looks at the problem from the most important perspective: the end user's. How do IT customers want computing systems to function? They want to interact with them intuitively, and they want to have to be far less involved in running them. Ideally, they would like computing systems to pretty much take care of the mundane elements of management by themselves.

The most direct inspiration for this functionality that exists today is the autonomic function of the human central nervous system. Autonomic controls use motor neurons to send indirect messages to organs at a sub-conscious level. These messages regulate temperature, breathing, and heart rate without conscious thought. The implications for computing are immediately evident; a network of organized, smart computing components that give us what we need, when we need it, without a conscious mental or even physical effort.

IBM has named its vision for the future of computing "autonomic computing." This new paradigm shifts the fundamental definition of the technology age from one of computing to one defined by data. Access to data from multiple, distributed sources, in addition to traditional centralized storage devices, will allow users to transparently access information when and where they need it. At the same time, this new view of computing will necessitate changing the industry's focus on processing speed and storage to one of developing distributed networks that are largely self-managing, self-diagnostic, and transparent to the user.
This new computer paradigm means the design and implementation of computer systems, software, storage and support must exhibit these basic fundamentals from a user perspective:

Flexible - The system will be able to sift data via a platform- and device-agnostic approach.
Accessible - The nature of the autonomic system is that it is always on.
Transparent - The system will perform its tasks and adapt to a user's needs without dragging the user into the intricacies of its workings.
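
The abstract above stays at the level of vision, but the self-managing behaviour it describes is often sketched as a monitor-analyse-plan-execute control loop. The toy Python loop below is purely an illustrative assumption (the metric, thresholds, and scaling action are invented) and not part of IBM's proposal.

```python
# Toy sketch of a self-managing ("autonomic") control loop. The structure
# and all thresholds are illustrative assumptions.
import random
import time

TARGET_LATENCY_MS = 200.0   # assumed service-level objective

def monitor() -> float:
    """Stand-in sensor: a real system would read live metrics."""
    return random.uniform(50.0, 400.0)

def plan(latency_ms: float, replicas: int) -> int:
    """Decide, without operator involvement, whether to scale up or down."""
    if latency_ms > TARGET_LATENCY_MS and replicas < 10:
        return replicas + 1
    if latency_ms < TARGET_LATENCY_MS / 2 and replicas > 1:
        return replicas - 1
    return replicas

def execute(old: int, new: int) -> None:
    if new != old:
        print(f"self-management action: scaling from {old} to {new} replicas")

if __name__ == "__main__":
    replicas = 2
    for _ in range(5):                 # five iterations of the control loop
        latency = monitor()
        desired = plan(latency, replicas)
        execute(replicas, desired)
        replicas = desired
        time.sleep(0.1)
```
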
Lightweight Directory Access Protocol

Introduction
LDAP is actually a simple protocol that is used to access directory services. It is an open, vendor-neutral protocol for accessing directory information such as e-mail addresses and the public keys used for secure transmission of data. The information contained within an LDAP directory could be ASCII text files, JPEG photographs or sound files. One way to reduce the time taken to search for information is to replicate the directory information over different platforms, so that the process of locating a specific piece of data is streamlined and more resilient to failures of connections and computers. This is what is done with information in an LDAP structure.

LDAP, the Lightweight Directory Access Protocol, is an Internet protocol that runs over TCP/IP and that e-mail programs use to look up contact information from a server. A directory is a specialized database which is optimized for browsing, searching, locating and reading information. Thus LDAP makes it possible to obtain directory information such as e-mail addresses and public keys. LDAP can handle other information, but at present it is typically used to associate names with phone numbers and e-mail addresses.

LDAP is a directory structure and is completely based on entries for each piece of information. An entry is a collection of attributes that has a globally unique Distinguished Name (DN). The information in LDAP is arranged in a hierarchical, tree-like structure. LDAP services are implemented using a client-server architecture. There are options for referencing and accessing information within the LDAP structure; an entry is referenced by its distinguished name. Unlike directory structures that allow users access to all the available information, LDAP allows information to be accessed only after the user has been authenticated. It also supports privacy and integrity security services. There are two daemons for LDAP: slapd and slurpd.
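
As a concrete illustration of entries, DNs and authenticated access, the sketch below uses the open-source ldap3 Python library; the server address, bind DN, password, and directory layout are hypothetical examples rather than anything specified in the text.

```python
# Minimal sketch using the third-party ldap3 library (pip install ldap3).
# Server, bind DN, password, and directory layout below are hypothetical.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap.example.com", port=389)

# LDAP allows information to be accessed only after authenticating the user,
# so we bind with a DN and password before searching.
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Entries form a hierarchical tree; a search starts at a base DN and
# filters entries by attribute values.
conn.search(search_base="dc=example,dc=com",
            search_filter="(cn=Jane Doe)",
            search_scope=SUBTREE,
            attributes=["mail", "telephoneNumber"])

for entry in conn.entries:
    print(entry.entry_dn, entry.mail, entry.telephoneNumber)

conn.unbind()
```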

THE LDAP DOMAIN

THE COMPONENTS OF AN LDAP DOMAIN
A small domain may have a single LDAP server and a few clients. The server commonly runs slapd, which serves LDAP requests and updates data. The client software consists of system libraries that translate normal library calls into LDAP data requests and provide some form of update functionality. Larger domains may have several LDAP slaves (read-only replicas of a master read/write LDAP server). For large installations, the domain may be divided into subdomains, with referrals to 'glue' the subdomains together.

THE STRUCTURE OF AN LDAP DOMAIN
A simple LDAP domain is structured, on the surface, in a manner similar to an NIS domain: there are masters, slaves, and clients. The clients may query masters or slaves for information, but all updates must go to the masters. The 'domain name' under LDAP is slightly different from that under NIS; LDAP domains may use an organization name and country.

The clients may or may not authenticate themselves to the server when performing operations, depending on the configuration of the client and the type of information requested. Commonly, requests for non-sensitive information (such as port-to-service mappings) are unauthenticated, while password information requests and any updates are authenticated. Larger organizations may subdivide their LDAP domain into subdomains. LDAP allows for this type of scalability and uses 'referrals' to allow clients to be passed from one server to the next (the same method is used by slave servers to pass modification requests to the master).
Collective Intelligence

Introduction
Collective intelligence, as characterized by Tom Atlee, Douglas Engelbart, Cliff Joslyn, Francis Heylighen, Ron Dembo, and other theorists, is a working form of intelligence which overcomes groupthink and individual cognitive bias in order to allow a collective to cooperate on one process while maintaining reliable intellectual performance. In this context, it refers to robust consensus decision making, and may properly be considered a subfield of sociology.

Another CI pioneer, George Pór, author of The Quest for Collective Intelligence (1995), defined this phenomenon in his Blog of Collective Intelligence as the capacity of a human community to evolve toward higher-order complexity of thought, problem-solving and integration through collaboration and innovation.

A less anthropomorphic conception is that a large number of cooperating entities can cooperate so closely as to become indistinguishable from a single organism, achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action. These ideas are more closely explored in Society of Mind theory and sociobiology, as well as in biology proper. Another approach builds on work in sociology and anthropology of science as a foundation, e.g., Scientific Community Metaphor.
Face Recognition Using Techniques Based on Principal Component Analysis (PCA)

Introduction
Over the past few years, face recognition has become more significant. Although other identification methods such as fingerprint or iris recognition are currently in use, face recognition has its own applications.

Principal Component Analysis (PCA) has been widely used in pattern recognition applications, especially in the field of face recognition. In this seminar, several techniques based on standard PCA, Fisher Linear Discriminant (FLD), and Face Specific Subspace (FSS) will be discussed. We will review the idea developed in the 2D-PCA method and its application to the FSS approach.
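
As a minimal, assumption-laden sketch of the standard PCA ("eigenfaces") step only (the FLD, FSS and 2D-PCA variants discussed in the seminar are not reproduced here), the following Python code builds a face subspace with SVD and classifies a probe image by nearest neighbour; the image sizes and random "dataset" are placeholders.

```python
# Minimal eigenfaces-style sketch: standard PCA plus nearest-neighbour
# matching. Image sizes and the random "dataset" are placeholders.
import numpy as np

def fit_pca(faces: np.ndarray, n_components: int):
    """faces: (n_samples, height*width) matrix of flattened training images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Rows of Vt are the principal directions ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def project(faces: np.ndarray, mean_face: np.ndarray, components: np.ndarray):
    return (faces - mean_face) @ components.T

def nearest_neighbour(query: np.ndarray, gallery: np.ndarray, labels: list):
    distances = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(distances))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.random((40, 32 * 32))                   # 40 fake flattened face images
    labels = [f"person_{i // 4}" for i in range(40)]    # 10 subjects, 4 images each
    mean_face, comps = fit_pca(train, n_components=20)
    gallery = project(train, mean_face, comps)
    probe = project(train[5:6] + 0.01 * rng.random((1, 32 * 32)), mean_face, comps)
    print("predicted identity:", nearest_neighbour(probe[0], gallery, labels))
```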

For the experimental results, several tests have been performed on three face image databases: ORL, Yale, and UMIST. The approaches will be illustrated using a web-based face recognition program. The architecture and techniques used in the development phase (object-orientation model / .NET interoperability) will also be discussed. We conclude by discussing open problems that will encompass our future work.
Agile Software Development Methodologies

Introduction
A software methodology consists of rules and practices for creating computer programs. Heavyweight methodologies have many such rules and practices, and many documents are produced as a result of applying them. Following these methodologies requires discipline and time, i.e. they are bureaucratic.

In the last few years, some new methodologies have emerged as a reaction to these disciplined methodologies. These are known as agile (or lightweight) methodologies. An agile methodology has few rules and practices to follow; it lies between no process and too much process, providing just enough process. In this seminar, the agile methodologies will be introduced, their advantages and disadvantages in comparison to heavyweight methodologies will be discussed, and current research in this area will be mentioned.
Position-Based Routing

Introduction
Mobile ad-hoc networks (MANETs) are becoming more popular due to their ability to model interesting aspects of real wireless networks. It is easy and cheap to build such networks because no equipment or infrastructure is required besides the mobile devices themselves. A MANET is composed of a set of nodes distributed in the plane.

A signal sent by any node can be received by all the nodes within its transmission range, which is usually taken as one unit distance. One of the primary issues related to MANETs is the development of efficient routing algorithms which perform well in various practical situations.

One routing technique is called position-based routing. In this type of routing algorithm, the message or packet to be delivered is forwarded in the direction of the destination, assuming that the forwarding node knows the positions of all neighbors within its transmission range. This information on direct neighbors is gained from I-am-here messages that each node sends out periodically.
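
As an illustration of the basic idea (not of the AB algorithms the seminar introduces), the sketch below implements simple greedy position-based forwarding: each node hands the packet to the neighbour within unit range that is geographically closest to the destination, and gives up at a local minimum where no neighbour makes progress.

```python
# Simplified greedy position-based forwarding. Each node knows only its
# neighbours' positions (as gained from periodic I-am-here messages) and
# forwards to the neighbour closest to the destination.
import math

RANGE = 1.0   # unit-distance transmission range, as in the text

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def neighbours(node, nodes):
    return [n for n in nodes if n != node and dist(n, node) <= RANGE]

def greedy_route(source, destination, nodes):
    path, current = [source], source
    while current != destination:
        candidates = neighbours(current, nodes)
        if destination in candidates:
            path.append(destination)
            break
        # pick the neighbour geographically closest to the destination
        best = min(candidates, key=lambda n: dist(n, destination), default=None)
        if best is None or dist(best, destination) >= dist(current, destination):
            return None   # local minimum: greedy forwarding fails here
        path.append(best)
        current = best
    return path

if __name__ == "__main__":
    nodes = [(0, 0), (0.8, 0.2), (1.5, 0.5), (2.2, 0.4), (3.0, 0.0)]
    print(greedy_route(source=(0, 0), destination=(3.0, 0.0), nodes=nodes))
```
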
This seminar will introduce and analyze several wireless position-based routing techniques, including a class of recently developed position-based routing algorithms called AB (above-below) algorithms.
Souped-Up Mesh Networks

Introduction
In an effort to make a better wireless network, the Cambridge, MA-based company BBN Technologies announced last week that it has built a mesh network that uses significantly less power than traditional wireless networks, such as cellular and Wi-Fi, while achieving comparable data-transfer rates.

The technology, which is being funded by the Defense Advanced Research Projects Agency (DARPA), was developed to create ad hoc communication and surveillance networks on battlefields. But aspects of it are applicable to emergency or remote cell-phone networks, and could potentially even help to extend the battery life of consumer wireless devices, says Jason Redi, a scientist at BBN.

Mesh networks -- collections of wireless transmitters and receivers that send data hopping from one node to another, without the need of a centralized base station or tower -- are most often found in research applications, in which scientists deploy hordes of sensors to monitor environments from volcanoes to rainforests. In this setting, mesh networks are ideal because they can be deployed without a large infrastructure. Because they lack the need for costly infrastructure, mesh networks can also be used for bringing communication to remote areas where there isn't a reliable form of electricity. In addition, they can be established quickly, which is useful for building networks of phones or radios during a public emergency.
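
To make the hop-by-hop idea concrete, here is a toy, generic sketch (not BBN's protocol): nodes are connected only to neighbours within radio range, and a packet's relay path is found by breadth-first search over that neighbour graph.

```python
# Toy illustration of multi-hop relaying in a mesh: with no base station,
# a packet hops from neighbour to neighbour until it reaches its target.
from collections import deque

def shortest_hop_path(links: dict, src: str, dst: str):
    """links maps each node to the nodes within its radio range."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None   # destination unreachable

if __name__ == "__main__":
    # A small sensor mesh: "A" cannot reach "E" directly, only via relays.
    mesh = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }
    print(shortest_hop_path(mesh, "A", "E"))   # e.g. ['A', 'B', 'D', 'E']
```
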
While mesh networks have quite a bit of flexibility in where they can be deployed and how quickly, so far they've been less than ideal for a number of applications due to their power requirements and relatively slow data-transfer rates. All radios in a mesh network need to carry an onboard battery, and in order to conserve battery power, most low-power mesh networks send and receive data slowly, at about tens of kilobits per second. "You get the low power," says Redi, "but you also get poor performance."

Especially in military surveillance, the data rates need to be much faster. If a soldier has set up a network of cameras, for example, he or she needs to react to the video as quickly as possible. So, to keep the power consumption to a minimum and increase data-transfer rates, the BBN team modified both the hardware and software of their prototype network. The result is a mesh network that can send megabits of data per second across a network (typical rates for Wi-Fi networks, and good enough to stream video), using one-hundredth the power of traditional networks.