Computer Science Seminar Abstract And Report 10

A More Usable Approach To OS Design
Introduction
The fundamental purpose of an operating system (OS) is to enable a variety of programs to share a single computer efficiently and productively. This demands memory protection, preemptively scheduled timesharing, coordinated access to I/O peripherals, and other services. In addition, an OS can allow several users to share a computer. In this case, efficiency demands services that protect users from harming each other, enable them to share without prior arrangement, and mediate access to physical devices.

On today's computer systems, programmers usually implement these goals through a large program called the kernel. Since this program must be accessible to all user programs, it is the natural place to add functionality to the system. Since the only model for process interaction is that of specific, individual services provided by the kernel, no one creates other places to add functionality. As time goes by, more and more is added to the kernel.

A traditional system allows users to add components to a kernel only if they both understand most of it and have a privileged status within the system. Testing new components requires a much more painful edit-compile-debug cycle than testing other programs. It cannot be done while others are using the system. Bugs usually cause fatal system crashes, further disrupting others' use of the system. The entire kernel is usually non-pageable. (There are systems with pageable kernels, but deciding what can be paged is difficult and error prone. Usually the mechanisms are complex, making them difficult to use even when adding simple extensions.)

Because of these restrictions, functionality which properly belongs behind the wall of a traditional kernel is usually left out of systems unless it is absolutely mandatory. Many good ideas, best expressed through an open/read/write interface, cannot be implemented because of the problems inherent in the monolithic nature of a traditional system. Further, even among those with the endurance to implement new ideas, only those who are privileged users of their computers can do so. The software copyright system darkens the mire by preventing unlicensed people from even reading the kernel source. The Hurd removes these restrictions from the user. It provides a user-extensible system framework without giving up POSIX compatibility or the Unix security model.

When Richard Stallman founded the GNU project in 1983, he wanted to write an operating system consisting only of free software. Very soon, many of the essential tools were implemented and released under the GPL. However, one critical piece was missing: the kernel. After considering several alternatives, it was decided not to write a new kernel from scratch but to start with the Mach microkernel.
Intel Centrino Mobile Technology

Introduction
The world of mobile computing has seldom been so exciting. Not, at least, for the last three years, when all the chip giants could think of was scaling down the frequency and voltage of desktop CPUs and labeling them as mobile processors. Intel Centrino mobile technology is based on the understanding that mobile customers value the four vectors of mobility: performance, battery life, small form factor, and wireless connectivity. The technologies represented by the Intel Centrino brand include an Intel Pentium-M processor, the Intel 855 chipset family, and an Intel PRO/Wireless 2100 network connection.

The Intel Pentium-M processor is a higher performance, lower power mobile processor with several micro-architectural enhancements over existing Intel mobile processors. Key features of the Intel Pentium-M processor micro-architecture include Dynamic Execution, a 400-MHz processor system bus, an on-die 1-MB second-level cache with Advanced Transfer Cache Architecture, Streaming SIMD Extensions 2, and Enhanced Intel SpeedStep technology.

Intel Centrino mobile technology also includes the 855GM chipset components, the GMCH and the ICH4-M. The Accelerated Hub Architecture is designed into the chipset to provide an efficient, high-bandwidth communication channel between the GMCH and the ICH4-M. The GMCH component contains a processor system bus controller, a graphics controller, and a memory controller, while providing an LVDS interface and two DVO ports.

The integrated Wi-Fi Certified Intel PRO/Wireless 2100 Network Connection has been designed and validated to work with all of the Intel Centrino mobile technology components and is able to connect to 802.11b Wi-Fi certified access points. It also supports advanced wireless LAN security including Cisco LEAP, 802.1X and WEP. Finally, for comprehensive security support, the Intel PRO/Wireless 2100 Network Connection has been verified with leading VPN suppliers like Cisco, CheckPoint, Microsoft and Intel NetStructure.
Pentium-M Processor

Introduction
The Intel Pentium-M processor is a high performance, low power mobile processor with several micro-architectural enhancements over existing Intel mobile processors. The following list provides some of the key features on this processor:

• Supports Intel Architecture with Dynamic Execution
• High performance, low-power core
• On-die, 1-MByte second level cache with Advanced Transfer Cache Architecture
• Advanced Branch Prediction and Data Prefetch Logic
• Streaming SIMD Extensions 2 (SSE2)
• 400-MHz, Source-Synchronous processor system bus
• Advanced Power Management features including Enhanced Intel SpeedStep technology
• Micro-FCPGA and Micro-FCBGA packaging technologies

The Intel Pentium-M processor is manufactured on Intel's advanced 0.13 micron process technology with copper interconnect. The processor maintains support for MMX technology and Internet Streaming SIMD instructions, and full compatibility with IA-32 software. The high performance core features architectural innovations like Micro-op Fusion and Advanced Stack Management that reduce the number of micro-ops handled by the processor. This results in more efficient scheduling and better performance at lower power.

The on-die 32-kB Level 1 instruction and data caches and the 1-MB Level 2 cache with Advanced Transfer Cache Architecture enable significant performance improvement over existing mobile processors. The processor also features a very advanced branch prediction architecture that significantly reduces the number of mispredicted branches. The processor's Data Prefetch Logic speculatively fetches data to the L2 cache before an L1 cache request occurs, resulting in reduced bus cycle penalties and improved performance.
MPEG Video Compression

Introduction
MPEG is the famous four-letter word which stands for the "Moving Picture Experts Group". To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to inverse represent the compact tokens back into something resembling the original stream of samples. These semantics are merely a collection of rules (which people like to call algorithms, but that would imply there is a mathematical coherency to a scheme cooked up by trial and error). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.

MPEG is an institution unto itself as seen from within its own universe. When (unadvisedly) placed in the same room, its inhabitants can spontaneously erupt into a blood-letting debate, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt will appear to remain totally indifferent to critical issues set before them for many months. It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (JVC, Japan). The excellent byproduct of the successful MPEG process became an International Standards document safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).

Pre MPEG

Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (which always boils down to visual tweaking and implementation tradeoffs), you never know what's really behind any such scheme (other than a lot of marketing hype). Seeing this threat, that is, the need for world interoperability, the Fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs… and maybe it would be useful for other things too.

MPEG borrowed significantly from JPEG and, more directly, H.261. By the end of the third year (1990), a syntax emerged which, when applied to represent SIF-rate video and compact disc-rate audio at a combined bitrate of 1.5 Mbit/sec, approximated the pleasure-filled viewing experience offered by the standard VHS format.
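To put that 1.5 Mbit/sec figure in perspective, a rough back-of-the-envelope calculation (a sketch only; it assumes NTSC-style SIF of 352 x 240 pixels at 30 frames/sec with 4:2:0 chroma sampling, i.e. 12 bits per pixel, and a video share of roughly 1.15 Mbit/sec out of the combined budget) shows the order of compression the syntax has to deliver:

    # Rough sketch: raw SIF bitrate versus the ~1.15 Mbit/s MPEG-1 video budget.
    # Assumes NTSC-style SIF (352x240 @ 30 fps) with 4:2:0 sampling (12 bits/pixel).
    width, height, fps = 352, 240, 30
    bits_per_pixel = 12                       # 8 bits luma + 4 bits chroma on average

    raw_bps = width * height * fps * bits_per_pixel
    video_budget_bps = 1_150_000              # assumed video share of the ~1.5 Mbit/s total

    print(f"raw SIF video:     {raw_bps / 1e6:.1f} Mbit/s")          # ~30.4 Mbit/s
    print(f"compression ratio: {raw_bps / video_budget_bps:.0f}:1")  # ~26:1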

After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television), not to mention the side benefits: frequent flier miles.
Survivable Network Systems

Introduction
Survivability In Network Systems

Contemporary large-scale networked systems that are highly distributed improve the efficiency and effectiveness of organizations by permitting whole new levels of organizational integration. However, such integration is accompanied by elevated risks of intrusion and compromise. These risks can be mitigated by incorporating survivability capabilities into an organization's systems. As an emerging discipline, survivability builds on related fields of study (e.g., security, fault tolerance, safety, reliability, reuse, performance, verification, and testing) and introduces new concepts and principles. Survivability focuses on preserving essential services in unbounded environments, even when systems in such environments are penetrated and compromised.

The New Network Paradigm: Organizational Integration

From their modest beginnings some 20 years ago, computer networks have become a critical element of modern society. These networks not only have global reach, they also have impact on virtually every aspect of human endeavor. Network systems are principal enabling agents in business, industry, government, and defense. Major economic sectors, including defense, energy, transportation, telecommunications, manufacturing, financial services, health care, and education, all depend on a vast array of networks operating on local, national, and global scales. This pervasive societal dependency on networks magnifies the consequences of intrusions, accidents, and failures, and amplifies the critical importance of ensuring network survivability.

As organizations seek to improve efficiency and competitiveness, a new network paradigm is emerging. Networks are being used to achieve radical new levels of organizational integration. This integration obliterates traditional organizational boundaries and transforms local operations into components of comprehensive, network-resident business processes. For example, commercial organizations are integrating operations with business units, suppliers, and customers through large-scale networks that enhance communication and services.

These networks combine previously fragmented operations into coherent processes open to many organizational participants. This new paradigm represents a shift from bounded networks with central control to unbounded networks. Unbounded networks are characterized by distributed administrative control without central authority, limited visibility beyond the boundaries of local administration, and lack of complete information about the network. At the same time, organizational dependencies on networks are increasing and risks and consequences of intrusions and compromises are amplified.

The Definition of Survivability

We define survivability as the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. We use the term system in the broadest possible sense, including networks and large-scale systems of systems. The term mission refers to a set of very high-level (i.e., abstract) requirements or goals.

Missions are not limited to military settings since any successful organization or project must have a vision of its objectives whether expressed implicitly or as a formal mission statement. Judgments as to whether or not a mission has been successfully fulfilled are typically made in the context of external conditions that may affect the achievement of that mission. For example, assume that a financial system shuts down for 12 hours during a period of widespread power outages caused by a hurricane.

If the system preserves the integrity and confidentiality of its data and resumes its essential services after the period of environmental stress is over, the system can reasonably be judged to have fulfilled its mission. However, if the same system shuts down unexpectedly for 12 hours under normal conditions (or under relatively minor environmental stress) and deprives its users of essential financial services, the system can reasonably be judged to have failed its mission, even if data integrity and confidentiality are preserved.
Self Organizing Maps

Introduction
These notes provide an introduction to unsupervised neural networks, in particular Kohonen self-organizing maps, together with some fundamental background material on statistical pattern recognition. One question which seems to puzzle many of those who encounter unsupervised learning for the first time is how anything useful can be achieved when input information is simply poured into a black box, with no provision of any rules as to how this information should be stored and no examples of the various groups into which this information can be placed. If the information is sorted on the basis of how similar one input is to another, then we will have accomplished an important step in condensing the available information by developing a more compact representation.

We can represent this information, and any subsequent information, in a much reduced fashion. We will know which information is more likely. This black box will certainly have learned. It may permit us to perceive some order in what otherwise was a mass of unrelated information: to see the wood for the trees.

In any learning system, we need to make full use of all the available data and to impose any constraints that we feel are justified. If we know what groups the information must fall into, that certain combinations of inputs preclude others, or that certain rules underlie the production of the information, then we must use them. Often, we do not possess such additional information. Consider two examples of experiments: one designed to test a particular hypothesis, say, to determine the effects of alcohol on driving; the second to investigate any possible connection between car accidents and the driver's lifestyle. In the first experiment, we could arrange a laboratory-based experiment where volunteers took measured amounts of alcohol and then attempted some motor-skill activity (e.g., following a moving light on a computer screen by moving the mouse). We could collect the data (i.e., amount of alcohol vs. error rate on the computer test), conduct the customary statistical test and, finally, draw our conclusions. Our hypothesis may be that the more alcohol consumed, the greater the error rate; we can confirm this on the basis of this experiment. Note that we cannot prove the relationship, only state that we are 99% certain (or whatever level we set ourselves) that the result is not due purely to chance.

The second experiment is much more open-ended (indeed, it could be argued that it is not really an experiment). Data is collected from a large number of drivers, both those that have been involved in accidents and those that have not. This data could include the driver's age, occupation, health details, drinking habits, etc. From this mass of information, we can attempt to discover any possible connections. A number of conventional statistical tools exist to support this (e.g., factor analysis). We may discover possible relationships, including one between accidents and drinking, but perhaps many others as well. There could be a number of leads that need following up. Both approaches are valid in searching for causes underlying road accidents. This second experiment can be considered as an example of unsupervised learning.

The next section provides some introductory background material on statistical pattern recognition. The terms and concepts will be useful in understanding the later material on unsupervised neural networks. As the approach underlying unsupervised networks is the measurement of how similar (or different) various inputs are, we need to consider how the distances between these inputs are measured. This forms the basis of Section Three, together with a brief description of non-neural approaches to unsupervised learning. Section Four discusses the background to and basic algorithm of Kohonen self-organizing maps. The next section details some of the properties of these maps and introduces several useful practical points. The final section provides pointers to further information on unsupervised neural networks.
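As a concrete illustration of the ideas these notes build towards (a minimal sketch written for this summary, not taken from the original notes; the grid size, learning-rate schedule and Gaussian neighbourhood are illustrative choices), the core of the Kohonen algorithm can be written in a few lines: find the map unit whose weight vector is closest to the input, then pull that winner and its grid neighbours towards the input.

    import numpy as np

    def train_som(data, rows=10, cols=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Minimal Kohonen SOM; data is an (n_samples, n_features) array."""
        rng = np.random.default_rng(seed)
        weights = rng.random((rows, cols, data.shape[1]))
        # Grid coordinates of every unit, used by the neighbourhood function.
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

        for epoch in range(epochs):
            # Learning rate and neighbourhood width shrink over time.
            lr = lr0 * (1 - epoch / epochs)
            sigma = sigma0 * (1 - epoch / epochs) + 1e-3
            for x in rng.permutation(data):
                # 1. Winner: unit whose weight vector is closest (Euclidean) to x.
                dists = np.linalg.norm(weights - x, axis=-1)
                winner = np.unravel_index(np.argmin(dists), dists.shape)
                # 2. Neighbourhood: Gaussian falloff with grid distance from the winner.
                grid_dist = np.linalg.norm(grid - np.array(winner), axis=-1)
                h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
                # 3. Update: move weights towards x, scaled by neighbourhood and lr.
                weights += lr * h[..., None] * (x - weights)
        return weights

    # Example: organise 500 random 3-D points (e.g. RGB colours) onto a 10x10 map.
    som = train_som(np.random.default_rng(1).random((500, 3)))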
Mobile IP

Introduction
While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate the increasing use of mobile computers. A promising technology for eliminating this barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most business computer applications while still providing a reasonable user experience.

However, 3G networks are not based on only one standard, but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that the mobile user from time to time will also want to connect to fixed broadband networks, wireless LANs and mixtures of new technologies, such as Bluetooth associated with, for example, cable TV and DSL access points.

In this light, a common macro mobility management framework is required in order to allow mobile users to roam between different access networks with little or no manual intervention. (Micro mobility issues such as radio specific mobility enhancements are supposed to be handled within the specific radio technology.) IETF has created the Mobile IP standard for this purpose.

Mobile IP differs from other efforts at mobility management in that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, radio resource and mobility management was integrated vertically into one system. The same is also true for mobile packet data standards such as CDPD (Cellular Digital Packet Data) and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility management property is also inherent in the increasingly popular 802.11 Wireless LAN standard.

Mobile IP can be seen as the least common mobility denominator, providing seamless macro mobility solutions among the diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting. Depending on which network the mobile client is currently visiting, its point of attachment (Foreign Agent) may change. At each point of attachment, Mobile IP either requires the availability of a standalone Foreign Agent or the usage of a co-located care-of address in the mobile client itself.
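As a toy illustration of the Home Agent's role (schematic only; the real protocol uses registration messages and IP-in-IP or GRE tunnelling, and the class and field names below are invented for this sketch), the agent essentially maintains a binding between the client's permanent home address and its current care-of address and forwards traffic accordingly:

    import time

    class HomeAgent:
        """Toy model of a Mobile IP Home Agent's binding table (illustrative only)."""

        def __init__(self):
            self.bindings = {}  # home address -> (care-of address, expiry time)

        def register(self, home_addr, care_of_addr, lifetime=300):
            # Called when the mobile node (re-)registers from a new point of attachment.
            self.bindings[home_addr] = (care_of_addr, time.time() + lifetime)

        def forward(self, packet):
            # Packets arriving for the home address are tunnelled to the care-of address.
            entry = self.bindings.get(packet["dst"])
            if entry and entry[1] > time.time():
                care_of, _ = entry
                return {"outer_dst": care_of, "payload": packet}  # encapsulation, schematically
            return packet  # no valid binding: deliver on the home network as usual

    ha = HomeAgent()
    ha.register("10.0.0.7", "192.0.2.33")            # client roams to a visited network
    print(ha.forward({"dst": "10.0.0.7", "data": "hello"}))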

The concept of "mobility" or "packet data mobility" means different things depending on the context in which the word is used. In a wireless or fixed environment, there are many different ways of implementing partial or full mobility and roaming services. The most common ways of implementing mobility (discrete mobility or IP roaming service) support in today's IP networking environments include simple "PPP dial-up" as well as company-internal mobility solutions implemented by means of renewal of the IP address at each new point of attachment. The most commonly deployed way of supporting remote access users in today's Internet is to utilize the public telephone network (fixed or mobile) and to use the PPP dial-up functionality.

Iris Scanning

Introduction
In today's information age it is not difficult to collect data about an individual and use that information to exercise control over the individual. Individuals generally do not want others to have personal information about them unless they decide to reveal it. With the rapid development of technology, it is more difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an inevitable feature. Conventional methods of identification based on possession of ID cards or exclusive knowledge, like a social security number or a password, are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten.

As a result, an unauthorized user may be able to break into an account with little effort, so there is a need to ensure that unauthorized persons are denied access to classified data. Biometric technology has now become a viable alternative to traditional identification systems because of its tremendous accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioral characteristics.

Since the persons to be identified must be physically present at the point of identification, biometric techniques give high security for sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores the concept of iris recognition, which is one of the most popular biometric techniques. This technology finds applications in diverse fields.
Biometrics - Future Of Identity

Introduction
Biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components:

1. An automated mechanism that scans and captures a digital or analog image of a living personal characteristic.
2. Compression, processing, storage and comparison of the image with stored data.
3. Interfaces with application systems.
A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During an enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image these include the various visible characteristics of the iris such as contraction furrows, pits, rings, etc. The template for each user is stored in a biometric system database.

The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts them into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match. The identification can take the form of verification (authenticating a claimed identity) or recognition (determining the identity of a person from a database of known persons). In a verification system, when the captured characteristic and the stored template of the claimed identity are the same, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic and one of the stored templates are the same, the system identifies the person with the matching template.
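The verification path described above can be sketched schematically as follows (the bit-vector template format, the Hamming-distance matcher and the 0.32 threshold are illustrative assumptions, roughly in the spirit of iris-code matching, not the algorithm of any particular product):

    import numpy as np

    def hamming_distance(t1, t2):
        """Fraction of disagreeing bits between two equal-length binary templates."""
        return np.count_nonzero(t1 != t2) / t1.size

    def verify(captured, enrolled, threshold=0.32):
        """Accept the claimed identity if the templates are sufficiently similar."""
        return hamming_distance(captured, enrolled) <= threshold

    # Toy example with a random 2048-bit template (real templates come from the
    # feature extractor, e.g. an iris code).
    rng = np.random.default_rng(0)
    enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
    captured = enrolled.copy()
    captured[rng.choice(2048, 100, replace=False)] ^= 1   # simulate ~5% sensor noise
    print(verify(captured, enrolled))                      # True: claimed identity accepted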
LWIP

Introduction
Over the last few years, interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN, are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and the processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

The Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low speed networks such as the ARPANET, the Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow since a large number of applications using the Internet technology have been developed. Also, the large connectivity of the global Internet is a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to deal with having limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.

Overview

As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the implementation of lwIP. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header, and can therefore extract this information by itself.
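For illustration, the Internet checksum that a stack such as lwIP must compute over IP, ICMP, UDP and TCP data is the one's-complement sum of 16-bit words defined in RFC 1071; lwIP implements it in C, so the Python sketch below only models the arithmetic:

    def internet_checksum(data: bytes) -> int:
        """One's-complement sum of 16-bit words (RFC 1071), as used by IP, ICMP, UDP and TCP."""
        if len(data) % 2:               # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back into the low 16 bits
        return ~total & 0xFFFF

    # Example: checksum over an arbitrary byte string.
    print(hex(internet_checksum(b"hello world")))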

lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP) a number of support modules are implemented.

The support modules consist of:

" The operating system emulation layer (described in Chapter3)
" The buffer and memory management subsystems (described in Chapter 4)
" Network interface functions (described in Chapter 5)
" Functions for computing Internet checksum (Chapter 6)
" An abstract API (described in Chapter 8 )
Smart card

Introduction
This seminar gives some basic concepts about smart cards. The physical and logical structure of the smart card and the corresponding security access control are discussed. It is believed that smart cards offer more security and confidentiality than other kinds of information or transaction storage. Moreover, applications built on smart card technologies are illustrated, demonstrating that the smart card is one of the best solutions for providing and enhancing systems with security and integrity.

The seminar also covers the contactless type of smart card briefly. Different kinds of schemes to organise and access multiple-application smart cards are discussed. The first and second schemes are practical and workable today, and there are real applications developed using those models. For the third one, multiple independent applications in a single card, there is still a long way to go to make it feasible, for several reasons.

At the end of the paper, an overview of attack techniques on the smart card is discussed as well. The existence of these attacks does not mean that the smart card is insecure. It is important to realise that attacks against any secure system are nothing new or unique. Any system or technology claiming to be 100% secure is making an irresponsible claim. The main consideration in determining whether a system is secure or not is whether the level of security can meet the requirements of the system.

The smart card is one of the latest additions to the world of information technology. Similar in size to today's plastic payment card, the smart card has a microprocessor or memory chip embedded in it that, when coupled with a reader, has the processing power to serve many different applications. As an access-control device, smart cards make personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience. Smart cards come in two varieties: memory and microprocessor.

Memory cards simply store data and can be viewed as a small floppy disk with optional security. A microprocessor card, on the other hand, can add, delete and manipulate information in its memory on the card. Similar to a miniature computer, a microprocessor card has an input/output port, an operating system and a hard disk with built-in security features. On a fundamental level, microprocessor cards are similar to desktop computers: they have operating systems, they store data and applications, they compute and process information and they can be protected with sophisticated security tools. The self-containment of the smart card makes it resistant to attack, as it does not need to depend upon potentially vulnerable external resources. Because of this characteristic, smart cards are often used in applications which require strong security protection and authentication.
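Communication with a microprocessor card takes place through short command/response pairs called APDUs. As an illustration (the command bytes below are the standard ISO 7816-4 SELECT-by-name header, but the application identifier is a made-up placeholder), selecting an application and checking for the success status word looks roughly like this:

    # Sketch of an ISO 7816-4 command APDU: CLA, INS, P1, P2, Lc, data field.
    # The AID below is a placeholder, not a real application identifier.
    aid = bytes.fromhex("A0000000010101")
    select_apdu = bytes([0x00,        # CLA: inter-industry class
                         0xA4,        # INS: SELECT
                         0x04, 0x00,  # P1, P2: select by DF name (AID)
                         len(aid)]) + aid

    # A card that finds the application replies with status word SW1 SW2 = 0x90 0x00.
    expected_ok = bytes([0x90, 0x00])
    print(select_apdu.hex(), expected_ok.hex())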
Quantum Information Technology

Introduction
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This document aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the paper begins with an introduction to classical information theory. The principles of quantum mechanics are then outlined.

The EPR-Bell correlation and quantum entanglement in general, form the essential new ingredient, which distinguishes quantum from classical information theory, and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource.

Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, superconducting cavities, nuclear magnetic resonance (NMR) based techniques, and quantum dots. "Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 tubes and weigh only 1 1/2 tons" (Popular Mechanics, March 1949).

Now, if this seems like a joke, wait a second. "Tomorrow's computer might well resemble a jug of water." This, for sure, is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another: from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement.

Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new quantum technology must replace or supplement what we have now.

Quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!
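To make the contrast with classical bits concrete, here is a tiny, purely illustrative sketch of the most basic quantum resource the text alludes to: a qubit placed in an equal superposition by a Hadamard gate, so that a measurement yields 0 or 1 with probability 1/2 each.

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)          # basis state |0>, like a classical bit
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

    psi = H @ ket0                                  # |psi> = (|0> + |1>) / sqrt(2)
    probabilities = np.abs(psi) ** 2                # Born rule: measurement statistics
    print(probabilities)                            # [0.5 0.5] - a fair quantum coin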
Asynchronous Chips

Introduction
Computer chips of today are synchronous: they contain a main clock which controls the timing of the entire chip. There are problems, however, involved with these clocked designs that are common today. One problem is speed. A chip can only work as fast as its slowest component. Therefore, if one part of the chip is especially slow, the other parts of the chip are forced to sit idle. This wasted computation time is obviously detrimental to the speed of the chip.

New problems with speeding up a clocked chip are just around the corner. Clock frequencies are getting so fast that signals can barely cross the chip in one clock cycle. When we get to the point where the clock cannot drive the entire chip, we'll be forced to come up with a solution. One possible solution is a second clock, but this will incur overhead and power consumption, so this is a poor solution. It is also important to note that doubling the frequency of the clock does not double the chip speed, therefore blindly trying to increase chip speed by increasing frequency without considering other options is foolish.

The other major problem with clocked design is power consumption. The clock consumes more power than any other component of the chip. The most disturbing thing about this is that the clock serves no direct computational use: a clock does not perform operations on information; it simply orchestrates the computational parts of the computer.
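The usual first-order model behind this concern is that dynamic power grows with the switched capacitance, the square of the supply voltage and the clock frequency (P = a * C * V^2 * f). The sketch below uses invented numbers purely to illustrate the trend; they are not measurements of any real chip.

    def dynamic_power(activity, capacitance_f, voltage_v, frequency_hz):
        """First-order CMOS dynamic power model: P = a * C * V^2 * f."""
        return activity * capacitance_f * voltage_v ** 2 * frequency_hz

    # Illustrative (made-up) numbers: doubling the clock doubles dynamic power,
    # and the clock tree itself toggles every cycle, so it pays this cost constantly.
    print(dynamic_power(0.1, 1e-9, 1.2, 1e9))   # logic at 1 GHz
    print(dynamic_power(0.1, 1e-9, 1.2, 2e9))   # same logic at 2 GHz: twice the power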

New problems with power consumption are arising. As the number of transistors on a chip increases, so does the power used by the clock. Therefore, as we design more complicated chips, power consumption becomes an even more crucial topic. Mobile electronics are the target for many chips.

These chips need to be even more conservative with power consumption in order to have a reasonable battery lifetime. The natural solution to the above problems, as you may have guessed, is to eliminate the source of these headaches: the clock.
Cellular through remote control switch

Introduction
Cellular through remote control switch implies control of devices at a remote location via a circuit interfaced to the remote telephone line/device, by dialing specific DTMF (dual-tone multi-frequency) digits from a local telephone. This project, Cellular through remote control switch, has the following features:

1. It can control multiple loads (on/off/status of each load).
2. It provides feedback when the circuit is in an energized state and also sends an acknowledgement indicating the action taken when each load is switched on and when all loads are switched off (together).
3. It can selectively switch on any one or more loads one after the other and switch off all loads simultaneously.

OPERATION

1. Dial the phone number - an OK tone is produced
2. Password - 4321
3. Load number - 1, 2, 3, 4
4. Control number - 9/on, 0/off, #/status

When the phone number is dialed, the ring detector senses the ring and the auto-lifter operates after some time. When the auto-lifter operates, an OK tone is produced. Then the password is entered. The password is 123451. Then, to check the status of the corresponding load, enter # and the load number. To switch on a load, enter 9 and the load number. To switch off a load, enter 0 and the load number. The whole operation must be completed within 3 minutes; after 3 minutes the operation times out.
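As a sketch of the control logic implied by the list above (the relay handling and the tone-decoder interface are assumptions, not the project's actual firmware; the four-digit password from step 2 is used here for illustration), the digit handling could look like this:

    PASSWORD = "4321"                     # illustrative; step 2 of the operation list
    LOADS = {d: False for d in "1234"}    # load number -> on/off state

    def handle_command(digits):
        """Interpret a DTMF sequence: password, then control digit, then load number."""
        if len(digits) < len(PASSWORD) + 2:
            return "incomplete command"
        if not digits.startswith(PASSWORD):
            return "wrong password"
        control, load = digits[len(PASSWORD)], digits[len(PASSWORD) + 1]
        if load not in LOADS:
            return "unknown load"
        if control == "9":                # 9 = switch the load on
            LOADS[load] = True
            return f"load {load} switched on"
        if control == "0":                # 0 = switch the load off
            LOADS[load] = False
            return f"load {load} switched off"
        if control == "#":                # # = report the load's status
            return f"load {load} is {'on' if LOADS[load] else 'off'}"
        return "invalid control digit"

    print(handle_command("432191"))       # switch load 1 on
    print(handle_command("4321#1"))       # query load 1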