Computer Science Seminar Abstract And Report 3
#1

Symfony

Introduction
Symfony is a web application framework for PHP5 projects.

It aims to speed up the creation and maintenance of web applications, and to replace repetitive coding tasks with power, control and pleasure.

The very small number of prerequisites makes symfony easy to install on any configuration; you just need Unix or Windows with a web server and PHP 5 installed. It is compatible with almost every database system. In addition, it has very small overhead, so the benefits of the framework don't come at the cost of an increase in hosting costs.

Using symfony is so natural and easy for people used to PHP and the design patterns of Internet applications that the learning curve is reduced to less than a day. The clean design and code readability will keep your development cycles short. Developers can apply agile development principles (such as DRY, KISS or the XP philosophy) and focus on application logic without losing time writing endless XML configuration files.

Symfony is aimed at building robust applications in an enterprise context. This means that you have full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match your enterprise's development guidelines, symfony is bundled with additional tools that help you test, debug and document your project. Last but not least, by choosing symfony you get the benefits of an active open-source community. It is entirely free and published under the MIT license.

Symfony is sponsored by Sensio, a French Web Agency.
Wi-Fi
Wi-Fi

Introduction
The typical Wi-Fi setup contains one or more Access Points (APs) and one or more clients. An AP broadcasts its SSID (Service Set Identifier, or network name) via packets called beacons, which are broadcast every 100 ms. Beacons are transmitted at 1 Mbit/s and are relatively short, so they have little influence on performance. Since 1 Mbit/s is the lowest Wi-Fi rate, a client that receives a beacon is assured it can communicate at at least 1 Mbit/s.

Based on the settings (i.e. the SSID), the client may decide whether to connect to an AP. The firmware running on the client's Wi-Fi card also plays a role: if two APs with the same SSID are in range of the client, the firmware may decide, based on signal strength (signal-to-noise ratio), which of the two it will connect to. The Wi-Fi standard leaves connection criteria and roaming entirely open to the client. This is a strength of Wi-Fi, but it also means that one wireless adapter may perform substantially better than another. Since Windows XP, a feature called Zero Configuration shows the user any available network and lets the end user connect to it on the fly. In the future, wireless cards will be more and more controlled by the operating system.
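The AP-selection behaviour described above can be sketched in a few lines: among beacons advertising the requested SSID, pick the AP with the strongest signal. This is a minimal illustration, not how any particular firmware is implemented, and the beacon data below is hypothetical.

```python
def choose_ap(beacons, ssid):
    """Return the BSSID of the strongest AP advertising `ssid`, or None."""
    candidates = [b for b in beacons if b["ssid"] == ssid]
    if not candidates:
        return None
    # Roaming criteria are left open by the standard; here we use raw signal.
    best = max(candidates, key=lambda b: b["signal_dbm"])
    return best["bssid"]

beacons = [
    {"ssid": "CampusNet", "bssid": "00:11:22:33:44:55", "signal_dbm": -72},
    {"ssid": "CampusNet", "bssid": "66:77:88:99:aa:bb", "signal_dbm": -58},
    {"ssid": "OtherNet",  "bssid": "cc:dd:ee:ff:00:11", "signal_dbm": -40},
]
print(choose_ap(beacons, "CampusNet"))  # picks the stronger CampusNet AP
```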

Microsoft's newest feature, called SoftMAC, will take over functions from on-board firmware; roaming criteria will then be totally controlled by the operating system. Because Wi-Fi transmits over the air, it has the same properties as a non-switched Ethernet network; collisions can therefore occur, just as in non-switched Ethernet LANs.

Wi-Fi vs. cellular

Some argue that Wi-Fi and related consumer technologies hold the key to replacing cellular telephone networks such as GSM. Some obstacles to this happening in the near future are the missing roaming and authentication features (see 802.1x, SIM cards and RADIUS), the narrowness of the available spectrum, and the limited range of Wi-Fi. It is more likely that WiMAX will compete with cellular phone protocols such as GSM, UMTS or CDMA. However, Wi-Fi is ideal for VoIP applications in a corporate LAN or SOHO environment. Early products were already available in the late '90s, though the market did not explode until 2005. Companies such as ZyXEL, UTStarcom, Samsung, Hitachi and many more offer VoIP Wi-Fi phones at reasonable prices.

In 2005, ADSL ISPs started to offer VoIP services to their customers (e.g. the Dutch ISP XS4ALL). Since calling via VoIP is low-cost and often free, VoIP-enabled ISPs have the potential to open up the VoIP market. GSM phones with integrated Wi-Fi and VoIP capabilities are being introduced into the market and have the potential to replace landline telephone services.

Currently it seems unlikely that Wi-Fi will directly compete against cellular. Wi-Fi-only phones have a very limited range, and so setting up a covering network would be too expensive. Therefore these kinds of phones may be best reserved for local use such as corporate networks. However, devices capable of multiple standards may well compete in the market.

Commercial Wi-Fi

Commercial Wi-Fi services are available in places such as Internet cafes, coffee houses and airports around the world (commonly called Wi-Fi-cafés), although coverage is patchy in comparison with cellular:

• Ozone and OzoneParis: in France, in September 2003, Ozone started deploying the OzoneParis network across the City of Light. The objective: to construct a wireless metropolitan network with full Wi-Fi coverage of Paris. The Ozone Pervasive Network philosophy is designed to operate on a nationwide scale.

• WiSE Technologies provides commercial hotspots for airports, universities, and independent cafes in the US;

• T-Mobile provides hotspots in many Starbucks locations in the U.S. and UK;

• Pacific Century Cyberworks provides hotspots in Pacific Coffee shops in Hong Kong;

• a Columbia Rural Electric Association subsidiary offers 2.4 GHz Wi-Fi service across a 3,700 mi² (9,500 km²) region within Walla Walla and Columbia counties in Washington and Umatilla County, Oregon;

• other large hotspot providers in the U.S. include Boingo, Wayport and iPass;

• Sify, an Indian Internet service provider, has set up 120 wireless access points in Bangalore, India, in hotels, malls and government offices;

• Vex offers a large network of hotspots spread over Brazil. Telefónica Speedy WiFi has started its services in a new and growing network distributed over the state of São Paulo.


Universal Efforts

Another business model seems to be making its way into the news. The idea is that users will share their bandwidth through their personal wireless routers, which are supplied with specific software. An example is FON, a Spanish start-up created in November 2005. It aims to become the largest network of hotspots in the world by the end of 2006, with 30,000 access points. Users are divided into three categories: Linuses share Internet access for free; Bills sell their personal bandwidth; and Aliens buy access from Bills. The system can thus be described as a peer-to-peer sharing service of the kind we usually associate with software.

Although FON has received some financial support from companies like Google and Skype, it remains to be seen whether the idea can actually work. There are three main challenges for this service at the moment. First, it needs a great deal of media and community attention in order to get through the phase of early adoption and into the mainstream. Second, sharing your Internet connection is often against the terms of use of your ISP, which means we may see ISPs trying to defend their interests in the same way the music companies united against free MP3 distribution. Third, the FON software is still in beta, and it remains to be seen whether it offers a good solution to the security issues it raises.

Free Wi-Fi

While commercial services attempt to move existing business models to Wi-Fi, many groups, communities, cities, and individuals have set up free Wi-Fi networks, often adopting a common peering agreement so that networks can openly share with each other. Free wireless mesh networks are often considered the future of the Internet.

Many municipalities have joined with local community groups to help expand free Wi-Fi networks. Some community groups have built their Wi-Fi networks entirely based on volunteer efforts and donations.

For more information, see wireless community network, where there is also a list of the free Wi-Fi networks one can find around the globe.

OLSR is one of the protocols used to set up free networks. Some networks use static routing; others rely completely on OSPF. Wireless Leiden developed its own routing software, named LVrouteD, for community Wi-Fi networks that consist of a completely wireless backbone. Most networks rely heavily on open-source software, or even publish their setup under an open-source license.

Some smaller countries and municipalities already provide free Wi-Fi hotspots and residential Wi-Fi Internet access to everyone. Examples include the Kingdom of Tonga and Estonia, which already have a large number of free Wi-Fi hotspots throughout their countries.

In Paris, France, OzoneParis offers free Internet access for life to anybody who contributes to the Pervasive Network's development by making their rooftop available for the Wi-Fi network.

Many universities provide free WiFi internet access to their students, visitors, and anyone on campus. Similarly, some commercial entities such as Panera Bread offer free Wi-Fi access to patrons. McDonald's Corporation also offers Wi-Fi access, often branded 'McInternet'. This was launched at their flagship restaurant in Oak Brook, Illinois and is also available in many branches in London, UK.

However, there is also a third subcategory of networks, set up by certain communities such as universities, where the service is provided free to members and guests of the community (such as students), while also being let out to companies and individuals outside the community to make money. An example of such a service is Sparknet in Finland. Sparknet also supports OpenSparknet, a project in which people can make their own wireless access point a part of Sparknet in return for certain benefits. Recently, commercial Wi-Fi providers have built free Wi-Fi hotspots and hotzones, hoping that free Wi-Fi access will attract more users and yield a significant return on investment.

Wi-Fi vs. Amateur Radio

In the US, the 2.4 GHz Wi-Fi radio spectrum is also allocated to amateur radio users. FCC Part 15 rules govern unlicensed operators (i.e. most Wi-Fi equipment users). Amateur operators retain what the FCC terms 'primary status' on the band under a distinct set of rules (Part 97). Under Part 97, licensed amateur operators may construct their own equipment, use very high-gain antennas, and boost output power to 100 watts on frequencies covered by Wi-Fi channels 2-6. However, Part 97 rules mandate using only the minimum power necessary for communications, forbid obscuring the data, and require station identification every 10 minutes. Therefore, expensive automatic power-limiting circuitry is required to meet regulations, and the transmission of any encrypted data (for example HTTPS) is questionable.

In practice, microwave power amplifiers are expensive and decrease the receive sensitivity of link radios. On the other hand, the short wavelength at 2.4 GHz allows for simple construction of very high-gain directional antennas. Although Part 15 rules forbid any modification of commercially constructed systems, amateur radio operators may modify commercial systems, for example for optimized construction of long links. Using only 200 mW link radios and two 24 dB gain antennas, an effective radiated power of many hundreds of watts in a very narrow beam may be used to construct reliable links of over 100 km with little radio-frequency interference to other users.
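The power figures above can be checked with a short link-budget calculation. The sketch below computes the transmit-side effective radiated power only (transmit power plus one antenna's gain, in decibels); the "many hundreds of watts" in the text effectively folds the second antenna's 24 dB into the overall link budget.

```python
import math

def eirp_watts(tx_mw, antenna_gain_dbi):
    """Effective radiated power: transmit power (mW) boosted by antenna gain (dBi)."""
    tx_dbm = 10 * math.log10(tx_mw)        # convert mW to dBm
    eirp_dbm = tx_dbm + antenna_gain_dbi   # gains add in decibels
    return 10 ** (eirp_dbm / 10) / 1000    # dBm back to watts

# 200 mW radio into a single 24 dBi dish, as in the example above:
print(round(eirp_watts(200, 24), 1))  # roughly 50 W in the beam direction
```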

Advantages of Wi-Fi

• Unlike packet radio systems, Wi-Fi uses unlicensed radio spectrum and does not require regulatory approval for individual deployers.

• Allows LANs to be deployed without cabling, potentially reducing the costs of network deployment and expansion. Spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs.

• Wi-Fi products are widely available in the market. Different brands of access points and client network interfaces are interoperable at a basic level of service.

• Competition amongst vendors has lowered prices considerably since their inception.

• Wi-Fi networks support roaming, in which a mobile client station such as a laptop computer can move from one access point to another as the user moves around a building or area.

• Many access points and network interfaces support various degrees of encryption to protect traffic from interception.

• Wi-Fi is a global set of standards. Unlike cellular carriers, the same Wi-Fi client works in different countries around the world.
SAFER
SAFER

Introduction
In cryptography, SAFER (Secure And Fast Encryption Routine) is the name of a family of block ciphers designed primarily by James Massey (one of the designers of IDEA) on behalf of Cylink Corporation. The early SAFER K and SAFER SK designs share the same encryption function but differ in the number of rounds and the key schedule. More recent versions, SAFER+ and SAFER++, were submitted as candidates to the AES process and the NESSIE project respectively. All of the algorithms in the SAFER family are unpatented and available for unrestricted use.

The first SAFER cipher was SAFER K-64, published by Massey in 1993, with a 64-bit block size. The 'K-64' denotes a key size of 64 bits. There was some demand for a version with a larger 128-bit key, and the following year Massey published such a variant incorporating a new key schedule designed by the Singapore Ministry for Home Affairs: SAFER K-128. However, both Lars Knudsen and Sean Murphy found minor weaknesses in this version, prompting a redesign of the key schedule to one suggested by Knudsen; these variants were named SAFER SK-64 and SAFER SK-128 respectively, the SK standing for 'Strengthened Key schedule', though the RSA FAQ reports that 'one joke has it that SK really stands for Stop Knudsen, a wise precaution in the design of any block cipher'. Another variant with a reduced key size, SAFER SK-40, was published to comply with 40-bit export restrictions.

All of these ciphers use the same round function, consisting of four stages, as shown in the diagram: a key-mixing stage, a substitution layer, another key-mixing stage, and finally a diffusion layer. In the first key-mixing stage, the plaintext block is divided into eight 8-bit segments, and subkeys are added using either addition modulo 256 (denoted by a + in a square) or XOR (denoted by a + in a circle). The substitution layer consists of two S-boxes, each the inverse of the other, derived from discrete exponentiation (45^x mod 257) and logarithm (log_45 x) functions. After a second key-mixing stage comes the diffusion layer: a novel cryptographic component termed a pseudo-Hadamard transform (PHT). (The PHT was also later used in the Twofish cipher.)
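Two of the building blocks named above are small enough to sketch directly: the exponentiation S-box 45^x mod 257 (with the value 256 stored as 0), its inverse logarithm box, and the 2-point PHT, which maps a pair of bytes (a, b) to (2a + b, a + b) modulo 256. This is only a sketch of the components, not of SAFER's full round.

```python
# Exponentiation S-box: 45 is a generator mod the prime 257, so x -> 45^x
# is a bijection onto 1..256; the single value 256 is stored as the byte 0.
EXP = [pow(45, x, 257) % 256 for x in range(256)]
LOG = [0] * 256
for x, y in enumerate(EXP):   # invert the table to get the log box
    LOG[y] = x

def pht(a, b):
    """2-point pseudo-Hadamard transform used in SAFER's diffusion layer."""
    return (2 * a + b) % 256, (a + b) % 256

def ipht(a, b):
    """Inverse PHT: recovers the original pair."""
    return (a - b) % 256, (2 * b - a) % 256

assert all(LOG[EXP[x]] == x for x in range(256))  # the S-boxes invert each other
assert ipht(*pht(100, 200)) == (100, 200)          # the PHT is invertible
```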
WiFiber
WiFiber

Introduction
A new wireless technology could beat fiber optics for speed in some applications. Atop each of the Trump towers in New York City, there's a new type of wireless transmitter and receiver that can send and receive data at rates of more than one gigabit per second -- fast enough to stream 90 minutes of video from one tower to the next, more than one mile apart, in less than six seconds. By comparison, the same video sent over a DSL or cable Internet connection would take almost an hour to download.
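The quoted times are consistent with a film of roughly 750 MB, a figure assumed here for illustration (it is not stated in the text), and a typical DSL downstream of about 1.5 Mbit/s:

```python
video_bits = 750e6 * 8   # assumed ~750 MB for the 90-minute video
gigabeam = 1e9           # >1 Gbit/s point-to-point link
dsl = 1.5e6              # typical DSL downstream rate of the era

print(video_bits / gigabeam)   # about 6 seconds over the WiFiber link
print(video_bits / dsl / 60)   # about 67 minutes over DSL: "almost an hour"
```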

This system is dubbed WiFiber by its creator, GigaBeam, a Virginia-based telecommunications startup. Although the technology is wireless, the company's approach -- high-speed data transfer across a point-to-point network -- is more of an alternative to fiber optics than to Wi-Fi or WiMAX, says John Krzywicki, the company's vice president of marketing. And it's best suited for highly specific data delivery situations.

This kind of point-to-point wireless technology could be used where digging fiber-optic trenches would disrupt an environment, where the cost would be prohibitive, or where installation would take too long, as in extending communications networks in cities, on battlefields, or after a disaster.

Blasting beams of data through free space is not a new idea. LightPointe and Proxim Wireless also provide such services. What makes GigaBeam's technology different is that it exploits a different part of the electromagnetic spectrum. Those systems use a region of the spectrum near visible light, at terahertz frequencies; because of this, weather conditions in which visibility is limited, such as fog or light rain, can hamper data transmission.

GigaBeam, however, transmits at 71-76, 81-86, and 92-95 gigahertz frequencies, where these conditions generally do not cause problems. Additionally, by using this region of the spectrum, GigaBeam can outpace traditional wireless data delivery used for most wireless networks.

Because so many devices, from Wi-Fi base stations to baby monitors, use the frequencies of 2.4 and 5 gigahertz, those spectrum bands are crowded and therefore require complex algorithms to sort and route traffic -- both data-consuming endeavors, says Jonathan Wells, GigaBeam's director of product development. With less traffic in the region between 70 and 95 gigahertz, GigaBeam can spend less time routing data and more time delivering it. And because of the directional nature of the beam, the interference problems that plague more spread-out signals at the traditional frequencies are unlikely; because the tight beams of data will rarely, if ever, cross each other's paths, data transmission can flow without interference, Wells says.

Correction: As a couple of readers pointed out, our title was misleading. Although the emergence of a wireless technology operating in the gigabits per second range is an advance, it does not outperform current fiber-optic lines, which can still send data much faster.

Even with its advances, though, GigaBeam faces the same problem as other point-to-point technologies: creating a network with an unbroken sight line. Still, it could offer some businesses an alternative to fiber optics. Currently, a GigaBeam link, which consists of a set of transmitting and receiving radios, costs around $30,000, but Krzywicki says that improving technology is driving down costs. In addition to outfitting the Trump towers, the company has deployed a link on the campuses of Dartmouth College and Boston University, and two links for San Francisco's Public Utility Commission.
Holographic Memory
Holographic Memory

Introduction
Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing hundreds of megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called the digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage media meet today's storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method, called holographic memory, that will go beneath the surface and use the volume of the recording medium for storage instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1,000 CDs can fit into a holographic memory system. Most computer hard drives available today hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume, which gives them many advantages over conventional storage systems.
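The capacity comparisons above are easy to check with the figures given earlier (783 MB per CD, 15.9 GB per double-sided double-layer DVD):

```python
cd_mb = 783          # CD capacity from the text
dvd_gb = 15.9        # double-sided, double-layer DVD capacity from the text
crystal_tb = 1       # claimed holographic crystal capacity

crystal_mb = crystal_tb * 1e6
print(int(crystal_mb / cd_mb))          # ~1277 CDs: "more than 1000 CDs"
print(int(crystal_tb * 1e3 / dvd_gb))   # ~62 DVDs in one crystal
```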

It is based on the principle of holography. Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.
Clockless Chips
Clockless Chips

Introduction
Clock speeds are now in the gigahertz range, and there is not much room for speedup before physical realities start to complicate things. With a gigahertz clock powering a chip, signals barely have enough time to make it across the chip before the next clock tick. At this point, speeding up the clock frequency further could become disastrous. This is where a chip that is not constrained by a clock comes into action.

The clockless approach, which uses a technique known as asynchronous logic, differs from conventional computer circuit design in that the switching on and off of digital circuits is controlled individually by specific pieces of data, rather than by a tyrannical clock that forces all of the millions of circuits on a chip to march in unison.
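The data-driven idea can be illustrated with a toy simulation: a stage "fires" as soon as all of its inputs have arrived, with no clock tick involved. This is a greatly simplified sketch of the handshake discipline, not a model of any real asynchronous design style.

```python
class AsyncStage:
    """A pipeline stage that fires when both operands are present."""

    def __init__(self, op, downstream=None):
        self.op, self.downstream = op, downstream
        self.inputs = {}

    def receive(self, port, value):
        self.inputs[port] = value
        if len(self.inputs) == 2:              # both operands present: fire
            result = self.op(self.inputs["a"], self.inputs["b"])
            self.inputs.clear()                # acknowledge / reset for next data
            if self.downstream:
                self.downstream.receive("a", result)
            return result

adder = AsyncStage(lambda a, b: a + b)
adder.receive("a", 3)          # nothing happens yet: still waiting on "b"
print(adder.receive("b", 4))   # fires the moment the data arrives
```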

A major hindrance to the development of clockless chips is the competitiveness of the computer industry. Presently, it is nearly impossible for companies to develop and manufacture a clockless chip while keeping the cost reasonable. Another problem is that there aren't many tools for developing asynchronous chips. Until that changes, clockless chips will not be a major player in the market.

This seminar covers the general concept of asynchronous circuits, their design issues, and types of design. The major designs discussed are the bounded-delay method, the delay-insensitive method, and Null Convention Logic (NCL).

The seminar also compares synchronous and asynchronous circuits and surveys the applications in which asynchronous circuits are used.
ATM
ATM

Introduction
These computers include the entire spectrum from PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems, whether for exchanging data or between central servers and the associated host computer systems.

The replacement of copper with fiber, and the advancements in digital communication and encoding, are at the heart of several developments that will change the communication infrastructure. The former has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible.

With work continuously being shared over large distances, including international communication, systems must be interconnected via wide area networks, with increasing demands for higher bit rates.

For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and for the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.

ATM is simply a data link layer protocol. It is asynchronous in the sense that the recurrence of cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.
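The "common format" ATM provides is the fixed-size cell: every cell is 53 bytes, a 5-byte header followed by a 48-byte payload. The sketch below segments a data burst into such cells; the header bytes here are placeholders, since a real header carries VPI/VCI routing fields and a checksum.

```python
CELL, HEADER, PAYLOAD = 53, 5, 48   # ATM cell geometry in bytes

def segment(data: bytes):
    """Cut a user data burst into fixed-size 53-byte ATM cells."""
    cells = []
    for i in range(0, len(data), PAYLOAD):
        chunk = data[i:i + PAYLOAD].ljust(PAYLOAD, b"\x00")  # pad last cell
        cells.append(b"\x00" * HEADER + chunk)               # dummy header
    return cells

cells = segment(b"x" * 100)
print(len(cells), len(cells[0]))   # a 100-byte burst becomes 3 cells of 53 bytes
```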
Blue Tooth
Blue Tooth

Introduction
Bluetooth wireless technology is a cable-replacement technology that provides wireless communication between portable devices, desktop devices and peripherals. It is used to swap data and synchronize files between devices without having to connect them with cables. The wireless link has a range of 10 m, which offers the user mobility. There is no need for the user to open an application or press a button to initiate a process: Bluetooth wireless technology is always on and runs in the background. Bluetooth devices scan for other Bluetooth devices, and when these devices are in range they start to exchange messages so they can become aware of each other's capabilities.

These devices do not require a line of sight to transmit data to each other. Within a few years, about 80 percent of mobile phones are expected to carry a Bluetooth chip. The Bluetooth transceiver operates in the globally available unlicensed ISM radio band at 2.4 GHz, which does not require an operator license from a regulatory agency. This means that Bluetooth technology can be used virtually anywhere in the world. Bluetooth is an economical wireless solution that is convenient, reliable, easy to use, and operates over a longer range than a cable.
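Within that 2.4 GHz ISM band, classic Bluetooth divides the spectrum into 79 channels of 1 MHz each, at f = 2402 + k MHz for k = 0..78, and hops between them pseudo-randomly (1600 hops per second). The sketch below uses a seeded random generator as a stand-in hop sequence; the real sequence is derived from the master device's address and clock.

```python
import random

CHANNELS = [2402 + k for k in range(79)]   # classic Bluetooth channels, in MHz

rng = random.Random(0)                      # placeholder for the real hop engine
hops = [rng.choice(CHANNELS) for _ in range(5)]

print(min(CHANNELS), max(CHANNELS))         # band edges: 2402 and 2480 MHz
print(hops)                                 # one possible 5-slot hop sequence
```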

Initial development started in 1994 at Ericsson. Bluetooth now has a Special Interest Group (SIG) with 1,800 member companies worldwide. Bluetooth technology enables voice and data transmission over a short-range radio link. A wide range of devices can be connected easily and quickly without the need for cables. Soon people the world over will enjoy the convenience, speed and security of instant wireless connections. Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops and a whole range of other electronic devices in the next few years, mainly because the elimination of cables makes the work environment look and feel comfortable and inviting.
Traffic Pulse Technology
Traffic Pulse Technology

Introduction
The Traffic Pulse network is the foundation for all of Mobility Technologies® applications. This network uses a process of data collection, data processing, and data distribution to generate traffic information that is unique in the industry. Digital Traffic Pulse® collects data through a sensor network, processes and stores the data in a data center, and distributes that data through a wide range of applications.

Unique among private traffic information providers in the U.S., Mobility Technologies' real-time and archived Traffic Pulse data offer valuable tools for a variety of commercial and governmental applications:

* Telematics - for mobile professionals and others, Mobility Technologies traffic information complements in-vehicle navigation devices, informing drivers not only how to get from point A to point B but how long it will take to get there, or even directing them to an alternate route.

* Media - for radio and TV broadcasters, cable operators, and advertisers who sponsor local programming, Traffic Pulse Networks provides traffic information and advertising opportunities for a variety of broadcasting venues.

* Intelligent Transportation Systems (ITS) business solutions - for public agencies, Mobility Technologies applications aid in infrastructure planning, safety research, and livable community efforts; integrate with existing and future ITS technologies and deployments; and provide data reporting tools.
Softwear Computing
Softwear Computing

Introduction
In this talk, we introduce computational aspects of softwear, specifically fabric and body-based gestural controllers for realtime, time-based media.

Softwear is part of our approach to wearable computing that leverages the naturalized affordances and the social conditioning that fabrics, furniture and physical architecture already provide to our everyday interaction. We exploit physical plus computational materials, and rely on expert craft from experimental performance, music and the plastic arts, in order to make a new class of personal and collective expressive media. In this talk, I will survey Topological Media Lab research areas including gesture tracking, realtime video synthesis, realtime audio synthesis, and media choreography based on continuous state evolution.
Artificial Intelligence for Speech Recognition
Artificial Intelligence for Speech Recognition

Introduction
Artificial Intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (computers, robots, etc.). AI is the behavior of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence. Natural Language Processing (NLP) refers to Artificial Intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action.

The input words are scanned and matched against internally stored known words. Identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language. One of the main benefits of a speech recognition system is that it lets the user do other work simultaneously.
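The scan-and-match step described above can be sketched as a tiny keyword dispatcher. The keywords and actions below are purely illustrative; a real NLP front end would of course do far more than table lookup.

```python
# Table of known keywords mapped to the actions they trigger (illustrative).
ACTIONS = {
    "open":  lambda: "opening file",
    "save":  lambda: "saving file",
    "print": lambda: "sending to printer",
}

def interpret(sentence):
    """Scan input words; the first recognized keyword triggers its action."""
    for word in sentence.lower().split():
        if word in ACTIONS:
            return ACTIONS[word]()
    return "sorry, I did not understand"

print(interpret("Please save my work"))   # the keyword "save" is recognized
```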
Cryptovirology
Cryptovirology

Introduction
Cryptovirology is a field that studies how to use cryptography to design powerful malicious software. It encompasses overt attacks such as cryptoviral extortion, where a cryptovirus, cryptoworm, or cryptotrojan hybrid-encrypts the victim's files and the user must pay the malware author to receive the needed session key (which is encrypted under the author's public key, contained in the malware).

The field also encompasses covert attacks, in which the attacker secretly steals private information such as private keys. An example of the latter type of attack is the asymmetric backdoor: a backdoor (e.g., in a cryptosystem) that can only be used by the attacker, even after it is found. There are many other attacks in the field that are not mentioned here.
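The hybrid-encryption structure behind cryptoviral extortion can be illustrated with a deliberately insecure toy: the victim's data is encrypted under a random session key, and only that session key is encrypted under the attacker's public key, so only the attacker's private key can unlock it. Everything here (textbook RSA with tiny primes, an XOR "cipher") is a teaching sketch, not real cryptography.

```python
import random

# Attacker's textbook-RSA key pair (classic toy parameters: n = 61 * 53).
n, e, d = 3233, 17, 2753

def xor_stream(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR against a keystream seeded by the key."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

session_key = random.randrange(2, n)
ciphertext = xor_stream(b"victim's files", session_key)
locked_key = pow(session_key, e, n)   # encrypted under the attacker's public key

# Attacker side: only the private exponent d recovers the session key.
recovered = pow(locked_key, d, n)
print(xor_stream(ciphertext, recovered))   # the files decrypt again
```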
An Introduction to Artificial Life
An Introduction to Artificial Life

Introduction
Artificial Life, also known as alife or a-life, is the study of life through the use of human-made analogs of living systems. Computer scientist Christopher Langton coined the term in the late 1980s when he held the first 'International Conference on the Synthesis and Simulation of Living Systems' (otherwise known as Artificial Life I) at the Los Alamos National Laboratory in 1987.

The focus of this seminar is Artificial Life in software. Topics covered include: what Artificial Life (ALife) is and is about; open research problems in ALife; presuppositions underlying ALife in software; and the basic requirements of an ALife software system, with some guidelines for designing ALife in software. A few ALife software systems will also be introduced to help concretize the concepts.
Real Time Operating System
Real Time Operating System

Introduction
A real-time system is defined as follows: a real-time system is one in which the correctness of the computations depends not only upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.

There are two types of real-time operating systems.

Hard real time operating system:
1. Strict time constraints
2. Secondary storage limited or absent
3. Conflicts with time-sharing systems
4. Not supported by general-purpose OS

Soft real time operating system:
1. Reduced time constraints
2. Limited utility in industrial control or robotics
3. Useful in applications (multimedia, virtual reality) requiring advanced operating-system features

In the robot example, it would be hard real time if the robot arriving late causes completely incorrect operation, and soft real time if the robot arriving late only means a loss of throughput. Much of what is done in real-time programming is actually soft real time. Good system design often implies a level of safe/correct behaviour even if the computer system never completes the computation, so if the computer is only a little late, the effects on the system may be somewhat mitigated.
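The hard/soft distinction can be made concrete in code: a soft real-time loop treats a missed deadline as something to record and recover from, whereas a hard real-time system would treat it as failure. A minimal sketch, assuming a hypothetical 10 ms task period:

```python
import time

def run_periodic(task, period_s, iterations):
    """Soft real-time loop: run `task` every `period_s` seconds and count
    deadline misses instead of treating them as fatal (hard real time would)."""
    misses = 0
    next_deadline = time.monotonic() + period_s
    for _ in range(iterations):
        task()
        now = time.monotonic()
        if now > next_deadline:
            misses += 1                      # soft RT: note the miss and carry on
            next_deadline = now + period_s   # re-anchor the schedule after a miss
        else:
            time.sleep(next_deadline - now)  # wait out the rest of the period
            next_deadline += period_s
    return misses

# Example: a trivial task should easily meet a 10 ms period.
misses = run_periodic(lambda: None, period_s=0.01, iterations=20)
```

Note that on a general-purpose OS this loop gives no guarantees at all; `time.sleep` may overshoot arbitrarily under load, which is precisely why hard real-time work needs an RTOS.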

What makes an OS an RTOS

1. An RTOS (Real-Time Operating System) has to be multi-threaded and preemptible.

2. The notion of thread priority has to exist, as there is for the moment no deadline-driven OS.

3. The OS has to support predictable thread synchronisation mechanisms.

4. A system of priority inheritance has to exist.

5. The maximum time every system call takes has to be known; it should be predictable and independent of the number of objects in the system.

6. The maximum time the OS and drivers mask interrupts has to be known.

The following points should also be known by the developer:

1. System Interrupt Levels.

2. Device driver IRQ Levels, maximum time they take, etc.
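Point 5 above, bounded and predictable system-call time, can be checked empirically by timing a call many times and looking at the worst case rather than the average, since real-time behaviour is governed by the maximum. A sketch, using `time.monotonic` as a hypothetical stand-in for the system call under test:

```python
import time

def latency_profile_us(call, trials=1000):
    """Time `call` repeatedly and return (average, worst-case) latency in
    microseconds; an RTOS cares about the maximum, not the mean."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter_ns()
        call()
        samples.append((time.perf_counter_ns() - t0) / 1000.0)
    return sum(samples) / len(samples), max(samples)

# Example: profile time.monotonic itself (a stand-in for a real syscall).
avg_us, max_us = latency_profile_us(time.monotonic)
```

On a general-purpose OS the worst case typically dwarfs the average because of preemption and interrupts; on an RTOS the two should stay close, which is what "predictable" means in practice.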
#2
Could you please send me the seminar report for "Artificial Intelligence for Speech Recognition"?
#3
Can you please send me the complete seminar report for Artificial Intelligence for Speech Recognition, and if possible the presentation as well? It is very urgent; otherwise I will have to prepare the PPT myself. I would be really grateful. Please help. Thank you.
#4
I want the abstract of Real Time Operating System for my seminar presentation. Please help me.
