Computer Science Seminar Abstract And Report 8

Y2K38

Introduction
The Y2K38 problem has been described as a non-problem, given that we are expected to be running 64-bit operating systems well before 2038. Well, maybe.

The Problem

Just as Y2K problems arise from programs not allocating enough digits to the year, Y2K38 problems arise from programs not allocating enough bits to internal time. Unix internal time is commonly stored in a data structure using a long int containing the number of seconds since 1970. This time is used in all time-related processes such as scheduling, file timestamps, etc. On a 32-bit machine, this value is sufficient to store time up to 19-Jan-2038. After this date, 32-bit clocks will overflow and return erroneous values such as 31-Dec-1969 or 13-Dec-1901.
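To make the rollover concrete, here is a minimal C sketch (not taken from any particular system) that stores the seconds count in a signed 32-bit integer and shows what happens one second after the last representable value; the exact printed dates depend on how the platform handles pre-1970 times.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Print what a given 32-bit seconds-since-1970 value means as a UTC date. */
static void show(const char *label, int32_t seconds)
{
    time_t t = (time_t)seconds;            /* widen to the platform's time_t */
    struct tm *tm = gmtime(&t);
    printf("%s %11ld -> %s", label, (long)seconds,
           tm ? asctime(tm) : "(not representable on this platform)\n");
}

int main(void)
{
    int32_t last    = INT32_MAX;                         /* 2^31 - 1 seconds   */
    int32_t wrapped = (int32_t)((uint32_t)last + 1u);    /* defined wraparound */

    show("last 32-bit second:", last);     /* Tue Jan 19 03:14:07 2038 UTC     */
    show("one second later  :", wrapped);  /* negative -> Fri Dec 13 1901 UTC  */
    return 0;
}
```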

Machines Affected

Currently (March 1998) there are a huge number of machines affected. Most of these will be scrapped before 2038. However, it is possible that some machines going into service now may still be operating in 2038. These may include process control computers, space probe computers, embedded systems in traffic light controllers, navigation systems, and so on. Many of these systems may not be upgradeable. For instance, Ferranti Argus computers survived in service longer than anyone expected, long enough to present serious maintenance problems.

Note: Unix time is safe for the indefinite future for referring to future events, provided that enough bits are allocated. Programs or databases with a fixed field width should probably allocate at least 48 bits to storing time values. Hardware, such as clock circuits, that has adopted the Unix time convention may also be affected if 32-bit registers are used. In my opinion, the Y2K38 threat is more likely to result in aircraft falling from the sky, glitches in life-support systems, and nuclear power plant meltdowns than the Y2K threat, which is more likely to disrupt inventory control, credit card payments, pension plans, etc. The reason for this is that the Y2K38 problem involves the basic system timekeeping from which most other time and date information is derived, while the Y2K problem (mostly) involves application programs.

Emulation and Megafunctions

While 32-bit CPUs may be obsolete in desktop computers and servers by 2038, they may still exist in microcontrollers and embedded circuits. For instance, the Z80 processor is still available in 1999 as an Embedded Function within Altera programmable devices. Such embedded functions present a serious maintenance problem for Y2K38 and similar rollover issues, since the package part number and other markings typically give no indication of the internal function.

Software Issues

Databases using 32-bit Unix time may still be in service in 2038, and care will have to be taken to avoid rollover issues.

Now that we've far surpassed the problem of "Y2K," can you believe that computer scientists and theorists are now projecting a new worldwide computer glitch for the year 2038? Commonly called the "Y2K38 Problem," it seems that computers using "long int" time systems, which were set up to start recording time from January 1, 1970, will be affected.
Satellite Radio

Introduction
We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far away from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out. Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the station.

Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers will be able to receive up to 100 radio channels featuring compact disc quality digital music, news, weather, sports, talk radio and other entertainment channels.

Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated a spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded 8-year radio broadcast licenses to two companies, Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies have been working aggressively to be prepared to offer their radio services to the public by the end of 2000. It is expected that automotive radios will be the largest application of Satellite Radio.

The satellite era began in September 2001 when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15 and a nationwide launch on July 1.
Light Emitting Polymers (LEP)

Introduction
Light emitting polymers, or polymer-based light emitting diodes, discovered by Friend et al. in 1990, have been found to be superior to other displays such as liquid crystal displays (LCDs), vacuum fluorescence displays and electroluminescence displays. Though not commercialised yet, they have proved to be a milestone in the field of flat panel displays. Research in LEP is underway at Cambridge Display Technology Ltd (CDT) in the UK.

In the last decade, several other display contenders such as plasma and field emission displays were hailed as the solution to the pervasive display. Like LCDs they suited certain niche applications, but failed to meet the broad demands of the computer industry. Today the trend is towards non-CRT flat panel displays. As LEDs are inexpensive devices, they can be extremely handy in constructing flat panel displays. The idea was to combine the characteristics of a CRT with the performance of an LCD and the added design benefits of formability and low power. Cambridge Display Technology Ltd is developing a display medium with exactly these characteristics.

The technology uses a light-emitting polymer (LEP) that costs much less to manufacture and run than CRTs because the active material used is plastic.

LEP is a polymer that emits light when a voltage is applied to it. The structure comprises a thin-film semiconducting polymer sandwiched between two electrodes, namely an anode and a cathode. When electrons and holes are injected from the electrodes, these charge carriers recombine, leading to the emission of light that escapes through the glass substrate.
Sensors on 3D Digitization

Introduction
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

AUTOSYNCHRONIZED SCANNER

The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used for the purpose of measuring the colour map of a scene.
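The range measurement itself rests on simple triangulation geometry. The C sketch below is not the auto-synchronized scanner geometry; it only illustrates the underlying principle, assuming a laser beam offset from the camera axis by a known baseline, with the focal length, baseline and spot offsets chosen purely for illustration.

```c
#include <stdio.h>

/* Basic single-spot triangulation sketch: a laser beam parallel to the
 * camera's optical axis, offset by a baseline b, images at offset p on the
 * position sensor of a camera with focal length f.  By similar triangles,
 * depth z = f * b / p.  All numbers here are illustrative assumptions. */
static double depth_from_spot(double f_mm, double baseline_mm, double p_mm)
{
    if (p_mm <= 0.0)
        return -1.0;                 /* spot at/behind the optical centre: no range */
    return f_mm * baseline_mm / p_mm;
}

int main(void)
{
    double f = 25.0;                 /* focal length, mm (assumed)   */
    double b = 100.0;                /* laser-camera baseline, mm    */

    /* A spot imaged closer to the optical axis means a more distant surface. */
    for (double p = 5.0; p >= 1.0; p -= 2.0)
        printf("spot offset %4.1f mm  ->  range %7.1f mm\n",
               p, depth_from_spot(f, b, p));
    return 0;
}
```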
Robotic Surgery

Introduction
The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the surgical limitations of minimally invasive laparoscopic surgery.

The robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures.

Now imagine: an army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way.

The patient is saved. This is the power that the amalgamation of technology and surgical sciences is offering doctors. Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to alter how we live in the 21st century just as profoundly. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line.

We even have robotic lawn mowers and robotic pets now. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots with artificial intelligence that come to resemble the humans who create them. They will eventually become self-aware and conscious, and be able to do anything that a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.
IPv6 - The Next Generation Protocol

Introduction
The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village' utopia a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet would not have foreseen the tremendous growth rate of the network being witnessed today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress.

It cannot adequately support many services being envisaged, such as real time video conferencing, interconnection of gigabit networks with lower bandwidths, high security applications such as electronic commerce, and interactive virtual reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of four billion systems only, which is a small number as compared to the projected systems on the Internet in the twenty-first century.

Each machine on the net is given a 32-bit address. With 32 bits, a maximum of about four billion addresses is possible. Though this is a large number, soon the Internet will have TV sets, and even pizza machines, connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of refinements, several other features were also added to make it suitable for the next generation Internet.

This version was initially named IPng (IP next generation) and is now officially known as IPv6. IPv6 supports 128-bit addresses; the source address and the destination address are each 128 bits long. (IPv5 was assigned to an experimental stream protocol and was never widely deployed.) Presently, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task, and the transition is likely to take a very long time.

However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.
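To make the address-size difference discussed above concrete, here is a minimal POSIX C sketch (it assumes a system providing <arpa/inet.h>); the addresses used are the standard documentation prefixes and are illustrative only.

```c
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>   /* inet_pton (POSIX) */

int main(void)
{
    struct in_addr  v4;   /* 32-bit IPv4 address: about 4.3 billion values  */
    struct in6_addr v6;   /* 128-bit IPv6 address: 2^128 possible values    */

    const char *a4 = "192.0.2.1";      /* documentation addresses, for illustration */
    const char *a6 = "2001:db8::1";

    if (inet_pton(AF_INET,  a4, &v4) == 1)
        printf("%-12s parsed into %zu bytes\n", a4, sizeof v4);   /* 4 bytes  */
    if (inet_pton(AF_INET6, a6, &v6) == 1)
        printf("%-12s parsed into %zu bytes\n", a6, sizeof v6);   /* 16 bytes */

    return 0;
}
```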
Nanorobotics

Introduction
Nanorobotics is an emerging field that deals with the controlled manipulation of objects with nanometer-scale dimensions. Typically, an atom has a diameter of a few Ångströms (1 Å = 0.1 nm = 10⁻¹⁰ m), a molecule's size is a few nm, and clusters or nanoparticles formed by hundreds or thousands of atoms have sizes of tens of nm. Therefore, Nanorobotics is concerned with interactions with atomic- and molecular-sized objects, and is sometimes called Molecular Robotics.

Molecular Robotics falls within the purview of Nanotechnology, which is the study of phenomena and structures with characteristic dimensions in the nanometer range. The birth of Nanotechnology is usually associated with a talk by Nobel-prize winner Richard Feynman entitled "There's Plenty of Room at the Bottom", whose text may be found in [Crandall & Lewis 1992]. Nanotechnology has the potential for major scientific and practical breakthroughs.

Future applications ranging from very fast computers to self-replicating robots are described in Drexler's seminal book [Drexler 1986]. In a less futuristic vein, the following potential applications were suggested by well-known experimental scientists at the Nano4 conference held in Palo Alto in November 1995:

" Cell probes with dimensions ~ 1/1000 of the cell's size
" Space applications, e.g. hardware to fly on satellites
" Computer memory
" Near field optics, with characteristic dimensions ~ 20 nm
" X-ray fabrication, systems that use X-ray photons
" Genome applications, reading and manipulating DNA
" Nanodevices capable of running on very small batteries
" Optical antennas

Nanotechnology is being pursued along two converging directions. From the top down, semiconductor fabrication techniques are producing smaller and smaller structures; see e.g. [Colton & Marrian 1995] for recent work. For example, the line width of the original Pentium chip is 350 nm. Current optical lithography techniques have obvious resolution limitations because of the wavelength of visible light, which is on the order of 500 nm. X-ray and electron-beam lithography will push sizes further down, but with a great increase in the complexity and cost of fabrication. These top-down techniques do not seem promising for building nanomachines that require precise positioning of atoms or molecules.

Alternatively, one can proceed from the bottom up, by assembling atoms and molecules into functional components and systems. There are two main approaches for building useful devices from nanoscale components. The first is based on self-assembly, and is a natural evolution of traditional chemistry and bulk processing; see e.g. [Gómez-López et al. 1996]. The other is based on controlled positioning of nanoscale objects, direct application of forces, electric fields, and so on. The self-assembly approach is being pursued at many laboratories. Despite all the current activity, self-assembly has severe limitations because the structures produced tend to be highly symmetric, and the most versatile self-assembled systems are organic and therefore generally lack robustness. The second approach involves Nanomanipulation, and is being studied by a small number of researchers, who are focusing on techniques based on Scanning Probe Microscopy.
Dual Core Processor

Introduction
Seeing the technical difficulties in cranking higher clock speeds out of present single-core processors, the dual-core architecture has started to establish itself as the answer for the development of future processors. With the release of the AMD dual-core Opteron and the Intel Pentium Extreme Edition 840, April 2005 officially marks the beginning of the dual-core endeavors of both companies.

The transition from a single-core to a dual-core architecture was triggered by a couple of factors. According to Moore's Law, the number of transistors (complexity) on a microprocessor doubles approximately every 18 months. The latest 2 MB Prescott core possesses more than 160 million transistors; breaking the 200 million mark is just a matter of time. Transistor count is one of the reasons that drive the industry toward the dual-core architecture. Instead of using the available astronomically high transistor counts to design a new, more complex single-core processor that would offer higher performance than the present offerings, chip makers have decided to put these transistors to use in producing two identical yet independent cores and combining them into a single package.

To them, this is actually a far better use of the available transistors, and in return it should give consumers more value for their money. Besides, with the single core's thermal envelope being pushed to its limit and the severe current-leakage issues that have hit the silicon manufacturing industry ever since the transition to 90 nm chip fabrication, it is extremely difficult for chip makers (particularly Intel) to squeeze more clock speed out of the present single-core design. Pushing for higher clock speeds is not a feasible option at present because of transistor current leakage, and adding more features to the core will increase the complexity of the design and make it harder to manage. These are the factors that have made the dual-core option the more viable alternative for making full use of the available transistors.
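As a rough back-of-the-envelope check on the 18-month doubling argument above, the short C sketch below projects the quoted ~160 million transistor figure forward; the numbers are illustrative only, not a roadmap.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative projection only: start from the ~160 million transistors
       quoted for the 2 MB Prescott core and double every 18 months. */
    double transistors = 160e6;
    double months = 0.0;

    while (transistors < 1e9) {        /* how long until roughly 1 billion? */
        transistors *= 2.0;
        months += 18.0;
    }
    printf("~%.0f million transistors after about %.0f months\n",
           transistors / 1e6, months);
    return 0;
}
```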

What is a dual core processor?

A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one. In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.

In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. Now when one is executing the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD and Intel's dual-core flagships are 64-bit.

To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
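The parallelism described above is what ordinary multi-threaded code exposes to the two cores. The sketch below is a minimal POSIX threads example in C, not tied to any vendor API; the two-way split and the range being summed are arbitrary assumptions, and on a dual core processor the operating system is free to run the two threads on separate cores at the same time.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread sums half of a range; on a dual-core CPU the operating system
 * can schedule the two threads on the two cores simultaneously. */
struct job { long long start, end, sum; };

static void *partial_sum(void *arg)
{
    struct job *j = arg;
    j->sum = 0;
    for (long long i = j->start; i < j->end; i++)
        j->sum += i;
    return NULL;
}

int main(void)
{
    struct job jobs[2] = { { 0, 50000000, 0 }, { 50000000, 100000000, 0 } };
    pthread_t tid[2];

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, partial_sum, &jobs[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);        /* wait for both halves of the work */

    printf("total = %lld\n", jobs[0].sum + jobs[1].sum);
    return 0;
}
```

Built with the platform's pthreads support (e.g. cc -pthread), the same binary simply runs both threads on one core of a single-core machine, which is the performance difference the paragraph above describes.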

An attractive feature of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT aware. Servers running multiple dual core processors will see an appreciable increase in performance.
Cisco IOS Firewall

Introduction
The Cisco IOS Firewall provides robust, integrated firewall and intrusion detection functionality for every perimeter of the network. Available for a wide range of Cisco IOS software-based routers, the Cisco IOS Firewall offers sophisticated security and policy enforcement for connections within an organization (intranet) and between partner networks (extranets), as well as for securing Internet connectivity for remote and branch offices.

A security-specific, value-add option for Cisco IOS Software, the Cisco IOS Firewall enhances existing Cisco IOS security capabilities, such as authentication, encryption, and failover, with state-of-the-art security features, such as stateful, application-based filtering (context-based access control), defense against network attacks, per user authentication and authorization, and real-time alerts.

The Cisco IOS Firewall is configurable via Cisco ConfigMaker software, an easy-to-use Microsoft Windows 95, 98, NT 4.0 based software tool.

A firewall is a network security device that ensures that all communications attempting to cross it meet an organization's security policy. Firewalls track and control communications, deciding whether to allow, reject or encrypt them. Firewalls are used to connect a corporate local network to the Internet and also within networks. In other words, they stand between the trusted network and the untrusted network.

The first and most important decision reflects the policy of how your company or organization wants to operate the system: is the firewall in place to explicitly deny all services except those critical to the mission of connecting to the net, or is it in place to provide a metered and audited method of 'queuing' access in a non-threatening manner? The second is what level of monitoring, redundancy and control you want. Having established the acceptable risk level, you can form a checklist of what should be monitored, permitted and denied. The third issue is financial.

Implementation methods

Two basic methods to implement a firewall are:
1. As a Screening Router:

A screening router is a special computer or an electronic device that screens (filters out) specific packets based on criteria that are defined. Almost all current screening routers operate in the following manner (a simplified sketch in code follows the list).
a. Packet filter criteria must be stored for the ports of the packet filter device. The packet filter criteria are called packet filter rules.
b. When a packet arrives at a port, the packet header is parsed. Most packet filters examine the fields in only the IP, TCP and UDP headers.
c. The packet filter rules are stored in a specific order. Each rule is applied to the packet in the order in which the rules are stored.
d. If a rule blocks the transmission or reception of a packet, the packet is not allowed.
e. If a rule allows the transmission or reception of a packet, the packet is allowed.
f. If a packet does not satisfy any rule, it is blocked.
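As referenced above, the following C sketch illustrates the first-match evaluation order of steps a-f. It is a simplified model, not Cisco IOS code: the rule fields, masks and addresses are assumptions chosen only to make the ordering and the default-block behaviour of step f visible.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Simplified packet-filter sketch: rules are checked in the order stored,
 * the first matching rule decides, and an unmatched packet is blocked. */
struct packet { uint32_t src_ip; uint16_t dst_port; uint8_t proto; };

struct rule {
    uint32_t src_ip, src_mask;    /* match if (src & mask) == (rule & mask) */
    uint16_t dst_port;            /* 0 = any port                           */
    uint8_t  proto;               /* 0 = any protocol                       */
    bool     allow;
};

static bool filter(const struct rule *rules, int n, const struct packet *p)
{
    for (int i = 0; i < n; i++) {
        const struct rule *r = &rules[i];
        if ((p->src_ip & r->src_mask) != (r->src_ip & r->src_mask)) continue;
        if (r->dst_port && r->dst_port != p->dst_port)              continue;
        if (r->proto    && r->proto    != p->proto)                 continue;
        return r->allow;          /* first matching rule decides */
    }
    return false;                 /* step f: no rule matched -> block */
}

int main(void)
{
    struct rule rules[] = {
        { 0xC0A80000, 0xFFFF0000, 80, 6, true  },   /* allow 192.168/16 to TCP/80 */
        { 0x00000000, 0x00000000,  0, 0, false },   /* block everything else      */
    };
    struct packet p = { 0xC0A80101, 80, 6 };        /* 192.168.1.1 -> port 80     */
    printf("packet %s\n", filter(rules, 2, &p) ? "allowed" : "blocked");
    return 0;
}
```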
Money Pad, The Future Wallet

Introduction
"Money in the 21st century will surely prove to be as different from the money of the current century as our money is from that of the previous century. Just as fiat money replaced specie-backed paper currencies, electronically initiated debits and credits will become the dominant payment modes, creating the potential for private money to compete

with government-issued currencies." Just as every thing is getting under the shadow of "e" today we have paper currency being replaced by electronic money or e-cash.

Hardly a day goes by without some mention in the financial press of new developments in "electronic money". In the emerging field of electronic commerce, novel buzzwords like smartcards, online banking, digital cash, and electronic checks are being used to discuss money. But how are these brand-new forms of payment secure? And most importantly, which of these emerging secure electronic money technologies will survive into the next century?

These are some of the tough questions to answer, but here is a solution which provides a form of security for these modes of currency exchange using biometrics technology. The Money Pad introduced here uses biometrics technology for fingerprint recognition. The Money Pad is a form of credit card or smart card, so named here.

Every time the user wants to access the Money Pad, he has to make an impression of his fingers, which will be scanned and matched against the one stored on the hard disk of the database server. If the fingerprint matches the user's, he will be allowed to access and use the Pad; otherwise the Money Pad is not accessible. This provides a form of security for the everlasting transaction currency of the future, e-cash.
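The access decision described above boils down to a compare-and-gate step. The C sketch below shows only that control flow; it is a deliberately simplified assumption: real fingerprint matching extracts and scores minutiae rather than comparing raw bytes, and the enrolled template would be fetched from the database server rather than held in a local array.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Toy fingerprint "template": 32 opaque bytes standing in for a real
 * biometric template (illustrative assumption only). */
typedef struct { unsigned char data[32]; } template_t;

static bool fingerprint_matches(const template_t *scanned,
                                const template_t *enrolled)
{
    /* Real systems compute a similarity score; a byte comparison is only
     * a placeholder for the match/no-match decision. */
    return memcmp(scanned->data, enrolled->data, sizeof scanned->data) == 0;
}

int main(void)
{
    template_t enrolled = { { 0x12, 0x34 } };   /* stored at enrolment (assumed) */
    template_t scanned  = { { 0x12, 0x34 } };   /* produced by the scanner       */

    if (fingerprint_matches(&scanned, &enrolled))
        printf("access granted: Money Pad unlocked\n");
    else
        printf("access denied\n");
    return 0;
}
```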

Money Pad - a form of credit card or smart card, similar to a floppy disk, introduced to provide secure e-cash transactions.
Low Power UART Design for Serial Data Communication

Introduction
With the proliferation of portable electronic devices, power-efficient data transmission has become increasingly important. For serial data transfer, universal asynchronous receiver / transmitter (UART) circuits are often implemented because of their inherent design simplicity and application-specific versatility. Components such as laptop keyboards, palm pilot organizers and modems are a few examples of devices that employ UART circuits. In this work, the design and analysis of a robust UART architecture has been carried out to minimize power consumption during both idle and continuous modes of operation.

UART

A UART (universal asynchronous receiver / transmitter) is responsible for performing the main task in serial communications with computers. The device changes incoming parallel information to serial data which can be sent on a communication line. A second UART can be used to receive the information. The UART performs all the tasks needed for the communication, such as timing and parity checking. The only extra devices attached are line driver chips capable of transforming the TTL-level signals to line voltages and vice versa.

To use the device in different environments, registers are accessible to set or review the communication parameters. Settable parameters include, for example, the communication speed, the type of parity check, and the way incoming information is signaled to the running software.

UART types

Serial communication on PC compatibles started with the 8250 UART in the XT. In the years after, new family members were introduced like the 8250A and 8250B revisions and the 16450. The last one was first implemented in the AT. The higher bus speed in this computer could not be reached by the 8250 series. The differences between these first UART series were rather minor. The most important property changed with each new release was the maximum allowed speed at the processor bus side.

The 16450 was capable of handling a communication speed of 38.4 kbit/s without problems. The demand for higher speeds led to the development of newer series which would be able to release the main processor from some of its tasks. The main problem with the original series was the need to perform a software action for each single byte to transmit or receive.

To overcome this problem, the 16550 was released; it contained two on-board FIFO buffers, each capable of storing 16 bytes: one buffer for incoming bytes and one for outgoing bytes.
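A concrete example of the register-level parameters mentioned earlier is the baud-rate divisor of the 8250/16450/16550 family. The C sketch below computes divisors assuming the usual 1.8432 MHz UART input clock of PC compatibles, where the line speed is clock / (16 × divisor).

```c
#include <stdio.h>

/* 8250/16450/16550-family UARTs derive the bit clock by dividing the input
 * clock by 16 * divisor.  On PC compatibles the input clock is normally
 * 1.8432 MHz, so divisor = 1843200 / (16 * baud) = 115200 / baud. */
#define UART_CLOCK_HZ 1843200UL

static unsigned divisor_for(unsigned long baud)
{
    return (unsigned)(UART_CLOCK_HZ / (16UL * baud));
}

int main(void)
{
    unsigned long rates[] = { 2400, 9600, 38400, 115200 };
    for (int i = 0; i < 4; i++)
        printf("%6lu baud -> divisor %u\n", rates[i], divisor_for(rates[i]));
    return 0;
}
```

The computed divisor is what software writes into the UART's divisor latch registers before enabling transmission; a low-power design spends as much time as possible idle between such configured transfers.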
Single Photon Emission Computed Tomography (SPECT)

Introduction
Emission Computed Tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym for Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature, rather than purely anatomical like ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides administered to the patient's body.

SPECT dates from the early 1960s, when the idea of emission transverse section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first commercial single photon ECT or SPECT imaging device was developed by Edwards and Kuhl, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s.

SPECT is short for single photon emission computed tomography. As its name suggests (single photon emission), gamma rays are the source of the information, rather than the X-ray emission used in a conventional CT scan.

Similar to X-ray CT, MRI, etc., SPECT allows us to visualize functional information about a patient's specific organ or body system. Internal radiation is administered by means of a pharmaceutical which is labeled with a radioactive isotope. This pharmaceutical isotope decays, resulting in the emission of gamma rays. These gamma rays give us a picture of what's happening inside the patient's body.

This is done using the most essential tool in nuclear medicine, the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image, or in SPECT imaging to acquire a 3-D image.
Buffer Overflow Attack: A Potential Problem and its Implications

Introduction
Have you ever thought of a buffer overflow attack? It occurs through careless programming and the patchy nature of programs. Many C programs have buffer overflow vulnerabilities because the C language lacks array bounds checking, and the culture of C programmers encourages a performance-oriented style that avoids error checking where possible; e.g. gets and strcpy perform no bounds checking. This paper presents a systematic solution to the persistent problem of buffer overflow attacks. The buffer overflow attack gained notoriety in 1988 as part of the Morris Worm incident on the Internet.

These problems are probably the result of careless programming, and could be corrected by elementary testing or code reviews along the way.
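To make the unchecked-copy pattern concrete, the C sketch below contrasts the kind of unbounded strcpy call mentioned above with a bounded alternative; the 16-byte buffer and the inputs are arbitrary, and the unsafe function is shown only to illustrate the vulnerability class, not taken from any real program.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy copies until the terminating '\0', so input longer than the
 * 16-byte buffer overwrites whatever follows it on the stack
 * (other locals, saved registers, the return address, ...). */
static void vulnerable_copy(const char *input)
{
    char buf[16];
    strcpy(buf, input);              /* no bounds checking */
    printf("copied: %s\n", buf);
}

/* Safer: copy at most sizeof(buf) - 1 bytes and terminate explicitly. */
static void bounded_copy(const char *input)
{
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    printf("copied: %s\n", buf);
}

int main(void)
{
    const char *short_input = "hello";
    bounded_copy(short_input);
    vulnerable_copy(short_input);    /* fine for short input, unsafe in general */
    return 0;
}
```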

The Attack :- A (malicious) user finds a vulnerability in a highly privileged program, and someone else implements a patch to that particular attack on that privileged program. Fixes to buffer overflow attacks attempt to solve the problem at the source (the vulnerable program) instead of at the destination (the stack that is being overflowed).

StackGuard :- It is a simple compiler extension that limits the amount of damage that a buffer overflow attack can inflict on a program. The paper discusses the various intricacies of the problem and the implementation details of the compiler extension 'StackGuard'.

Stack Smashing Attack :- Buffer overflow attacks exploit a lack of bounds checking on the size of input being stored in a buffer array. The most common data structure to corrupt in this fashion is the stack; such an attack is called a "stack smashing attack".

StackGuard For Network Access :- The paper also discusses the impacts on network access to the 'Buffer Overflow Attack'.

StackGuard prevents changes to active return addresses by either:

1. Detecting the change of the return address before the function returns (the "canary" approach, sketched below), or
2. Completely preventing the write to the return address.

MemGuard is a tool developed to help debug optimistic specializations by locating code statements that change quasi-invariant values.
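As mentioned in option 1 above, StackGuard's detection approach places a canary word next to the return address and verifies it before returning. The C sketch below is only a hand-written illustration of that idea: StackGuard emits the check automatically in the function prologue and epilogue, uses random or terminator canaries rather than a fixed constant, and guards the saved return address itself, whereas here the canary is just a local variable whose placement relative to the buffer is compiler-dependent.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hand-written illustration of the canary idea (see lead-in: the fixed
 * canary value and the local-variable layout are simplifying assumptions). */
static const long CANARY = 0x5A5A5A5AL;

static void guarded_copy(const char *input)
{
    long canary = CANARY;      /* stands in for the word next to the return address */
    char buf[16];

    strcpy(buf, input);        /* the unchecked copy being guarded: a long input
                                  would have to overwrite 'canary' on its way
                                  toward the saved return address */

    if (canary != CANARY) {    /* epilogue-style check before returning */
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
    printf("ok: %s\n", buf);
}

int main(void)
{
    guarded_copy("hello");     /* short, benign input */
    return 0;
}
```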

STACKGUARD OVERHEAD

" Canary StackGuard Overhead
" MemGuard StackGuard Overhead
" StackGuard Macrobenchmarks

The paper presents these issues and their implications for IT applications, and discusses the solutions through the implementation details of StackGuard.
Hurd

Introduction
When we talk about free software, we usually refer to the free software licenses. We also need relief from software patents, so our freedom is not restricted by them. But there is a third type of freedom we need, and that's user freedom.

Expert users don't take a system as it is. They like to change the configuration, and they want to run the software that works best for them. That includes window managers as well as your favourite text editor. But even on a GNU/Linux system consisting only of free software, you cannot easily use the filesystem format, network protocol or binary format you want without special privileges. In traditional Unix systems, user freedom is severely restricted by the system administrator.

The Hurd is built on top of CMU's Mach 3.0 kernel and uses Mach's virtual memory management and message-passing facilities. The GNU C Library will provide the Unix system call interface, and will call the Hurd for needed services it can't provide itself. The design and implementation of the Hurd is being led by Michael Bushnell, with assistance from Richard Stallman, Roland McGrath, Jan Brittenson, and others.