Computer Science Seminar Abstract And Report 7

Synthetic Aperture Radar System

Introduction
When a disaster occurs, it is very important to grasp the situation as soon as possible. But it is very difficult to get this information from the ground, because many things, such as clouds and volcanic plumes, prevent us from collecting such important data. With an optical sensor, a large amount of data is shut out by such barriers. In such cases, Synthetic Aperture Radar, or SAR, is a very useful means of collecting data even if the observation area is covered with obstacles or the observation is made at night, because SAR uses microwaves that are radiated by the sensor itself. The SAR sensor can be installed on a satellite, from which the surface of the earth can be observed.

To support the scientific applications utilizing space-borne imaging radar systems, a set of radar technologies has been developed that can dramatically lower the weight, volume, power and data rates of the radar systems. These smaller and lighter SAR systems can be readily accommodated in small spacecraft and launch vehicles, enabling significantly reduced total mission cost. Specific areas of radar technology development include the antenna, RF electronics, digital electronics and data processing. A radar technology development plan is recommended to develop and demonstrate these technologies and integrate them into radar missions in a timely manner. It is envisioned that these technology advances can revolutionize the approach to SAR missions, leading to higher performance systems at significantly reduced mission costs.

The SAR systems are placed on satellites for the imaging process. Microwave satellites register images in the microwave region of the electromagnetic spectrum. Two modes of microwave sensor exist: the active and the passive mode. SAR is an active sensor: it carries on board an instrument that sends a microwave pulse to the surface of the earth and registers the reflections from the surface.

One way of collecting images from space under darkness or cloud cover is to install the SAR on a satellite. As the satellite moves along its orbit, the SAR looks out sideways from the direction of travel, acquiring and storing the radar echoes which return from the strip of the earth's surface under observation.

The raw data collected by SAR are severely unfocused, and considerable processing is required to generate a focused image. This processing has traditionally been done on the ground, which requires a downlink with a high data rate and is time consuming as well. The high data rate of the downlink can be reduced by using a SAR instrument with on-board processing.
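The core of this focusing step is pulse (range) compression: correlating each raw echo line with a replica of the transmitted chirp. The following Python sketch illustrates the idea; all parameter values are hypothetical, and a real processor also performs azimuth compression across pulses.

    import numpy as np

    # Hypothetical chirp parameters, for illustration only.
    fs = 100e6          # sample rate (Hz)
    pulse_len = 10e-6   # chirp duration (s)
    bandwidth = 30e6    # chirp bandwidth (Hz)

    t = np.arange(0, pulse_len, 1 / fs)
    k = bandwidth / pulse_len                  # chirp rate
    chirp = np.exp(1j * np.pi * k * t ** 2)    # replica of the transmitted pulse

    # One raw range line: the chirp buried in noise at some delay.
    echo = np.zeros(4096, dtype=complex)
    delay = 1200
    echo[delay:delay + len(chirp)] += chirp
    echo += 0.1 * (np.random.randn(4096) + 1j * np.random.randn(4096))

    # Matched filter: numpy.correlate conjugates the second argument,
    # so this compresses the smeared echo into a sharp peak.
    compressed = np.correlate(echo, chirp, mode="same")
    print("peak near sample", np.argmax(np.abs(compressed)))

The unfocused echo is spread over the whole pulse length; after correlation the target collapses into a single strong peak, which is also why on-board processing can greatly reduce the volume of data to be downlinked.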
Unlicensed Mobile Access

Introduction
During the past year, mobile and integrated fixed/mobile operators announced an increasing number of fixed-mobile convergence initiatives, many of which are materializing in 2006. The majority of these initiatives are focused around UMA, the first standardized technology enabling seamless handover between mobile radio networks and WLANs. Clearly, in one way or another, UMA is a key agenda item for many operators.

Operators are looking at UMA to address the indoor voice market (i.e. accelerate or control fixed-to-mobile substitution) as well as to enhance the performance of mobile services indoors. Furthermore, these operators are looking at UMA as a means to fend off the growing threat from new Voice-over-IP (VoIP) operators.

However, when evaluating a new 3GPP standard like UMA, many operators ask themselves how well it fits with other network evolution initiatives, including:

o UMTS
o Soft MSCs
o IMS Data Services
o I-WLAN
o IMS Telephony
This whitepaper aims to clarify the position of UMA in relation to these other strategic initiatives. For a more comprehensive introduction to the UMA opportunity, refer to "The UMA Opportunity," available on the Kineto web site (kineto.com).
Mobile Network Reference Model
To best understand the role UMA plays in mobile network evolution, it is helpful to first introduce a reference model for today's mobile networks. Figure 1 provides a simplified model for the majority of 3GPP-based mobile networks currently in deployment. Based on Release 99, they typically consist of the following:
o GSM/GPRS/EDGE Radio Access Network (GERAN): In mature mobile markets, the GERAN typically provides good cellular coverage throughout an operator's service territory and is optimized for the delivery of high-quality circuit-based voice services. While capable of delivering mobile data (packet) services, GERAN data throughput is typically under 80Kbps and network usage cost is high.
o Circuit Core/Services: The core circuit network provides the services responsible for the vast majority of mobile revenues today. The circuit core consists of legacy Serving and Gateway Mobile Switching Centers (MSCs) providing mainstream mobile telephony services as well as a number of systems supporting the delivery of other circuit-based services including SMS, voice mail and ring tones.
o Packet Core/Services: The core packet network is responsible for providing mobile data services. The packet core consists of GPRS infrastructure (SGSNs and GGSNs) as well as a number of systems supporting the delivery of packet-based services including WAP and MMS.

Introducing UMA into Mobile Networks

For mobile and integrated operators, adding UMA to existing networks is not a major undertaking. UMA essentially defines a new radio access network (RAN), the UMA access network. Like GSM/GPRS/EDGE (GERAN) and UMTS (UTRAN) RANs, a UMA access network (UMAN) leverages well-defined, standard interfaces into an operator's existing circuit and packet core networks for service delivery.

However, unlike GSM or UMTS RANs, which utilize expensive private backhaul circuits as well as costly base stations and licensed spectrum for wireless coverage, a UMAN enables operators to leverage their subscribers' existing broadband access connections for backhaul as well as inexpensive WLAN access points and unlicensed spectrum for wireless coverage.
Windows DNA

Introduction
For some time now, both small and large companies have been building robust applications for personal computers, which continue to become ever more powerful and available at increasingly lower cost. While these applications are used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and on the platform on which they develop and deploy them.

The increased presence of Internet technologies is enabling global sharing of information, not only by small and large businesses but by individuals as well. The Internet has sparked new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change place ever-increasing demands on an application platform that lets developers build and rapidly deploy highly adaptive applications in order to gain strategic advantage.

These new Internet applications may need to handle literally millions of users, a scale difficult to imagine just a few short years ago. As a result, applications must cope with user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services that enable the development and management of these new applications.

Introducing Windows DNA: Framework for a New Generation of Computing Solutions

Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.

Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do, but those that do will have a distinct advantage over those that don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?
Laptop Computer

Introduction
Laptops are becoming as common as cellular phones, and they now share the hardware industry with desktop computers, offering a number of configurable options: the features, the price, the build quality, the weight and dimensions, the display, the battery uptime or, for that matter, the ease of the trackball. Earlier there were hardly any configurable options available, but today we have a variety of laptops in different configurations, with the processor and just about anything else you want.

Companies such as Intel, AMD, Transmeta and nVidia, to name only a few, are making laptops both a hype and a reality. Intel and AMD have brought out technologies such as SpeedStep to preserve battery power in laptops.

If you are on the move all the time, you probably need a laptop that will not only enable you to create documents, spreadsheets and presentations, but also let you send and receive e-mail, access the web and maybe even play music CDs or watch a DVD movie to get that much-deserved break. You also need a laptop that is sturdy enough to take the bumps and jolts in its stride while you are on the move.

If, on the other hand, you want a laptop for basic tasks and primarily for mobility, so that your work does not get held up on the occasions that you need to travel, then you do not necessarily need the best in terms of the choice and power of its individual subsystems. Therefore, if the CD-ROM drive or floppy drive is not integrated into the main unit but supplied as an additional peripheral, the frequent traveler would not mind, because the overall weight of the laptop would be significantly lower and easier on the shoulder after a long day of commuting.

History

Alan Kay of the Xerox Palo Alto Research Center originated the idea of a portable computer in the 1970s. Kay envisioned a notebook-sized portable computer called the Dynabook that everyone could own and that could handle all of the user's informational needs. Kay also envisioned the Dynabook with wireless network capabilities. Arguably, the first laptop computer was designed in 1979 by William Moggridge of GRiD Systems Corp. It had 340 kilobytes of bubble memory, a die-cast magnesium case and a folding electroluminescent graphics display screen.
In 1983, Gavilan Computer produced a laptop computer with the following features:

o 64 kilobytes (expandable to 128 kilobytes) of Random Access Memory
o Gavilan operating system (also ran MS-DOS)
o 8088 microprocessor
o Touchpad mouse
o Portable printer
o Weight of 9 lb (4 kg) alone or 14 lb (6.4 kg) with printer
The Gavilan computer had a floppy drive that was not compatible with other computers, and it primarily used its own operating system. The company failed. In 1984, the Apple IIc was a notebook-sized computer but not a true laptop. It had a 65C02 microprocessor, 128KB of memory, an internal 5.25-inch floppy drive, two serial ports, a mouse port, a modem card, an external power supply and a carrying handle.
Intelligent Software Agents

Introduction
Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.

Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time.

If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation. Researchers and software companies have set high hopes on so-called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals.

The average person will have many alter egos, in effect digital proxies, operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries--extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.

Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.

Definition of Intelligent Software Agents

Intelligent Software Agents are a popular research subject these days. Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to make a good estimation of what the possibilities of agent technology are. Moreover, these agents have a wide range of applications, which may significantly affect the definition; hence it is not easy to craft a rock-solid definition that could be generalized for all. However, an informal definition of an intelligent software agent may be given as:

"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."
IP spoofing

Introduction
Criminals have long employed the tactic of masking their true identity, from disguises to aliases to caller-id blocking. It should come as no surprise then, that criminals who conduct their nefarious activities on networks and computers should employ such techniques. IP spoofing is one of the most common forms of on-line camouflage. In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by "spoofing" the IP address of that machine. In the subsequent pages of this report, we will examine the concepts of IP spoofing: why it is possible, how it works, what it is used for and how to defend against it.

Brief History of IP Spoofing

The concept of IP spoofing was initially discussed in academic circles in the 1980s. In the April 1989 article entitled "Security Problems in the TCP/IP Protocol Suite", author S. M. Bellovin of AT&T Bell Labs was among the first to identify IP spoofing as a real risk to computer networks. Bellovin describes how Robert Morris, creator of the now-infamous Internet Worm, figured out how TCP created sequence numbers and forged a TCP packet sequence. This TCP packet included the destination address of his "victim", and using an IP spoofing attack Morris was able to obtain root access to his targeted system without a user ID or password.

Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators. A common misconception is that "IP spoofing" can be used to hide your IP address while surfing the Internet, chatting on-line, sending e-mail, and so forth.

This is generally not true. Forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection. However, IP spoofing is an integral part of many network attacks that do not need to see responses (blind spoofing).
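To make the mechanism concrete, the sketch below shows how a forged source address is written into a packet using the Scapy library in Python. The addresses come from the reserved documentation ranges, and such packets should only ever be generated in an isolated lab network.

    from scapy.all import IP, TCP, send

    # Forge the source address: the SYN appears to come from 192.0.2.10.
    spoofed = IP(src="192.0.2.10", dst="198.51.100.5") / TCP(dport=80, flags="S")
    send(spoofed)

    # Any SYN/ACK reply is sent to 192.0.2.10, not to the real sender,
    # which is exactly why blind spoofing suits only attacks that
    # do not need to see the responses.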
TCP/IP Protocol Suite

Introduction
IP Spoofing exploits the flaws in TCP/IP protocol suite. In order to completely understand how these attacks can take place, one must examine the structure of the TCP/IP protocol suite. A basic understanding of these headers and network exchanges is crucial to the process.

Internet Protocol - IP

The Internet Protocol (or IP, as it is generally known) is the network layer of the Internet. IP provides a connectionless service. The job of IP is to route and send a packet to the packet's destination. IP provides no guarantee whatsoever for the packets it tries to deliver. IP packets are usually termed datagrams.

The datagrams go through a series of routers before they reach the destination. At each node that the datagram passes through, the node determines the next hop for the datagram and routes it to the next hop. Since the network is dynamic, it is possible that two datagrams from the same source take different paths to make it to the destination. Since the network has variable delays, it is not guaranteed that the datagrams will be received in sequence.

IP only tries for a best-effort delivery. It does not take care of lost packets; this is left to the higher layer protocols. There is no state maintained between two datagrams; in other words, IP is connection-less.
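The header of an IP datagram carries exactly what was just described: addressing and routing information, but no sequencing or delivery state. A small Python sketch (field layout per RFC 791; the sample values are hypothetical) makes this visible by unpacking the fixed 20-byte IPv4 header:

    import struct

    def parse_ipv4_header(raw: bytes):
        # The fixed IPv4 header is 20 bytes, big-endian ("network order").
        ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
            struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,
            "header_len": (ver_ihl & 0x0F) * 4,
            "ttl": ttl,
            "protocol": proto,   # 6 = TCP, 17 = UDP
            "src": ".".join(map(str, src)),
            "dst": ".".join(map(str, dst)),
        }

    # A hypothetical header: TTL 64, TCP, 192.0.2.10 -> 198.51.100.5.
    sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                         bytes([192, 0, 2, 10]), bytes([198, 51, 100, 5]))
    print(parse_ipv4_header(sample))

Note that nothing in the header says "this is datagram 3 of 7": any notion of a sequence belongs to the layer above, which is what TCP adds.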
Internet Access via Cable TV Network

Introduction
The Internet is a network of networks in which computers all over the world connect to each other. The connection to other computers is possible with the help of an ISP (Internet Service Provider). Most Internet users depend on dial-up connections to connect to the Internet, which has many disadvantages, such as very poor speed and frequent cut-offs. To solve this problem, Internet data can be transferred through cable networks wired to the user's computer. The different types of connection used are PSTN connections, ISDN connections and Internet via cable networks. The advantages are high availability, high bandwidth at low cost, high-speed data access, always-on connectivity, and so on.

The huge growth in the number of Internet users every year has resulted in traffic congestion on the net, resulting in slower and more expensive Internet access. As cable TV has a strong reach into homes, it is the best medium for providing the Internet to households with faster access at feasible rates.

We are witnessing an unprecedented demand from residential and business customers, especially in the last few years, for access to the Internet, corporate intranets and various online information services. The Internet revolution is sweeping the country with a burgeoning number of the Internet users. As more and more people are being attracted towards the Internet, traffic congestion on the Net is continuously increasing due to limited bandwidths resulting in slower and expensive Internet access.

The number of households getting on the Internet has increased exponentially in the recent past. First-time Internet users are amazed at the Internet's richness of content and personalization, never before offered by any other medium. But this initial awe lasts only until they experience the slow speed of Internet content delivery; hence the popular reference "World Wide Wait" (not World Wide Web). There is a pent-up demand for high-speed (or broadband) Internet access for fast web browsing and more effective telecommuting.

India has a cable penetration of 80 million homes, offering a vast network for leveraging the internet access. Cable TV has a strong reach to the homes and therefore offering the Internet through cable could be a scope for furthering the growth of internet usage in the homes.

Cable is an alternative medium for delivering Internet services. In the US there are already a million homes with cable modems, enabling high-speed Internet access over cable. In India, we are in the initial stages: we are experiencing innumerable local problems in Mumbai, Bangalore and Delhi, along with an acute shortage of international Internet connectivity.

Accessing the Internet over the public switched telephone network (PSTN) still has a lot of problems, such as drop-outs. It takes a long time to download or upload large files, and one has to pay both for the Internet connectivity and for telephone usage during that period. Since it is technically possible to offer higher bandwidth over cable, home as well as corporate users may well like it. Many people cannot afford a PC at their premises, and hardware obsolescence is the main problem for the home user, who cannot afford to upgrade his PC every year. Cable TV based ISP solutions offer an economic alternative.
IDS

Introduction
A correct firewall policy can minimize the exposure of many networks; however, firewalls are quite useless against attacks launched from within. Hackers are also evolving their attacks and network subversion methods. These techniques include email-based Trojans, stealth scanning techniques, malicious code and actual attacks which bypass firewall policies by tunneling access over allowed protocols such as ICMP, HTTP and DNS. Hackers are also very good at creating and releasing malware for the ever-growing list of application vulnerabilities, to compromise the few services that are let through by a firewall.

An IDS arms your business against attacks by continuously monitoring network activity, ensuring all activity is normal. If the IDS detects malicious activity, it responds immediately by destroying the attacker's access and shutting down the attack. The IDS reads network traffic and looks for patterns of attacks, or signatures; if a signature is identified, the IDS sends an alert to the management console and a response is immediately deployed.
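The signature-matching step can be pictured in a few lines of Python; the signatures and the alerting hook below are simplified placeholders, whereas a production IDS uses large, regularly updated rule sets:

    # Hypothetical byte-pattern signatures mapped to attack names.
    SIGNATURES = {
        b"cmd.exe": "possible IIS command execution",
        b"/etc/passwd": "possible path traversal",
    }

    def inspect(payload: bytes):
        for pattern, name in SIGNATURES.items():
            if pattern in payload:
                print(f"ALERT: {name}")  # stand-in for alerting the console

    inspect(b"GET /scripts/..%255c../winnt/system32/cmd.exe HTTP/1.0")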

What is intrusion?

An intrusion is somebody attempting to break into or misuse your system. The word "misuse" is broad, and can reflect anything from something as severe as stealing confidential data to something as minor as misusing your email system for spam.

What is an IDS?

An IDS performs real-time monitoring of network/system activity and analysis of the data for potential vulnerabilities and attacks in progress.

Need For IDS

Who is attacked?

Internet Information Services (IIS) web servers - which host web pages and serve them to users - are highly popular among business organizations, with over 6 million such servers installed worldwide. Unfortunately, IIS web servers are also popular among hackers and malicious fame-seekers - as a prime target for attacks!

As a result, every so often, new exploits emerge which endanger your IIS web server's integrity and stability. Many administrators have a hard time keeping up with the various security patches released for IIS to cope with each new exploit, making it easy for malicious users to find a vulnerable web server on the Internet. There are multiple issues which can completely endanger your Web server - and possibly your entire corporate network and reputation.

People feel there is nothing on their system that anybody would want. But what they are unaware of is the issue of legal liability. You are potentially liable for damages caused by a hacker using your machine. You must be able to prove to a court that you took "reasonable" measures to defend yourself from hackers. For example, suppose you put a machine on a fast link (cable modem or DSL) and left the administrator/root accounts open with no password. If a hacker then breaks into that machine and uses it to break into a bank, you may be held liable because you did not take the most obvious measures to secure the machine.
10 Gigabit Ethernet

Introduction
From its origin more than 25 years ago, Ethernet has evolved to meet the increasing demands of packet-switched networks. Due to its proven low implementation cost, its known reliability, and relative simplicity of installation and maintenance, its popularity has grown to the point that today nearly all traffic on the Internet originates or ends with an Ethernet connection. Further, as the demand for ever-faster network speeds has grown, Ethernet has been adapted to handle these higher speeds and the concomitant surges in volume demand that accompany them.

The One Gigabit Ethernet standard is already being deployed in large numbers in both corporate and public data networks, and has begun to move Ethernet from the realm of the local area network out to encompass the metro area network. Meanwhile, an even faster 10 Gigabit Ethernet standard is nearing completion. This latest standard is being driven not only by the increase in normal data traffic but also by the proliferation of new, bandwidth-intensive applications.

The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily in that it will only function over optical fiber and only operate in full-duplex mode, meaning that collision-detection protocols are unnecessary. Ethernet can now step up to 10 gigabits per second; however, it remains Ethernet, including the packet format, and current capabilities are easily transferable to the new draft standard.
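That unchanged packet format is the familiar Ethernet frame: destination MAC address, source MAC address and EtherType, followed by the payload. A minimal Python sketch of the 14-byte Ethernet II header (the addresses here are made up):

    import struct

    def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
        # 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
        return struct.pack("!6s6sH", dst, src, ethertype) + payload

    frame = build_frame(bytes(6),                      # all-zero placeholder destination
                        b"\x02\x00\x00\x00\x00\x01",   # hypothetical source MAC
                        0x0800,                        # EtherType: IPv4
                        b"payload...")
    print(len(frame), "bytes")   # 14-byte header plus payload

Whether this frame leaves the machine at 10 Mbps or 10 Gbps, its layout is the same, which is what lets existing tools and investments carry over.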

In addition, 10 Gigabit Ethernet does not obsolete current investments in network infrastructure. The task force heading the standards effort has taken steps to ensure that 10 Gigabit Ethernet is interoperable with other networking technologies such as SONET. The standard enables Ethernet packets to travel across SONET links with very little inefficiency.

Ethernet's expansion for use in metro area networks can now be expanded yet again onto wide area networks, both in concert with SONET and also end-to-end Ethernet. With the current balance of network traffic today heavily favoring packet-switched data over voice, it is expected that the new 10 Gigabit Ethernet standard will help to create a convergence between networks designed primarily for voice, and the new data centric networks.

10 Gigabit Ethernet Technology Overview

The 10 Gigabit Ethernet Alliance (10GEA) was established in order to promote standards-based 10 Gigabit Ethernet technology and to encourage the use and implementation of 10 Gigabit Ethernet as a key networking technology for connecting various computing, data and telecommunications devices.

The charter of the 10 Gigabit Ethernet Alliance includes:

o Supporting the 10 Gigabit Ethernet standards effort conducted in the IEEE 802.3 working group
o Contributing resources to facilitate convergence and consensus on technical specifications
o Promoting industry awareness, acceptance, and advancement of the 10 Gigabit Ethernet standard
o Accelerating the adoption and usage of 10 Gigabit Ethernet products and services
o Providing resources to establish and demonstrate multi-vendor interoperability, and generally encouraging and promoting interoperability events
Tripwire

Introduction
Tripwire is a reliable intrusion detection system. It is a software tool that checks to see what has changed in your system. It mainly monitors the key attributes of your files; by key attributes we mean the binary signature, size and other related data. Security and operational stability must go hand in hand: if the user does not have control over the various operations taking place, then naturally the security of the system is also compromised. Tripwire has a powerful feature which pinpoints the changes that have taken place, notifies the administrator of these changes, determines the nature of the changes and provides the information needed for deciding how to manage the change.

Tripwire integrity management solutions monitor changes to vital system and configuration files. Any changes that occur are compared to a snapshot of the established good baseline. The software detects the changes, notifies the staff and enables rapid recovery and remedy. All Tripwire installations can be centrally managed, and the software's cross-platform functionality enables you to manage thousands of devices across your infrastructure.

Security not only means protecting your system against various attacks but also taking quick and decisive action when your system is attacked. First of all, we must find out whether our system has been attacked; for this, system logs have traditionally been handy. You can see evidence of password guessing and other suspicious activities, and logs are ideal for tracing the steps of the cracker as he tries to penetrate the system. But who has the time and the patience to examine the logs on a daily basis?

Penetration usually involves a change of some kind, such as a new port or a new service being opened; the most common change of all is that a file has changed. If we can identify the key subset of such files and monitor them on a daily basis, then we will be able to detect whether an intrusion took place. Tripwire is an open source program created to monitor changes in a key subset of files identified by the user and to report on changes in any of those files. When changes are detected, the system administrator is informed. Tripwire's principle is very simple: the system administrator identifies key files and causes Tripwire to record checksums for those files.

He also puts in place a cron job whose task is to scan those files at regular intervals (daily, or more frequently), comparing them to the original checksums. Any changes, additions or deletions are reported to the administrator, who can then determine whether the changes were permitted or unauthorized. In the former case the database is updated so that the same change is not flagged again in future; in the latter case, proper recovery action is taken immediately.
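A minimal Python sketch of this record-and-compare cycle (the file list and database path are hypothetical; the real Tripwire also records sizes, permissions and other attributes, and cryptographically protects its database):

    import hashlib, json, os

    KEY_FILES = ["/etc/passwd", "/etc/hosts"]   # hypothetical key subset
    BASELINE = "baseline.json"

    def checksum(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def record_baseline():
        db = {p: checksum(p) for p in KEY_FILES if os.path.exists(p)}
        with open(BASELINE, "w") as f:
            json.dump(db, f)

    def verify():
        with open(BASELINE) as f:
            db = json.load(f)
        for path, saved in db.items():
            current = checksum(path) if os.path.exists(path) else "MISSING"
            if current != saved:
                print(f"VIOLATION: {path} changed")   # report to the administrator

    record_baseline()   # run once to establish the good baseline
    verify()            # run from cron at regular intervals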

Tripwire For Servers

Tripwire for Servers is software used exclusively on servers. It can be installed on any server that needs to be monitored for changes. Typical servers include mail servers, web servers, firewalls, transaction servers and development servers. Any server where it is imperative to identify if and when a file system change has occurred should be monitored with Tripwire for Servers. For the Tripwire for Servers software to work, two important things must be present: the policy file and the database.

The Tripwire for Servers software conducts subsequent file checks, automatically comparing the state of the system with the baseline database. Any inconsistencies are reported to the Tripwire Manager and to the host system log file. Reports can also be emailed to an administrator. If a violation is an authorized change, a user can update the database so the change no longer shows up as a violation.
Ubiquitous Networking

Introduction
Mobile computing devices have changed the way we look at computing. Laptops and personal digital assistants (PDAs) have unchained us from our desktop computers. A group of researchers at AT&T Laboratories Cambridge are preparing to put a new spin on mobile computing. In addition to taking the hardware with you, they are designing a ubiquitous networking system that allows your program applications to follow you wherever you go.

By using a small radio transmitter and a building full of special sensors, your desktop can be anywhere you are, not just at your workstation. At the press of a button, the computer closest to you in any room becomes your computer for as long as you need it. In addition to computers, the Cambridge researchers have designed the system to work for other devices, including phones and digital cameras. As we move closer to intelligent computers, they may begin to follow our every move.

The essence of mobile computing is that a user's applications are available, in a suitably adapted form, wherever that user goes. Within a richly equipped networked environment such as a modern office the user need not carry any equipment around; the user-interfaces of the applications themselves can follow the user as they move, using the equipment and networking resources available. We call these applications Follow-me applications.


Context-Aware Application

A context-aware application is one which adapts its behaviour to a changing environment. Other examples of context-aware applications are 'construction-kit computers' which automatically build themselves by organizing a set of proximate components to act as a more complex device, and 'walk-through videophones' which automatically select streams from a range of cameras to maintain an image of a nomadic user. Typically, a context-aware application needs to know the location of users and equipment, and the capabilities of the equipment and networking infrastructure. In this paper we describe a sensor-driven, or sentient, computing platform that collects environmental data, and presents that data in a form suitable for context-aware applications.

The platform we describe has five main components:

1. A fine-grained location system, which is used to locate and identify objects.
2. A detailed data model, which describes the essential real world entities that are involved in mobile applications.
3. A persistent distributed object system, which presents the data model in a form accessible to applications.
4. Resource monitors, which run on networked equipment and communicate status information to a centralized repository.
5. A spatial monitoring service, which enables event-based location-aware applications.
Finally, we describe an example application to show how this platform may be used.
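As a flavour of how the pieces fit together, the Python sketch below imitates component 5, the spatial monitoring service: a callback fires when a located user enters a named region, which is exactly the trigger a follow-me application needs. The region names, coordinates and callback are all hypothetical.

    # Regions as axis-aligned rectangles: (x1, y1, x2, y2) in metres.
    REGIONS = {"meeting_room": (0.0, 0.0, 5.0, 4.0)}

    def on_enter(user, region):
        # A follow-me application would migrate the user's interface here.
        print(f"{user} entered {region}: move their desktop to this room")

    def update_location(user, x, y):
        # Called by the fine-grained location system for each new sighting.
        for name, (x1, y1, x2, y2) in REGIONS.items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                on_enter(user, name)

    update_location("alice", 2.5, 1.0)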
Unicode And Multilingual Computing

Introduction
Unicode provides a unique number for every character,
no matter what the platform,
no matter what the program,
no matter what the language.
Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.
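The problem is easy to demonstrate in Python: a single byte value decodes to different characters under different legacy encodings, while Unicode assigns each character one unique number regardless of platform or program:

    # 0xE4 is 'ä' in ISO 8859-1 but the Cyrillic 'д' in Windows-1251.
    b = bytes([0xE4])
    print(b.decode("latin-1"))   # ä
    print(b.decode("cp1251"))    # д

    # Unicode gives 'ä' the single number U+00E4, however it is stored.
    s = "ä"
    print(ord(s))                # 228 (0xE4), the unique code point
    print(s.encode("utf-8"))     # b'\xc3\xa4', one common byte encoding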

These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption. This paper is intended for software developers interested in support for the Unicode standard in the Solaris 7 operating environment.
It discusses the following topics:

o An overview of multilingual computing, and how Unicode and the internationalization framework in the Solaris 7 operating environment work together to achieve this aim
o The Unicode standard and support for it within the Solaris operating environment
o Unicode in the Solaris 7 operating environment
o How developers can add Unicode support to their applications
o Codeset conversions
Unicode And Multilingual Computing

It is not a new idea that today's global economy demands global computing solutions. Instant communications and the free flow of information across continents - and across computer platforms - characterize the way the world has been doing business for some time. The widespread use of the Internet and the arrival of electronic commerce (e-commerce) together offer companies and individuals a new set of horizons to explore and master. In the global audience, there are always people and businesses at work - 24 hours of the day, 7 days a week. So global computing can only grow.

What is new is the increasing demand of users for a computing environment that is in harmony with their own cultural and linguistic requirements. Users want applications and file formats that they can share with colleagues and customers an ocean away, application interfaces in their own language, and time and date displays that they understand at a glance. Essentially, users want to write and speak at the keyboard in the same way that they always write and speak. Sun Microsystems addresses these needs at various levels, bringing together the components that make possible a truly multilingual computing environment.
XML Encryption

Introduction
As XML becomes a predominant means of linking blocks of information together, there is a requirement to secure specific information. That is to allow authorized entities access to specific information and prevent access to that specific information from unauthorized entities. Current methods on the Internet include password protection, smart card, PKI, tokens and a variety of other schemes. These typically solve the problem of accessing the site from unauthorized users, but do not provide mechanisms for the protection of specific information from all those who have authorized access to the site.

Now that XML is being used to provide searchable and organized information there is a sense of urgency to provide a standard to protect certain parts or elements from unauthorized access. The objective of XML encryption is to provide a standard methodology that prevents unauthorized access to specific information within an XML document.
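In spirit, XML encryption replaces just the sensitive element with an encrypted placeholder while the rest of the document remains readable and searchable. The Python sketch below conveys that idea only: the real W3C standard defines a richer <EncryptedData> element with algorithm and key information, and the Fernet cipher here is merely a stand-in.

    import xml.etree.ElementTree as ET
    from cryptography.fernet import Fernet   # third-party 'cryptography' package

    key = Fernet.generate_key()
    doc = ET.fromstring(
        "<order><item>book</item><cardnumber>4111111111111111</cardnumber></order>")

    # Encrypt only the sensitive element's content.
    card = doc.find("cardnumber")
    token = Fernet(key).encrypt(card.text.encode())
    card.clear()
    card.tag = "EncryptedData"   # simplified placeholder element
    card.text = token.decode()

    print(ET.tostring(doc, encoding="unicode"))

Only holders of the key can recover the card number; everyone else can still read and search the rest of the order.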

XML (Extensible Markup Language) was developed by an XML Working Group (originally known as the SGML Editorial Review Board) formed under the auspices of the World Wide Web Consortium (W3C) in 1996. Even though HTML, DHTML and SGML already existed, XML was developed by the W3C to achieve the following design goals:

" XML shall be straightforwardly usable over the Internet.
" XML shall be compatible with SGML.
" It shall be easy to write programs, which process XML documents.
" The design of XML shall be formal and concise.
" XML documents shall be easy to create.

XML was created so that richly structured documents could be used over the web. The alternatives, HTML and SGML, are not practical for this purpose. HTML comes bound with a set of semantics and does not provide arbitrary structure. Even though SGML provides arbitrary structure, it is too difficult to implement just for a web browser; SGML is so comprehensive that only large corporations can justify the cost of its implementation.

The eXtensible Markup Language, abbreviated XML, describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. Thus XML is a restricted form of SGML.

A data object is an XML document if it is well-formed, as defined in the specification. A well-formed XML document may in addition be valid if it meets certain further constraints. Each XML document has both a logical and a physical structure. Physically, the document is composed of units called entities. An entity may refer to other entities to cause their inclusion in the document. A document begins in a "root" or document entity.