#1

Tele-Graffiti (A Camera-Projector Based Remote Sketching System with Hand-Based User Interface and Automatic Session Summarization)

Abstract


One way to build a remote sketching system is to use a video camera to image what each user draws at their site, transmit the video to the other sites, and display it there using an LCD projector. To make such a system usable, however, the users have to be able to move the paper on which they are drawing, they have to be able to interact with the system using a convenient interface, and sketching sessions must be stored in a compact format so that they can be replayed later. Tele-Graffiti is a newly developed remote sketching system with the following three features: (1) real-time paper tracking to allow the users to move their paper during system operation, (2) a hand based user interface, and (3) automatic session summarization and playback. In this paper, the design, implementation, and performance of Tele-Graffiti are described.

1. Introduction

There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing colour means using the computer to select a new colour rather than using a different pen. Incorporating existing hard-copy documents such as a graded exam is impossible. Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the video to the other end, and display it there using a projector. Figure 1 shows the schematic of such a system. The first such camera-projector based remote sketching system was Pierre Wellner's Xerox Double DigitalDesk.

Figure 1: Schematic diagram of a 2-site camera-projector based remote sketching system.


Tele-Graffiti is a system allowing two or more users to communicate remotely via hand-drawn sketches. What one person writes at one site is captured using a video camera, transmitted to the other site(s), and displayed there using an LCD projector. Tele-Graffiti is a camera-projector based remote sketching system that has been recently designed and implemented. Although the Xerox Double DigitalDesk is an elegant idea, there are a number of technical problems that need to be solved to implement such a real-time system and to make it both practical and usable.

Real-Time Paper Tracking:


The users of a remote sketching system will want to move the pieces of paper on which they are writing during operation of the system. To allow this, a camera-projector based remote sketching system must track the paper in real time. Such functionality is not available in current systems.

Video Transmission:


To allow real-time interaction between two people, it is also necessary that the video transmission between the remote sites be fast enough for smooth communication.

Providing a Suitable User Interface:


A remote sketching system would be much more useful if it could be controlled without using the keyboard or mouse. Instead it would be best if the user could control the system just using their hands.

Sketch Summarization:


While remote sketching systems help users to communicate through their sketches, such a system would be much more useful if it had functions for sketch summarization, recording, and replay.

Overall, the system software, including paper tracking, user interface processing, and summarization, runs at 30Hz on a PC with dual 450MHz Pentium-II processors. The video quality depends on the network bandwidth. With typical network parameters a video rate of around 10 frames per second is achieved. In this paper we describe the design and implementation of Tele-Graffiti.

2. Tele-Graffiti Hardware and Calibration
In this section we briefly describe the Tele-Graffiti hardware and the geometric calibration of the camera and projector.

2.1. Hardware


A schematic diagram of a 2-site Tele-Graffiti system is contained in Figure 1. Figure 2 shows photos of 2 real Tele-Graffiti systems. Each Tele-Graffiti site contains the following components:

Computer:


Each Tele-Graffiti site has a PC with dual 450MHz Pentium-II processors, an NVidia GeForce2 GTS video card, and an OrangeLink Firewire (IEEE1394) card.


Projector:

We use an XGA resolution (1024*768 pixels) Panasonic PT-L701U LCD projector.

Camera:


We use a Firewire (IEEE1394) Sony DFW-VL500 camera, which we run at VGA (640*480 pixels) resolution.

Stand:


We constructed 2 different prototype stands to hold the camera and projector in a compact configuration. In Figure 2(a) the projector is mounted horizontally on a supporting plate, while in Figure 2(b) it is mounted vertically on the pillar.

Network:


The two Tele-Graffiti sites are connected by a local-area network. We have experimented with running the system over both 100Base-T and 10Base-T networks.

Figure 2: Photos of two prototype Tele-Graffiti stands. (a) Projector mounted horizontally on a supporting plate. (b) Projector mounted vertically on the pillar.



2.2. Geometric Calibration of the Camera-Projector Relationship


We need to warp the video so that when it is displayed by the projector, it appears correctly aligned with the paper. We therefore need to know the relationship between camera coordinates (xc, yc) and projector coordinates (xp, yp). We assume that the projector follows the same perspective imaging model as the camera (with the light rays in the reverse direction). Assuming the paper is planar, the relationship between camera and projector coordinates is:

    (xp, yp, 1)^T ~ Hpc (xc, yc, 1)^T        (1)

where Hpc is a 3*3 homography and ~ denotes equality up to scale. Since Hpc doesn't change if the paper remains in the same plane (i.e. the paper stays on the desktop), Hpc can be computed at system startup. This constant value of Hpc is precomputed by: (1) projecting a rectangular image with known corner locations onto the desktop, (2) capturing an image of this calibration image, (3) locating the vertices in the captured image using the paper tracking algorithm described in Section 3.3 (since there is no clipboard, we give the system enough prior information to break the four-fold orientation ambiguity), and (4) solving for Hpc using Equation (1) and the 4 pairs of projector-camera coordinates.
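Step (4), recovering Hpc from the four vertex correspondences, can be sketched in a few lines. The following is an illustrative pure-Python Direct Linear Transform that fixes the bottom-right entry of Hpc to 1 (valid whenever that entry is non-zero); the function names are ours, not part of Tele-Graffiti.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting on an n x n system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_4_points(cam_pts, proj_pts):
    # Direct Linear Transform: each correspondence (x, y) -> (u, v)
    # contributes two linear equations in the 8 unknown entries of Hpc
    # (the ninth entry is normalized to 1).
    A, b = [], []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, x, y):
    # Map (x, y) through H and divide by the homogeneous coordinate.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

With more than four correspondences one would solve in a least-squares sense instead, but four exact corner pairs determine Hpc directly.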

3. Tele-Graffiti System Software


Each Tele-Graffiti site must continuously capture video from the camera, track the paper in the video, warp the video from the other site so that it is aligned with the paper, and communicate with the other site. The following describes the system software that performs all of these tasks.

3.1. System Architecture

Tele-Graffiti runs under Linux (RedHat 7.1) and consists of 4 threads: the Drawing thread, the Paper Detection thread, the Sending thread (the Sending thread is actually implemented in a second process due to TCP/IP constraints), and the Receiving thread. Figure 3 shows the diagram of the 4 threads.


The 4 threads share the following data:

Image to Display:


The latest image received from the remote site. It is primarily shared between the Receiving thread and the Drawing thread.

Remote Paper Vertices:


The estimated paper vertices in the image to display.

Image to Send:


The image to send is a YUV image which is primarily shared between the Paper Detection thread and the Sending thread. It is a sub-image of the captured (640*480) image which is just large enough to include the detected paper.

Local Paper Vertices:
The estimated paper vertices in the captured (640*480) image.

In the following sections, each thread is described in detail.

3.2. Drawing Thread

The Drawing thread continuously warps and draws the image to display. The drawn image is output to the projector simply by plugging the monitor output of the PC into the projector. A dual-headed video card could be used instead. The Drawing thread waits for updates to the image to display and, upon update, copies the new image into the OpenGL texture buffer. This thread also waits for changes to the local paper vertices. Whenever this occurs, the Drawing thread redraws (re-maps) the texture on the screen with the new local paper vertices.

3.3. Paper Detection Thread: Paper Tracking


The Paper Detection thread continuously does the following:
1. Grabs an image from the camera.
2. Detects or tracks the paper. See below for the details of the paper tracking algorithm.
3. Updates the image to send and the local paper vertices. Updating the image to send is done by cropping the grabbed image according to the estimated paper vertices.
4. Notifies the Drawing thread of the update.



3.4. Communication between Tele-Graffiti Sites
3.4.1. The Sending and Receiving Threads


Sending and receiving are conducted simultaneously. Each Tele-Graffiti site opens two TCP/IP sockets, one of which is dedicated to sending and the other to receiving. For this reason, we have two communications threads, the Sending thread and the Receiving thread. Moreover, since it appears that Linux doesn't allow one process to both receive and send on TCP/IP sockets at the same time (even in different threads), we implemented the Sending thread as a separate process, rather than just as another thread. The details of the communications threads are as follows:

Sending Thread:

The Sending thread continuously converts the most recent image at this site (the image to send) from YUV to RGB, compresses it, and transmits it to the other site along with the estimated local paper vertices. As the paper detection cycle (30Hz) is faster than the typical image sending cycle, multiple updates to the image to send can occur during one transmission; the Sending thread therefore transmits only the most recent image when it starts a transmission. This image is copied to a buffer to avoid it being overwritten.
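The "transmit only the most recent image" policy amounts to a single-slot mailbox that the Paper Detection thread overwrites at 30Hz and the slower Sending thread drains. A minimal sketch (the class and method names are hypothetical, not Tele-Graffiti's):

```python
import copy
import threading

class LatestFrameSlot:
    """Single-slot mailbox: the producer overwrites undelivered frames,
    so the consumer always gets the most recent one."""
    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None

    def put(self, frame):
        with self._cond:
            self._frame = frame          # older undelivered frames are dropped
            self._cond.notify()

    def take(self):
        # Block until a frame is available, then hand back a private copy
        # so the producer can keep overwriting the slot.
        with self._cond:
            while self._frame is None:
                self._cond.wait()
            frame = copy.deepcopy(self._frame)
            self._frame = None
            return frame
```

The copy in `take()` plays the role of the send buffer described above: once the Sending thread has its snapshot, paper detection can update the slot freely during the transmission.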

Receiving Thread:

This thread waits for the arrival of images and paper vertices from the other Tele-Graffiti site. Upon arrival, the Receiving thread decompresses the image, updates the image to display and the remote paper vertices, and notifies the Drawing thread of the update. Figure 4 shows the simple communication protocol.



3.5. Software System Timing Performance

Timing results for the Drawing thread and the Paper Detection thread are shown in Tables 1 and 2. On dual 450MHz Pentium PCs, these threads operate (comfortably) at 30Hz. Timing results for the communications threads are shown in Table 3 (Sending thread) and Table 4 (Receiving thread). The computational cost of compressing and decompressing the image is a substantial proportion of this time. On faster PCs, the transmission threads could run far faster. In these tables, all timing results are measured in real time, not in CPU time. Also note that the steps marked with * (asterisk) spend most of their time idling (i.e. waiting for events or resources) rather than actually computing. Table 1: Timing results for the Drawing thread. The Drawing thread operates at over 30Hz and even so most of the cycle is spent idling, waiting for the update to the shared data.


Table 2: Timing results for the Paper Detection thread. Paper detection operates comfortably at 30Hz. Around 10msec per loop is spent idling waiting for image capture. We also include an estimate of the average number of CPU operations spent for each of the 640*480=307K pixels in the captured image.


Table 3: Timing results for the Sending thread (with JPEG compression over a 100Base-T network). Overall the Sending thread operates at 8Hz. Approximately the same results are obtained over 10Base-T networks.


Table 4: Timing results for the Receiving thread (with JPEG compression over a 100Base-T network). The Receiving thread operates at 8Hz. Approximately the same results are obtained over 10Base-T networks.




4. A Hand-Based User Interface

We have implemented a user interface for Tele-Graffiti. We use two kinds of triggers to activate user interface functions: hand over paper events and interactions of the hand with user interface objects (UI objects). Hand over paper events occur when the system determines that the user has just placed their hand over the paper or when they have just retracted their hand from over the paper. UI objects are rectangular shaped projections in the work area which enable users to invoke predefined commands using their hands. Currently we have implemented two types of UI objects: Button and Slider. Button objects are used to toggle a mode. Slider objects are used to select a value from a range.

4.1. Hand Tracking

There are several possible ways of tracking the user's hands. One possibility is to use a color-based hand tracking algorithm. We chose not to use a color-based algorithm because such algorithms are generally not sufficiently robust to variation in the color of the user's hand and in the lighting conditions. An additional complication is that some LCD projectors time-multiplex color. Without synchronizing the projector and the camera, the color of the hands varies from frame to frame. Another way of detecting the user's hands is to use an infrared camera. The color projected by the projector is then not important because the camera primarily images only the heat radiated by the user's hands. We base our hand tracking algorithm on background subtraction because it is fast, robust, and does not require any special hardware. Figure 12 contains an overview of our algorithm.



4.1.1. Background Subtraction

We have to consider the following problems during background subtraction:
1. The paper and clipboard are on the desktop and they may move from frame to frame.
2. Because we are using the camera's auto gain control (AGC), the background intensity varies depending on what else is placed on the desktop. For example, when the paper is placed on the desktop the average intensity increases. As a result, the AGC reduces the gain of the camera and so the intensity of the background is reduced.

To solve these problems, our background subtraction algorithm includes the following steps.

Prepare the Background Image: A grayscale background image is captured at system startup before anything has been placed on the desktop. The average intensity is calculated from the background image. See Figure 3(a) for an example background image.

Create a Mask for the Paper and Clipboard: Create a mask which covers the paper and clipboard. This mask is used in background subtraction. See Figure 3(c) for an example.

Calculate the Average Background Intensity: In order to compensate for the difference in background intensity caused by the camera's AGC, calculate the average intensity of the current image, excluding the paper and clipboard area. Either the mean or the median could be used.
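The steps above can be sketched as a single function. This is a simplified illustration on plain 2-D lists; the difference threshold of 30 grey levels is our assumption, not a value from the paper.

```python
def background_subtract(current, background, paper_mask, threshold=30):
    """Gain-compensated background subtraction.

    current, background: 2-D lists of grayscale values.
    paper_mask: True where the paper/clipboard is (excluded everywhere).
    Returns a binary image D with 1 where a foreground object (the hand)
    is detected."""
    h, w = len(current), len(current[0])
    unmasked = [(r, c) for r in range(h) for c in range(w)
                if not paper_mask[r][c]]
    bg_mean = sum(background[r][c] for r, c in unmasked) / len(unmasked)
    cur_mean = sum(current[r][c] for r, c in unmasked) / len(unmasked)
    gain = bg_mean / cur_mean if cur_mean else 1.0   # undo the camera's AGC
    return [[0 if paper_mask[r][c]
             else int(abs(current[r][c] * gain - background[r][c]) > threshold)
             for c in range(w)] for r in range(h)]
```

Scaling the current frame so that its unmasked mean matches the background mean is one way to realize the AGC compensation described above; the paper's exact formula may differ.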




Figure 3(a)-(f): intermediate images of the hand tracking algorithm (images not reproduced here; see the original PDF).

4.1.2. Computing Connected Components

Next, we find the connected components in D(x,y), the result of background subtraction. We use 4-connectedness. Each connected component is given a unique label in [1, 255] and the result is represented as another grayscale image in which the pixel intensity is the label of the component to which the pixel belongs. Figure 3(e) shows an example result.
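Four-connected labeling can be sketched as a breadth-first flood fill; the paper stores the labels in a grayscale image (hence the [1, 255] range), while this illustration simply uses integers:

```python
from collections import deque

def label_components(D):
    """4-connected labeling of a binary image; returns (labels, n) where
    0 is background and 1..n index the components."""
    h, w = len(D), len(D[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for r in range(h):
        for c in range(w):
            if D[r][c] and not labels[r][c]:
                next_label += 1
                labels[r][c] = next_label
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    # 4-neighbourhood: up, down, left, right
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and D[ny][nx] \
                                and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```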

4.1.3. Determining the Hand Component

We determine which of the components found in the previous step is the hand component using the knowledge that the user's hand:
· should always touch the image boundary, and
· is larger than most of the other objects on the desktop (except for the paper and clipboard, which are eliminated using the paper mask).

Thus, the algorithm to determine the hand component is to find the largest component that intersects the camera image boundary. See Figure 3(f) for an example result.

4.1.4. Hand Tip Localization

Once the hand component has been determined, we then determine its tip, i.e. where it is pointing. An X is displayed in the work area at the detected tip location to show the user that the hand has been detected correctly. The hand tip is located with the algorithm below. See also Figure 4 for an illustration of the algorithm.
1. Compute the direction of the hand from the following two points: (a) the center edge pixel, i.e. the mean of the points where the hand component touches the camera image boundary, and (b) the center pixel, i.e. the mean of the XY coordinates of the hand component pixels.
2. Find the farthest pixel from the center edge pixel in the hand component in the hand direction.

See Figures 3(f) and 4(b) for examples of hand tip localization.
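The two-step tip search can be sketched as follows; pixels are (x, y) tuples and the "farthest in the hand direction" test is implemented as the largest projection onto the direction vector, which is our reading of the algorithm:

```python
def hand_tip(labels, hand_label):
    """Return the (x, y) tip of the hand component."""
    h, w = len(labels), len(labels[0])
    pixels = [(x, y) for y in range(h) for x in range(w)
              if labels[y][x] == hand_label]
    # (a) centre edge pixel: mean of the boundary-touching pixels.
    edge = [(x, y) for x, y in pixels if y in (0, h - 1) or x in (0, w - 1)]
    ex = sum(x for x, _ in edge) / len(edge)
    ey = sum(y for _, y in edge) / len(edge)
    # (b) centre pixel: mean of all component pixels.
    cx = sum(x for x, _ in pixels) / len(pixels)
    cy = sum(y for _, y in pixels) / len(pixels)
    dx, dy = cx - ex, cy - ey                      # hand direction
    # Tip: farthest component pixel from the edge pixel along (dx, dy).
    return max(pixels, key=lambda p: (p[0] - ex) * dx + (p[1] - ey) * dy)
```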



4.2. Hand Over Paper Detection

In general, it is hard to determine whether the user is drawing or not. We assume that if the user has their hand over the paper, they are drawing; if their hand is away from the paper, they are definitely not drawing. Once we have detected the paper and found the hand component, we then determine whether the hand is over the paper. Our algorithm for hand-over-paper detection is as follows:

1. Create a hand sensitive mask around the paper. The hand sensitive mask is created based on the estimated paper vertices, just like the paper mask for the background subtraction step. The hand sensitive mask is slightly larger than the paper mask, however. Since the background difference is fixed at 0 inside the paper mask, no connected components exist there, so the effective hand sensitive area is the area between the larger mask and the paper mask. See Figure 5(b).

2. Intersect the hand component with the hand sensitive mask. If there is at least one pixel from the hand component within the hand sensitive area, the hand is over the paper. See Figure 5(d) for an illustration of this step.
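The two steps reduce to a set-membership test. In this sketch the warped quadrilateral masks are approximated by axis-aligned rectangles for brevity (an assumption; the real masks are built from the tracked paper vertices):

```python
def rect_mask(h, w, x0, y0, x1, y1):
    """Boolean mask that is True inside the axis-aligned rectangle."""
    return [[x0 <= x <= x1 and y0 <= y <= y1 for x in range(w)]
            for y in range(h)]

def hand_over_paper(hand_pixels, paper_mask, sensitive_mask):
    """True if any hand pixel falls in the ring between the (larger)
    hand-sensitive mask and the paper mask."""
    return any(sensitive_mask[y][x] and not paper_mask[y][x]
               for x, y in hand_pixels)
```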


Figure 5: Hand-over-paper detection. (a) Captured image. (b) Hand sensitive area: a mask is created based on the paper vertices which is slightly larger than the paper mask used for background subtraction (see Figure 3(c)); the area between these two masks is the hand-sensitive area. (c) Detected hand component; note that the hand is partly masked by the paper mask. (d) The result of intersecting (b) and (c): if there are any pixels in the result, the hand is over the paper.

Figure 6: Results of hand-over-paper detection. (a) Hand detection works simultaneously with paper tracking. (b), (c) When a hand is detected over the paper, a small blue rectangle is displayed in the top left corner of the work area so that the user knows the system has detected the event.

4.3. Button Objects

Button objects are used to invoke a one-time action or toggle a system mode. Each Button object has one of three states: normal, focused, and selected. Each Button object's state starts as normal and becomes focused when the hand moves over the Button object. After the hand remains over the Button for a certain period of time, the state becomes selected and the command that is associated with the Button is invoked. See Figure 7 for an example of a user interacting with a Button object. Hand-over-Button detection operates in a similar way to hand-over-paper detection: the hand component from background subtraction is examined and it is determined whether there is at least one pixel in the Button's hand sensitive area, a rectangular area slightly larger than the object itself. Figure 8 illustrates our hand-over-Button detection algorithm.
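The normal/focused/selected life cycle is a small per-frame state machine. A sketch; the dwell time before selection is not given in the text, so the 15-frame default (about 0.5 s at 30Hz) is an assumption:

```python
NORMAL, FOCUSED, SELECTED = "normal", "focused", "selected"

class Button:
    """Dwell-to-select button: FOCUSED while the hand hovers, SELECTED
    (command fired once) after `dwell_frames` consecutive hover frames."""
    def __init__(self, command, dwell_frames=15):
        self.command = command
        self.dwell = dwell_frames
        self.state = NORMAL
        self.hover_frames = 0

    def update(self, hand_over_button):
        # Called once per captured frame with the hand-over-Button result.
        if not hand_over_button:
            self.state, self.hover_frames = NORMAL, 0
        elif self.state in (NORMAL, FOCUSED):
            self.state = FOCUSED
            self.hover_frames += 1
            if self.hover_frames >= self.dwell:
                self.state = SELECTED
                self.command()            # invoked exactly once per selection
```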



Figure 7: Interaction with a Button. (a) The Button in its normal state. (b) When the hand is detected over the Button object, its state becomes selected; it changes its appearance (color and text label) (c), and invokes a pre-defined command (not shown; in this case, to turn off the summarization mode).



Figure 8: Hand-over-Button detection. (a) Captured image. One finger of the hand is placed over the Button. (b) An illustration of the hand sensitive area of the Button. The hand sensitive area is a rectangular area slightly larger than the object itself. (c) The hand component in the hand sensitive area. As there are pixels from the hand component (the brighter pixels) in the hand sensitive area, it is determined that the hand is over the Button. Note that the hand component does not extend into the object area (the darker pixels) even when the hand tip is placed near the center of the object, because the projected light needed to draw the object brightens the hand tip. This does not affect hand-over-Button detection. All that is needed for robust hand-over-Button detection is that the hand sensitive area be slightly larger than the Button itself.

4.4. Slider Objects

Sliders are used to vary a numerical parameter within a range. Each Slider object holds its current value within the range [0,100], although any other range could be used. A bar is drawn in the Slider object to denote the current value. The leftmost end of the Slider denotes 0 (the minimum value), the rightmost 100 (the maximum value). See Figure 9 for an example. The Slider object has a state mechanism just like the Button object. When the Slider is in the selected state it continues to:
1. estimate the position of the hand,
2. compute the value of the Slider parameter from the hand position,
3. update the location of the Slider bar according to the parameter value, and
4. notify the system of the parameter value.

The user interface detects the hand component within the Slider's hand sensitive area just as for Button objects, and also estimates the horizontal position of the hand. This value is computed by averaging the x-coordinates of the pixels in the hand component in the hand sensitive area:


    x_hand = (1/N) * (x_1 + x_2 + ... + x_N)

where N is the number of pixels from the hand component in the hand sensitive area, and x_i is the X coordinate of each such pixel, measured from the left edge of the hand sensitive area. Note that we do not include any pixels within the object area here because the image of the hand within the projected object area is not stable enough to rely on. See Figure 10(d) for an example. The value of the Slider parameter is then computed as:

    value = 100 * x_hand / W

where W denotes the width of the hand sensitive area in pixels. Figure 10 illustrates hand-over-Slider detection and the computation of the hand position.
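The averaging-and-mapping step can be sketched as a small function; the clamping and integer rounding at the ends of the range are our assumptions:

```python
def slider_value(hand_xs, sensitive_left, sensitive_width, value_range=100):
    """Map the mean x of the hand-component pixels in the sensitive area
    (excluding the object area, as in the text) linearly onto
    [0, value_range]. Returns None when the hand is not over the Slider."""
    if not hand_xs:
        return None
    mean_x = sum(hand_xs) / len(hand_xs)
    frac = (mean_x - sensitive_left) / sensitive_width
    return max(0, min(value_range, round(value_range * frac)))
```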


Figure 9: A user interacting with a Slider. (a) The Slider in the normal state. (b) When the user's hand is detected over the Slider, it is activated and changes its appearance (colour). The Slider continues to change the value of its parameter depending on how the finger moves (c), (d), (e). The system is continuously notified of the change to the Slider parameter until the hand leaves the Slider (f).


Figure 10: Hand-over-Slider detection and computation of the hand position. (a) Captured image. One finger of the hand is placed over the Slider object. (b) An illustration of the hand sensitive area of the Slider, the same as for the Button. (c) The hand component in the hand sensitive area. As pixels from the hand component (the brighter pixels) are found in the hand sensitive area, it is determined that the hand is over the Slider. In addition, the hand position is computed by averaging the X coordinates of the pixels from the hand component between the object area and the hand sensitive area, which is shown by a small X. We don't use the pixels in the object area to compute the hand position because the image of the hand within the object projection area is unstable. In some cases the hand and the Slider are recognized as a single connected component (d), while in (c) the components of the hand and the Slider are separated.

Table 5: Timing results for the Tele-Graffiti hand based user interface on a dual 450MHz Pentium-II machine. As the total calculation time of 9.2msec is within the idle time waiting for the next frame in the Paper Detection thread (see Table 2), adding user interface processing doesn't affect the performance of the paper tracking and video transmission. Paper tracking and UI processing operate together at 30Hz.



5. Uses

(1) It gives graphic articles a global reach: architects can team up with experts abroad to work on designs simultaneously. (2) Distance education will benefit, as students and teachers can interact in their own handwriting and get minute points and doubts clarified. (3) It can be considered as a substitute for internet chat, as pen-based chat could be more interesting for many than a keyboard exercise. News, information, alerts, and warnings can be exchanged in an attractive way. (4) Forms can be filled up in your own handwriting without the need to download, scan, and submit them.

6. Future Prospects

· Researchers are trying to add audio signals and a face-tracking mechanism to develop a remote sketching teleconferencing system.
· Efforts are also going on to shrink the system to the size of a table lamp.

7. Conclusion

We have described Tele-Graffiti, a camera-projector based remote sketching system. The major contributions of Tele-Graffiti over existing systems are:

Real-Time Paper Tracking: We have developed a real-time paper tracking algorithm that allows the users of Tele-Graffiti to move the paper during operation of the system.

Real-Time Video Transmission: We have developed a software architecture for Tele-Graffiti and implemented real-time video transmission over IP networks on standard 450MHz PCs.

Hand-Based User Interface: We have added a user interface to Tele-Graffiti based on hand tracking. The system requires no extra hardware and operates with the Tele-Graffiti cameras; infra-red cameras are not required. The user interface detects when the user has their hand over the paper and processes their interaction with UI objects such as Buttons and Sliders.

Automatic Summarization: We have developed an automatic summarization system for Tele-Graffiti based on detecting when the users have their hands over the paper. Such a system can automatically summarize a several-minute-long sketching session into a few frames.

The complete Tele-Graffiti system operates at 30Hz on a PC with dual 450MHz Pentium-II processors. With typical network parameters a video rate of around 10 frames per second is achieved.
Reply
#2
This article is presented by:
Naoya Takao
Jianbo Shi
Simon Baker
Tele-Graffiti: A Camera-Projector Based Remote Sketching System
with Hand-Based User Interface and Automatic Session Summarization



Abstract
One way to build a remote sketching system is to use a video camera to image what each user draws at their site, transmit the video to the other sites, and display it there using an LCD projector. Such camera-projector based remote sketching systems date back to Pierre Wellner's (largely unimplemented) Xerox Double DigitalDesk. To make such a system usable, however, the users have to be able to move the paper on which they are drawing, they have to be able to interact with the system using a convenient interface, and sketching sessions must be stored in a compact format so that they can be replayed later. We have recently developed Tele-Graffiti, a remote sketching system with the following three features: (1) real-time paper tracking to allow the users to move their paper during system operation, (2) a hand based user interface, and (3) automatic session summarization and playback. In this paper, we describe the design, implementation, and performance of Tele-Graffiti.

Keywords: Camera-projector based remote sketching systems, remote communication and collaboration, video compression and transmission, paper detection and tracking, hand-based user interfaces, automatic summarization, archiving, and playback.
Introduction
There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing color means using the computer to select a new color rather than using a different pen. Incorporating existing hard-copy documents such as a graded exam is impossible. Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the video to the other end, and display it there using a projector. See Figure 1 for a schematic diagram of such a system. The first such camera-projector based remote sketching system was Pierre Wellner's Xerox "Double DigitalDesk" [Wellner, 1993]. Since 1993 systems combining video cameras and projectors have become more and more prevalent. Besides the Xerox "DigitalDesk", other such systems include the University of North Carolina's "Office of the Future" [Raskar et al., 1998], INRIA Grenoble's "MagicBoard" [Hall et al., 1999], and Yoichi Sato's "Augmented Desk" [Sato et al., 2000]. A related projector system is Wolfgang Krueger's "Responsive Workbench" [Krueger et al., 1995], used in Stanford University's "Responsive Workbench" project [Agrawala et al., 1997] and in Georgia Tech's "Perceptive Workbench" [Leibe et al., 2000]. Recently, cameras and projectors have also been combined to develop smart displays [Sukthankar et al., 2001] with automatic keystone correction, laser pointer control, automatic alignment of multiple displays, and shadow elimination. Although this list is by no means comprehensive, it clearly demonstrates the growing interest in such systems.

Reply
#3
Please send the full report of Tele-Graffiti.
Reply
#4
To get information about the topic Tele-Graffiti (full report, ppt, and related topics), refer to the page links below:

http://studentbank.in/report-tele-graffiti

http://studentbank.in/report-tele-graffi...sed-user-i

http://studentbank.in/report-steady-stat...e-graffiti

http://studentbank.in/report-tele-graffiti?pid=42848
Reply
#5
Can I get the full seminar report about this topic sent to my mail venki.ht.77[at]gmail.com?
Reply
#7
Please send me the report for Tele-Graffiti, and its documentation.
Reply
#8
Hi,
you can see these pages to get the details on Tele-Graffiti:

http://studentbank.in/report-tele-graffiti

Reply
#9
(20-09-2009, 03:59 PM)computer science crazy Wrote: Tele-Graffiti (A Camera-Projector Based RemoteSketching System with Hand-Based User Interface and Automatic Session Summarization)

Abstract


One way to build a remote sketching system is to use a video camera to image what each user draws at their site, transmit the video to the other sites, and display it there using an LCD projector. To make such a system usable, however, the users have to be able to move the paper on which they are drawing, they have to be able to interact with the system using a convenient interface, and sketching sessions must be stored in a compact format so that they can be replayed later. Tele- Graffiti is a newly developed remote sketching system with the following three features: (1) real-time paper tracking to allow the users to move their paper during system operation, (2) a hand based user interface, and (3) automatic session summarization and playback. In this paper, the design, implementation, and performance of Tele-Graffiti is described.

1. Introduction

There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing colour means using the computer to select a new colour rather than using a different pen. Incorporating existing hard-copy documents such as a graded exam is impossible. Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the video to the other end, and display it there using a projector.Figure 1 shows the schematic of such a system. The first such camera-projector based remotesketching system was Pierre Wellnerâ„¢s Xerox Double DigitalDesk

Figure 1: Schematic diagram of a two-site camera-projector based remote sketching system.

Tele-Graffiti is a recently designed and implemented camera-projector based remote sketching system that allows two or more users to communicate via hand-drawn sketches. What one person writes at one site is captured using a video camera, transmitted to the other site(s), and displayed there using an LCD projector. Although the Xerox Double DigitalDesk is an elegant idea, a number of technical problems must be solved to implement such a real-time system and to make it both practical and usable.

Real-Time Paper Tracking:


The users of a remote sketching system will want to move the pieces of paper on which they are writing during operation of the system. To allow this, a camera-projector based remote sketching system must track the paper in real time. Such functionality is not available in current systems.

Video Transmission:


To allow real-time interaction between two people, it is also necessary that the video transmission between the remote sites be fast enough for smooth communication.

Providing a Suitable User Interface:


A remote sketching system would be much more useful if it could be controlled without using the keyboard or mouse. Instead it would be best if the user could control the system just using their hands.

Sketch Summarization:


While remote sketching systems help users to communicate through their sketches, such a system would be much more useful if it had functions for sketch summarization, recording, and replay. Overall, the system software, including paper tracking, user interface processing, and summarization, runs at 30Hz on a PC with dual 450MHz Pentium-II processors. The video quality depends on the network bandwidth; with typical network parameters a video rate of around 10 frames per second is achieved. In this paper we describe the design and implementation of Tele-Graffiti.

2. Tele-Graffiti Hardware and Calibration
In this section we briefly describe the Tele-Graffiti hardware and the geometric calibration of the camera and projector.

2.1. Hardware


A schematic diagram of a 2-site Tele-Graffiti system is contained in Figure 1. Figure 2 shows photos of 2 real Tele- Graffiti systems. Each Tele-Graffiti site contains the following components:

Computer:


Each Tele-Graffiti site has a PC with dual 450MHz Pentium-II processors, an NVidia GeForce2 GTS video card, and an OrangeLink Firewire (IEEE1394) card.


Projector: We use an XGA resolution (1024*768 pixels) Panasonic PT-L701U LCD projector.

Camera:


We use a Firewire (IEEE1394) Sony DFW-VL500 camera, which we run at VGA (640*480 pixels) resolution.

Stand:


We constructed 2 different prototype stands to hold the camera and projector in a compact configuration. In Figure 2(a) the projector is mounted horizontally on a supporting plate, while in Figure 2(b) it is mounted vertically on the pillar.

Network:


The two Tele-Graffiti sites are connected by a local-area network. We have experimented with running the system over both 100Base-T and 10Base-T networks.

Figure 2: (a), (b) Photos of the two prototype Tele-Graffiti stands.



2.2. Geometric Calibration of the Camera-Projector Relationship


We need to warp the video so that, when it is displayed by the projector, it appears correctly aligned with the paper. We therefore need to know the relationship between camera coordinates (xc, yc) and projector coordinates (xp, yp). We assume that the projector follows the same perspective imaging model as the camera (with the light rays in the reverse direction). Assuming the paper is planar, the relationship between camera and projector coordinates is:

(xp, yp, 1)^T ≅ Hpc (xc, yc, 1)^T    (1)

where Hpc is a 3*3 homography and ≅ denotes equality up to scale. Since Hpc doesn't change if the paper remains in the same plane (i.e. the paper stays on the desktop), Hpc can be computed at system startup. This constant value of Hpc is precomputed by: (1) projecting a rectangular image with known corner locations onto the desktop, (2) capturing an image of this calibration image, (3) locating the vertices in the captured image using the paper tracking algorithm described in Section 3.3 (since there is no clipboard, we give the system enough prior information to break the four-fold orientation ambiguity), and (4) solving for Hpc using Equation (1) and the 4 pairs of projector-camera coordinates.
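As an illustration, step (4) amounts to a direct linear solution of Equation (1) from the 4 vertex correspondences. The sketch below is ours, not Tele-Graffiti's actual code: it fixes the scale ambiguity by setting the last entry of Hpc to 1 and solves the resulting 8x8 linear system with plain Gaussian elimination.

```python
def solve_homography(cam_pts, proj_pts):
    # For each pair (xc, yc) -> (xp, yp), Equation (1) with h9 = 1 gives:
    #   xp = (h1*xc + h2*yc + h3) / (h7*xc + h8*yc + 1)
    #   yp = (h4*xc + h5*yc + h6) / (h7*xc + h8*yc + 1)
    # Cross-multiplying yields two linear equations in h1..h8 per pair.
    A, b = [], []
    for (xc, yc), (xp, yp) in zip(cam_pts, proj_pts):
        A.append([xc, yc, 1, 0, 0, 0, -xp * xc, -xp * yc]); b.append(xp)
        A.append([0, 0, 0, xc, yc, 1, -yp * xc, -yp * yc]); b.append(yp)
    h = gauss_solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def apply_homography(H, x, y):
    # Map a camera point to projector coordinates, dividing out the scale.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Once solved, `apply_homography` maps each of the 4 camera vertices exactly onto its projector vertex, and the same Hpc is reused for every frame while the paper stays on the desktop.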

3. Tele-Graffiti System Software


Each Tele-Graffiti site must continuously capture video from the camera, track the paper in the video, warp the video from the other site so that it is aligned with the paper, and communicate with the other site. The following describes the system software that performs all of these tasks.

3.1. System Architecture

Tele-Graffiti runs under Linux (RedHat 7.1) and consists of 4 threads: the Drawing thread, the Paper Detection thread, the Sending thread (the Sending thread is actually implemented in a second process due to TCP/IP constraints), and the Receiving thread.

Figure 3: Diagram of the 4 threads.


The 4 threads share the following data:

Image to Display:


The latest image received from the remote site. It is primarily shared between the Receiving thread and the Drawing thread.

Remote Paper Vertices:


The estimated paper vertices in the image to display.

Image to Send:


The image to send is a YUV image which is primarily shared between the Paper Detection thread and the Sending thread. It is a sub-image of the captured (640*480) image which is just large enough to include the detected paper.

Local Paper Vertices:
The estimated paper vertices in the captured (640*480) image.

In the following sections, each thread is described in detail.

3.2. Drawing Thread

The Drawing thread continuously warps and draws the image to display. The drawn image is output to the projector simply by plugging the monitor output of the PC into the projector. A dual-headed video card could be used instead. The Drawing thread waits for updates to the image to display and, upon update, copies the new image into the OpenGL texture buffer. This thread also waits for changes to the local paper vertices. Whenever this occurs, the Drawing thread redraws (re-maps) the texture on the screen with the new local paper vertices.

3.3. Paper Detection Thread: Paper Tracking


The Paper Detection thread continuously does the following:

1. Grabs an image from the camera.
2. Detects or tracks the paper (see below for the details of the paper tracking algorithm).
3. Updates the image to send and the local paper vertices. Updating the image to send is done by cropping the grabbed image according to the estimated paper vertices.
4. Notifies the Drawing thread of the update.



3.4. Communication between Tele-Graffiti Sites
3.4.1. The Sending and Receiving Threads


Sending and receiving are conducted simultaneously. Each Tele-Graffiti site opens two TCP/IP sockets, one dedicated to sending and the other to receiving. For this reason, we have two communications threads, the Sending thread and the Receiving thread. Moreover, since it appears that Linux doesn't allow one process to both receive and send on TCP/IP sockets at the same time (even in different threads), we implemented the Sending thread as a separate process rather than just as another thread. The details of the communications threads are as follows:

Sending Thread:

The Sending thread continuously converts the most recent image at this site (the image to send) from YUV to RGB, compresses it, and transmits it to the other site along with the estimated local paper vertices. As the paper detection cycle (30Hz) is faster than the typical image sending cycle and multiple updates to the image to send can occur during one image transmission session, the Sending thread just transmits the most recent image when it starts the transmission. This image is copied to a buffer to avoid it being overwritten.
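The "transmit only the most recent image" behavior can be sketched as a shared buffer that keeps just the newest frame and hands the sender a private copy. This is an illustrative sketch (the class and method names are ours); the real system shares a YUV image between the Paper Detection thread and a separate sending process.

```python
import threading

class LatestFrameBuffer:
    """Holds only the most recent frame. The producer runs faster (30Hz)
    than the consumer, so intermediate frames are silently dropped and
    the sender always transmits the newest image available."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None

    def publish(self, frame):
        # Called at 30Hz by the Paper Detection thread; overwrites any
        # frame the sender has not yet picked up.
        with self._cond:
            self._frame = frame
            self._cond.notify()

    def take_latest(self):
        # Called by the Sending thread; blocks until a new frame exists,
        # then takes it out of the buffer so the long-running compression
        # and transmission cannot race with further updates.
        with self._cond:
            while self._frame is None:
                self._cond.wait()
            frame, self._frame = self._frame, None
            return frame
```

Publishing three frames in a row and then taking one yields only the last frame, mirroring how multiple paper-detection updates during one transmission collapse into a single send.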

Receiving Thread:

This thread waits for the arrival of images and paper vertices from the other Tele-Graffiti site. Upon arrival, the Receiving thread decompresses the image, updates the image to display and the remote paper vertices, and notifies the Drawing thread of the update.

Figure 4: The communication protocol between the two sites.



3.5. Software System Timing Performance

Timing results for the Drawing thread and the Paper Detection thread are shown in Tables 1 and 2. On dual 450MHz Pentium PCs, these threads operate (comfortably) at 30Hz. Timing results for the communications threads are shown in Table 3 (Sending thread) and Table 4 (Receiving thread). The computational cost of compressing and decompressing the image is a substantial proportion of this time. On faster PCs, the transmission threads could run far faster. In these tables, all timing results are measured in real time, not in CPU time. Also note that the steps marked with * (asterisk) spend most of their time idling (i.e. waiting for events or resources) rather than actually computing. Table 1: Timing results for the Drawing thread. The Drawing thread operates at over 30Hz and even so most of the cycle is spent idling, waiting for the update to the shared data.


Table 2: Timing results for the Paper Detection thread. Paper detection operates comfortably at 30Hz.Around 10msec per loop is spent idling waiting for image capture. We also include an estimate of the average number of CPU operations spent for each of the 640*480=307K pixels in the captured image.


Table 3: Timing results for the Sending thread (with JPEG compression over a 100Base-T network). Overall the Sending thread operates at 8Hz. Approximately the same results are obtained over 10Base-T networks.


Table 4: Timing results for the Receiving thread (with JPEG compression over a 100Base-T network). The Receiving thread operates at 8Hz. Approximately the same results are obtained over 10Base-T networks.




4. A Hand-Based User Interface

We have implemented a user interface for Tele-Graffiti. We use two kinds of triggers to activate user interface functions: hand over paper events and interactions of the hand with user interface objects (UI objects). Hand over paper events occur when the system determines that the user has just placed their hand over the paper or when they have just retracted their hand from over the paper. UI objects are rectangular shaped projections in the work area which enable users to invoke predefined commands using their hands. Currently we have implemented two types of UI objects: Button and Slider. Button objects are used to toggle a mode. Slider objects are used to select a value from a range.

4.1. Hand Tracking

There are several possible ways of tracking the user's hands. One possibility is to use a color-based hand tracking algorithm. We chose not to use a color-based algorithm because such algorithms are generally not sufficiently robust to variation in the color of the user's hand and in the lighting conditions. An additional complication is that some LCD projectors time-multiplex color; without synchronizing the projector and the camera, the color of the hands varies from frame to frame. Another way of detecting the user's hands is to use an infrared camera. The color projected by the projector is then unimportant because the camera primarily images the heat radiated by the user's hands. We base our hand tracking algorithm on background subtraction because it is fast, robust, and does not require any special hardware. Figure 12 contains an overview of our algorithm.



4.1.1. Background Subtraction

We have to consider the following problems during background subtraction:

1. The paper and clipboard are on the desktop, and they may move from frame to frame.
2. Because we are using the camera's auto gain control (AGC), the background intensity varies depending on what else is placed on the desktop. For example, when the paper is placed on the desktop the average intensity increases; as a result, the AGC reduces the gain of the camera and so the intensity of the background is reduced.

To solve these problems, our background subtraction algorithm includes the following steps.

Prepare the Background Image: A grayscale background image is captured at system startup, before anything has been placed on the desktop. The average intensity is calculated from the background image. See Figure 3(a) for an example background image.

Create a Mask for the Paper and Clipboard: Create a mask which covers the paper and clipboard. This mask is used in background subtraction. See Figure 3(c) for an example.

Calculate the Average Background Intensity: In order to compensate for the difference in background intensity caused by the camera's AGC, calculate the average intensity of the current image, excluding the paper and clipboard area. Either the mean or the median could be used.
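A minimal sketch of the subtraction step follows. The paper does not give the exact compensation rule or threshold, so this sketch assumes the AGC shift is undone additively using the two average intensities, and the threshold of 30 grey levels is an arbitrary illustrative choice; the function name is also ours.

```python
def background_subtract(image, background, paper_mask, threshold=30):
    """image, background: 2-D lists of grayscale values.
    paper_mask: 2-D list of booleans, True under the paper/clipboard.
    Returns a binary map D: 1 where the pixel differs from the
    gain-compensated background, 0 elsewhere and under the paper mask."""
    h, w = len(image), len(image[0])
    # Average intensities outside the paper/clipboard mask, used to
    # compensate the global shift introduced by the camera's AGC.
    outside = [(y, x) for y in range(h) for x in range(w)
               if not paper_mask[y][x]]
    avg_cur = sum(image[y][x] for y, x in outside) / len(outside)
    avg_bg = sum(background[y][x] for y, x in outside) / len(outside)
    offset = avg_bg - avg_cur  # additive AGC compensation (assumption)
    D = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if paper_mask[y][x]:
                continue  # subtraction is fixed at 0 under the mask
            if abs(image[y][x] + offset - background[y][x]) > threshold:
                D[y][x] = 1
    return D
```

With this compensation, a uniformly dimmed frame (AGC reacting to a bright object) produces no spurious foreground; only the genuinely different pixels survive the threshold.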




Figure 3: (a)-(f) Example images from the hand tracking algorithm (see the PDF file for the images).

4.1.2. Computing Connected Components

Next, we find the connected components in D(x,y), the result of background subtraction. We use 4-connectedness. Each connected component is given a unique label in [1, 255], and the result is represented as another grayscale image in which each pixel's intensity is the label of the component to which the pixel belongs. Figure 3(e) shows an example result.
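The 4-connected labeling can be sketched with a simple breadth-first flood fill (an illustrative implementation, not Tele-Graffiti's actual code):

```python
from collections import deque

def label_components(D):
    """4-connected components of a binary image D.
    Returns a label image: 0 is background, components are 1, 2, ..."""
    h, w = len(D), len(D[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 1
    for y in range(h):
        for x in range(w):
            if D[y][x] and not labels[y][x]:
                # Flood-fill this new component with BFS over the
                # 4-neighbourhood (up, down, left, right).
                labels[y][x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and D[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                next_label += 1
    return labels
```

Diagonally touching pixels end up in different components, which is exactly the 4-connectedness the text specifies.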

4.1.3. Determining the Hand Component

We determine which of the components found in the previous step is the hand component using the knowledge that the user's hand:

· should always touch the image boundary, and
· is larger than most of the other objects on the desktop (except for the paper and clipboard, which are eliminated using the paper mask).

Thus, the algorithm to determine the hand component is to find the largest component that intersects the camera image boundary. See Figure 3(f) for an example result.
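This rule reduces to a short scan over the label image (a sketch; the function name is ours):

```python
def find_hand_component(labels):
    """Return the label of the largest component that touches the image
    boundary, or 0 if no component does. The paper and clipboard are
    assumed already suppressed by the paper mask."""
    h, w = len(labels), len(labels[0])
    sizes = {}     # label -> pixel count
    touches = set()  # labels with at least one boundary pixel
    for y in range(h):
        for x in range(w):
            lab = labels[y][x]
            if lab:
                sizes[lab] = sizes.get(lab, 0) + 1
                if y in (0, h - 1) or x in (0, w - 1):
                    touches.add(lab)
    candidates = [lab for lab in sizes if lab in touches]
    return max(candidates, key=lambda lab: sizes[lab]) if candidates else 0
```

A large blob in the middle of the desk (e.g. a coffee mug) is rejected because it never reaches the image boundary, while the user's arm always does.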

4.1.4. Hand Tip Localization

Once the hand component has been determined, we then determine its tip, i.e. where it is pointing. An X is displayed in the work area at the detected tip location to show the user that the hand has been detected correctly. The hand tip is located with the algorithm below. See also Figure 4 for an illustration of the algorithm.

1. Compute the direction of the hand from the following two points: (a) the center edge pixel, i.e. the mean of the points where the hand component touches the camera image boundary, and (b) the center pixel, i.e. the mean of the XY coordinates of the hand component pixels.
2. Find the pixel in the hand component farthest from the center edge pixel in the hand direction.

See Figures 3(f) and 4(b) for examples of hand tip localization.

Figure 4: An illustration of the hand tip localization algorithm.
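The two steps above can be sketched as follows. This is our illustration of the described algorithm: the dot product along the hand direction stands in for "farthest from the center edge pixel in the hand direction".

```python
def find_hand_tip(labels, hand_label):
    """Locate the tip of the hand component: the hand direction runs
    from the center edge pixel to the component centroid, and the tip
    is the component pixel farthest along that direction."""
    h, w = len(labels), len(labels[0])
    pixels = [(y, x) for y in range(h) for x in range(w)
              if labels[y][x] == hand_label]
    # (a) center edge pixel: mean of the boundary pixels of the hand
    edge = [(y, x) for (y, x) in pixels
            if y in (0, h - 1) or x in (0, w - 1)]
    ey = sum(y for y, _ in edge) / len(edge)
    ex = sum(x for _, x in edge) / len(edge)
    # (b) center pixel: centroid of all hand-component pixels
    cy = sum(y for y, _ in pixels) / len(pixels)
    cx = sum(x for _, x in pixels) / len(pixels)
    dy, dx = cy - ey, cx - ex  # hand direction
    # Farthest pixel from the center edge pixel along the hand direction
    return max(pixels, key=lambda p: (p[0] - ey) * dy + (p[1] - ex) * dx)
```

For an arm entering from the bottom edge of the frame, the direction points upward and the tip is the topmost fingertip pixel.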



4.2. Hand Over Paper Detection

In general, it is hard to determine whether the user is drawing or not. We assume that if the user has their hand over the paper they are drawing; if their hand is away from the paper, they are definitely not drawing. Once we have detected the paper and found the hand component, we then determine whether the hand is over the paper. Our algorithm for hand-over-paper detection is as follows:

1. Create a hand sensitive mask around the paper. The hand sensitive mask is created based on the estimated paper vertices, just like the paper mask for the background subtraction step; the hand sensitive mask is slightly larger than the paper mask, however. Since no connected components exist within the paper mask (the background subtraction result is fixed at 0 there), the effective hand sensitive area is the area between the larger mask and the paper mask. See Figure 5(b).

2. Intersect the hand component with the hand sensitive mask. If there is at least one pixel from the hand component within the hand sensitive area, the hand is over the paper. See Figure 5(d) for an illustration of this step.
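Step 2 is a simple intersection test over the label image and the mask (a sketch; names are ours):

```python
def hand_over_paper(labels, hand_label, sensitive_mask):
    """The hand is 'over the paper' if any hand-component pixel falls
    inside the hand-sensitive ring around the paper mask."""
    h, w = len(labels), len(labels[0])
    return any(labels[y][x] == hand_label and sensitive_mask[y][x]
               for y in range(h) for x in range(w))
```

Because the hand component is suppressed under the paper mask itself, only the ring between the two masks can trigger the event, which is what makes the test robust.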


Figure 5: Hand-over-paper detection. (a) Captured image. (b) Hand sensitive area. A mask is created based on the paper vertices which is slightly larger than the paper mask used for background subtraction (see Figure 3(c)); the area between these two masks is the hand-sensitive area. (c) Detected hand component. Note that the hand is partly masked by the paper mask. (d) The result of intersecting (b) and (c). If there are any pixels in the result, the hand is over the paper; otherwise it is not.

Figure 6: Results of hand-over-paper detection. (a) Hand detection works simultaneously with paper tracking. (b)(c) When a hand is detected over the paper, a small blue rectangle is displayed in the top left corner of the work area so that the user knows the system has detected the event.

4.3. Button Objects

Button objects are used to invoke a one-time action or toggle a system mode. Each Button object has one of three states: normal, focused, and selected. Each Button object's state starts as normal and becomes focused when the hand moves over the Button object. After the hand remains over the Button for a certain period of time, the state becomes selected and the command associated with the Button is invoked. See Figure 7 for an example of a user interacting with a Button object. Hand over Button detection operates in a similar way to hand-over-paper detection: the hand component from background subtraction is examined, and it is determined whether there is at least one pixel in the Button's hand sensitive area, a rectangular area slightly larger than the object itself. Figure 8 illustrates our hand over Button detection algorithm.
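The normal/focused/selected life cycle can be sketched as a small state machine driven once per tracking frame. The dwell time ("a certain period of time") is not specified in the text, so the 1-second default below, like the class itself, is an illustrative assumption.

```python
NORMAL, FOCUSED, SELECTED = "normal", "focused", "selected"

class Button:
    """Button state machine: normal -> focused when the hand first
    appears over it; focused -> selected (command fired once) after the
    hand has stayed for `dwell` seconds; back to normal on hand leave."""

    def __init__(self, command, dwell=1.0):
        self.command = command  # callback invoked on selection
        self.dwell = dwell      # seconds the hand must remain (assumed)
        self.state = NORMAL
        self._since = None      # time the hand first appeared

    def update(self, hand_over, now):
        # Called once per tracking frame (30Hz) with the current time.
        if not hand_over:
            self.state, self._since = NORMAL, None
        elif self.state == NORMAL:
            self.state, self._since = FOCUSED, now
        elif self.state == FOCUSED and now - self._since >= self.dwell:
            self.state = SELECTED
            self.command()  # fire the associated command exactly once
        return self.state
```

Retracting the hand at any point drops the Button back to normal, so a hand merely passing over a Button never triggers its command.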


Fig 7(a) Fig 7(b) Fig 7(c)

Figure 7: Interaction with a Button. (a) The Button in its normal state. (b) When the hand is detected over the Button object, its state becomes selected; it changes its appearance (color and text label) (c) and invokes a pre-defined command (not shown, but in this case to turn off the summarization mode).


Fig 8(a) Fig 8(b) Fig 8(c)

Figure 8: Hand over Button detection. (a) Captured image. One finger of the hand is placed over the Button. (b) An illustration of the hand sensitive area of the Button. The hand sensitive area is a rectangular area slightly larger than the object itself. (c) The hand component in the hand sensitive area. As there are pixels from the hand component (the brighter pixels) in the hand sensitive area, it is determined that the hand is over the Button. Note that the hand component does not extend into the object area (the darker pixels) even when the hand tip is placed near the center of the object, because the projected light needed to draw the object brightens the hand tip. This does not affect hand over Button detection: all that is needed for robust hand over Button detection is that the hand sensitive area be slightly larger than the Button itself.

4.4. Slider Objects

Sliders are used to vary a numerical parameter within a range. Each Slider object holds its current value within the range [0, 100], although any other range could be used. A bar is drawn in the Slider object to denote the current value; the leftmost end of the Slider denotes 0 (the minimum value) and the rightmost 100 (the maximum value). See Figure 9 for an example. The Slider object has a state mechanism just like the Button object. When the Slider is in the selected state it continues to: 1. estimate the position of the hand, 2. compute the value of the Slider parameter from the hand position, 3. update the location of the Slider bar according to the parameter value, and 4. notify the system of the parameter value. The user interface detects the hand component within the Slider's hand sensitive area just as for Button objects, and also estimates the horizontal position of the hand. This value is computed by averaging the X coordinates of the pixels in the hand component in the hand sensitive area:


x̄ = (1/N) Σ x_i

where N is the number of pixels from the hand component in the hand sensitive area, and x_i is the X coordinate of each such pixel. Note that we do not include any pixels within the object area here, because the image of the hand within the projected object area is not stable enough to rely on. See Figure 10(d) for an example. The value of the Slider parameter is then computed as:

value = 100 * x̄ / W

where W denotes the width of the hand sensitive area in pixels (with x̄ measured from the left edge of the area). Figure 10 illustrates hand over Slider detection and the computation of the hand position.
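The averaging and the mapping to [0, 100] can be sketched as below. The function name and the clamping to the valid range are ours; the text only specifies the averaging and the division by the area width.

```python
def slider_value(hand_pixel_xs, area_left, area_width):
    """x̄ = (1/N) Σ x_i over the N hand-component pixel X coordinates
    inside the hand sensitive area; the Slider value is x̄ mapped across
    the width of the area onto [0, 100]."""
    if not hand_pixel_xs:
        return None  # hand not over the Slider
    x_mean = sum(hand_pixel_xs) / len(hand_pixel_xs)
    value = 100.0 * (x_mean - area_left) / area_width
    # Clamp, since the hand sensitive area is slightly wider than the bar.
    return max(0.0, min(100.0, value))
```

For example, hand pixels centered 20% of the way across a 100-pixel-wide sensitive area yield a parameter value of 20.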


Figure 9: A user interacting with a Slider. (a) The Slider in the normal state. (b) When the user's hand is detected over the Slider, it is activated and changes its appearance (colour). The Slider continues to change the value of its parameter depending on how the finger moves (c)(d)(e). The system is continuously notified of the change to the Slider parameter until the hand leaves the Slider (f).


Figure 10: Hand over Slider detection and computation of the hand position. (a) Captured image. One finger of the hand is placed over the Slider object. (b) An illustration of the hand sensitive area of the Slider, the same as for the Button. (c) The hand component in the hand sensitive area. As pixels from the hand component (the brighter pixels) are found in the hand sensitive area, it is determined that the hand is over the Slider. In addition, the hand position is computed by averaging the X coordinates of the pixels from the hand component between the object area and the hand sensitive area, and is shown by a small X. We don't use the pixels in the object area to compute the hand position because the image of the hand within the object projection area is unstable. In some cases the hand and the Slider are recognized as a single connected component (d), while in (c) the components of the hand and the Slider are separated.

Table 5: Timing results for the Tele-Graffiti hand based user interface on a dual 450MHz Pentium-II machine. As the total calculation time of 9.2msec is within the idle time waiting for the next frame in the Paper Detection thread (see Table 2), adding user interface processing doesn't affect the performance of the paper tracking and video transmission. Paper tracking and UI processing operate together at 30Hz.



5. Uses

(1) It gives graphic articles a global reach: architects can team up with experts abroad to work on designs simultaneously.
(2) Distance education will benefit, as students and teachers can interact in their own handwriting and get minute points and doubts clarified.
(3) It can serve as a substitute for internet chat, as pen chat could be more interesting for many than a keyboard exercise. News, information, alerts, and warnings can be exchanged in an attractive way.
(4) Forms can be filled up in your own handwriting without the need to download, scan, and submit them.

6. Future Prospects

· Researchers are trying to add audio signals and a face-tracking mechanism to develop a remote sketching teleconferencing system.
· Efforts are also under way to shrink the system to the size of a table lamp.

7. Conclusion

We have described Tele-Graffiti, a camera-projector based remote sketching system. The major contributions of Tele-Graffiti over existing systems are:

Real-Time Paper Tracking: We have developed a real-time paper tracking algorithm that allows the users of Tele-Graffiti to move the paper during operation of the system.

Real-Time Video Transmission: We have developed a software architecture for Tele-Graffiti and implemented real-time video transmission over IP networks on standard 450MHz PCs.

Hand-Based User Interface: We have added a user interface to Tele-Graffiti based on hand tracking. The system requires no extra hardware and operates with the Tele-Graffiti cameras; infra-red cameras are not required. The user interface detects when the user has their hand over the paper and processes their interaction with UI objects such as Buttons and Sliders.

Automatic Summarization: We have developed an automatic summarization system for Tele-Graffiti based on detecting when the users have their hands over the paper. Such a system can automatically summarize a several-minute-long sketching session into a few frames.

The complete Tele-Graffiti system operates at 30Hz on a PC with dual 450MHz Pentium-II processors. With typical network parameters, a video rate of around 10 frames per second is achieved.
