SYMMETRY-BASED PHOTO EDITING

ABSTRACT
Based on the understanding of high-level geometric knowledge, especially symmetry, imposed upon objects in images, we demonstrate in this paper how to edit images in terms of the correct 3-D shapes and relationships of the objects, without explicitly performing full 3-D reconstruction. Symmetry is proposed as the central notion that unifies, both conceptually and algorithmically, all types of geometric regularities such as parallelism, orthogonality, and similarity. The methods are extremely simple, accurate, and easy to implement, and they demonstrate the power of applying scene knowledge to image understanding and editing.
CONTENTS
Chapter 1. INTRODUCTION
Chapter 2. GEOMETRIC KNOWLEDGE
Chapter 3. SYMMETRY-BASED CELL RECONSTRUCTION
3.1. Reconstruction from a single view of one symmetry cell
3.2. Alignment of multiple symmetry cells in one image
Chapter 4. SYMMETRY-BASED PLANE RECONSTRUCTION
4.1. Plane pose determination
4.2. Plane distance determination
4.3. Object registration
Chapter 5. PHOTO EDITING
5.1. Copy-and-paste within one image
5.2. Copy an image patch onto another image
5.3. Image mosaicing
Chapter 6. CONCLUSION
REFERENCES
CHAPTER 1
INTRODUCTION
Photo editing has become an important part of digital photography, used to achieve high quality or special effects. In most existing popular photo editing packages, such as Adobe Photoshop, photos are manipulated at the level of pixels in regions or layers. However, in many cases it is desirable to edit photos so that consistent geometry and perspective effects are preserved. Examples of such actions include removing undesired reflections from a building facade and adding a logo onto the ground of a football game scene (watermarks). Unfortunately, in common photo editing software, the preservation of geometry and perspective lacks an easy interface and is usually achieved by manual, intuitive adjustment, which is tedious and not fully photorealistic. In this paper we introduce a set of interactive symmetry-based techniques for editing digital photographs. These symmetry-based algorithms enable us to manipulate 2-D image regions in terms of their correct 3-D shapes and relationships, and therefore to preserve the scene geometry with a minimal amount of manual intervention.
Understanding or recovering 3-D shapes from images is a classic problem in computer vision. A common approach to this goal is the structure-from-motion technique, which involves reconstruction from multiple images. This line of work has led to the development of multiple-view geometry. In classic multiple-view geometry, we usually do not apply any knowledge about the scene: typically, only image primitives such as points, lines, and planes are used, and no knowledge about their spatial relationships is assumed.
CHAPTER 2
GEOMETRIC KNOWLEDGE
However, recently more and more work has unveiled the usefulness of scene knowledge in reconstruction. While various types of scene knowledge and simplifying assumptions can be imposed upon photometry and shape, it is geometric knowledge that we focus on in this paper. Geometric knowledge, such as patterns, parallelism, and orthogonality, prevails in man-made environments. It provides useful cues for retrieving from images the shapes of objects and the spatial relationships between them. As we will demonstrate in this paper, if we no longer confine ourselves to primitive geometric features but instead begin to apply global geometric information, many new avenues and possibilities open up, such as editing images without explicitly performing 3-D reconstruction. For instance, once we apply the knowledge that an object is rectangular in 3-D, its pose and size are automatically determined from its image. This enables many new functionalities and applications that would otherwise be very difficult without such scene knowledge.

Geometric scene knowledge, such as object shapes and the spatial relationships between objects, is always related to some type of regularity. Among object shapes, regular ones such as rectangles, squares, diamonds, and circles capture our attention more than others. Among spatial relationships between objects, parallelism, orthogonality, and similarity are the most conspicuous. Interestingly, all such regularities can be encompassed under the notion of symmetry. For instance, a rectangular window has one rotational symmetry (by 180°) and two reflective symmetries; windows on the side of a wall display translational symmetry; the corner of a cube admits a rotational symmetry; etc.
Recently, a set of algorithms using symmetry for reconstruction from a single image or from multiple images has been developed, which has led to further studies on geometric segmentation, large-baseline matching, and 3-D reconstruction. In each image, by identifying symmetry cells (regions that are images of symmetric objects, such as rectangles), 3-D information about these cells is obtained. The image is then segmented based on geometric information such as coplanarity and shape similarity of the symmetry cells. Identified symmetry cells can also be used as high-level features for matching purposes. For symmetry cells found in different images, by comparing their 3-D shape and color information, feature matching is established and the camera motion is calculated. With known camera motions, 3-D reconstruction is efficient and accurate. These are examples of utilizing high-level geometric knowledge in modeling and motion analysis. In this paper, we extend these techniques to another application: photo editing. As a continuation of this line of work, this paper shows some of the possible applications of applying symmetry knowledge about the scene.
CHAPTER 3
SYMMETRY-BASED CELL RECONSTRUCTION
Here we briefly introduce some techniques for 3-D pose and shape recovery using symmetry knowledge. To use the symmetry-based algorithms, we start from images of basic symmetric objects, called symmetry cells. While a symmetry cell can be the image of any symmetric object, we use one of the simplest symmetric objects, the rectangle, to illustrate the reconstruction process. Once a (rectangular) symmetry cell in a plane is identified, it is used to recover the 3-D pose of the plane. When multiple planes are present, a further step of alignment is necessary to obtain their correct 3-D relationships.
3.1 RECONSTRUCTION FROM A SINGLE VIEW OF ONE SYMMETRY CELL
First let us look at the 3-D reconstruction of plane pose using symmetry cells. In the case of a rectangle, the reconstruction process can be significantly simplified using the fact that the two pairs of parallel edges of the rectangle give rise to two vanishing points in the image, as shown in Figure 3.1. A vanishing point v = [x, y, z]^T ∈ R^3, expressed in homogeneous coordinates, is exactly the direction of the parallel lines in space that generate v. The two vanishing points v1, v2 ∈ R^3 associated with the image of a rectangle should be perpendicular to each other:

v1^T v2 = 0.
In addition, the unit normal vector N ∈ R^3 of the rectangle's plane can be obtained as N ~ v1 × v2, where ~ means equality up to a scalar factor.
Fig. 3.1. Image formation for a rectangle. v1 and v2 are the two vanishing points. u is the intersection of the diagonals of the four-sided polygon and is the image of the rectangle's center.
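To make the construction concrete, here is a minimal numpy sketch (ours, not from the original report) of recovering the two vanishing points from the four corners of an imaged rectangle. It assumes the corners are given in calibrated homogeneous coordinates and are ordered around the polygon; the function name is illustrative.

    import numpy as np

    def vanishing_points(x1, x2, x3, x4):
        """Vanishing points of the two edge directions of an imaged rectangle."""
        # The line through two image points is their cross product; two
        # parallel edges meet at the vanishing point, which in turn is the
        # cross product of the two edge lines.
        v1 = np.cross(np.cross(x1, x2), np.cross(x3, x4))  # edges (x1,x2), (x3,x4)
        v2 = np.cross(np.cross(x2, x3), np.cross(x4, x1))  # edges (x2,x3), (x4,x1)
        return v1, v2

For a true rectangle, the recovered directions should satisfy the constraint above: after normalizing each vector, v1 @ v2 should be close to zero.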
If we attach an object coordinate frame to the plane, with the frame origin at the center of the rectangle, the normal vector of the plane as the z-axis, and the other two axes parallel to the two pairs of edges of the rectangle, then our goal is to find the transformation g = (R, T) ∈ SE(3) between the camera frame and the object frame. Here R ∈ SO(3) is the rotation and T ∈ R^3 is the translation. Note that R is independent of the choice of the object frame origin. In the absence of noise, the pose (R, T) is simply

R = [ v1/|v1|, v2/|v2|, (v1 × v2)/|v1 × v2| ],   T = a u,

where u ∈ R^3 is the (homogeneous) image of the center of the rectangle and a ∈ R^+ is some scale factor to be determined. In the presence of noise, the so-obtained R may not be in SO(3), and we need to project it onto SO(3). The projection can be obtained by taking the singular value decomposition (SVD) R = U S V^T with U, V ∈ O(3); the rotation is then R = U V^T. To fix the scale in T, we typically choose the distance d from the camera center to the rectangle plane to be 1, which means that T = a u with a = 1/(N^T u).
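Continuing the sketch above, the pose formula and the SVD projection can be written down directly; again this is an illustrative sketch assuming calibrated coordinates, with the plane distance fixed to d = 1.

    def cell_pose(v1, v2, u):
        """Pose (R, T) of a rectangular symmetry cell from its two vanishing
        points v1, v2 and the image u of its center, with d = 1."""
        n = np.cross(v1, v2)                      # N ~ v1 x v2
        R = np.column_stack([v1 / np.linalg.norm(v1),
                             v2 / np.linalg.norm(v2),
                             n / np.linalg.norm(n)])
        U, _, Vt = np.linalg.svd(R)               # project onto SO(3): R = U V^T
        R = U @ Vt
        if np.linalg.det(R) < 0:                  # guard against a reflection
            R = -R
        N = R[:, 2]
        # Assumes N is oriented so that N^T u > 0; then d = N^T (a u) = 1.
        a = 1.0 / (N @ u)
        return R, a * u                           # T = a u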
3.2 ALIGNMENT OF MULTIPLE SYMMETRY CELLS IN ONE IMAGE
In practice, we may have multiple rectangular symmetry cells on different planes. Using the method from the previous section, we can recover the pose of each cell up to a scale. However, the scales for different cells are often different, so we must take a further step to align them. For example, as shown in the left panel of Figure 3.2, each plane is recovered under the assumption that the distance from the camera center to the plane is d = 1. However, if we choose the reference plane to be the one on which cell q1 resides, with d1 = 1, our goal is to find the distances from the camera center to the other two planes. Taking the plane on which cell q2 resides, in order to find the distance d2 we can examine the intersection line L12 of the two planes. The length of L12 is recovered as |L12^1| in the reference plane and |L12^2| in the second plane. We then have the relationship

a = |L12^1| / |L12^2| = d2/d1.
So the pose of the second symmetry cell is modified as g2 = (R2, T2) → (R2, a T2). The results are shown in the right panel of Figure 3.2.
For planes without an explicit intersection line, as long as line segments of the same length in 3-D space can be identified on each plane, the above scheme can still be used.
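The rescaling step itself is a one-liner; the sketch below (ours, hedged as before) takes the two recovered lengths of the shared segment and returns the corrected pose.

    def align_cell_scale(len_ref, len_second, R2, T2):
        """Rescale the pose of a second symmetry cell so that a segment shared
        with the reference plane has the same recovered 3-D length.
        len_ref and len_second are |L12^1| and |L12^2| from the text."""
        a = len_ref / len_second
        return R2, a * T2        # g2 = (R2, T2) -> (R2, a T2)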
Fig. 3.2. Left: recovered plane poses using symmetry cells, before alignment. Right: recovered plane poses after alignment. Now the axes are all at the same scale.
CHAPTER 4
SYMMETRY-BASED PLANE RECONSTRUCTION
In order to perform photo editing at the level of 3-D objects, we need to register these objects. In this paper we demonstrate this only with planar objects, although our system is not limited to them. To register a planar object, we first characterize the pose and location of the plane on which it resides; then we can define the object in the image and project it into 3-D space.
In order to characterize a plane, we only need to know its pose (normal vector N) and its distance d from the camera center. This can be accomplished by introducing a coordinate frame on the plane such that the z-axis is the normal vector. The pose and distance information are then contained in the transformation g = (R, T) between this coordinate frame and the camera frame, with the third column of R being the normal vector N and T^T N = d. Clearly, the choice of such a coordinate frame is not unique, since the origin can be anywhere on the plane and the frame can rotate around the z-axis by an arbitrary angle. The two components R and T of the transformation g can usually be determined separately: since the pose N is only related to R and the distance d is mainly related to T, we can determine N and d in different steps.
4.1 PLANE POSE DETERMINATION
There are basically two means of determining R, and hence N. The first is a direct method using symmetry cells. By identifying a rectangular symmetry cell on the plane in the image and applying the method of the previous section, the transformation g = (R, T) between the camera frame and the frame attached to the cell can be obtained. From Section 3.1 we know that R is the desired rotation and N is the third column of R.
The second method is to use the spatial relationship between the plane and another plane with known pose. This usually occurs when it is difficult to identify a good symmetry cell on the plane, or when the spatial relationship is compelling and easy to apply. The spatial relationships considered here are symmetry relationships between planes, including reflective, translational, and rotational symmetry. The commonly seen parallelism can be viewed as either a reflective or a translational symmetry, while orthogonality is an example of rotational symmetry. In any case, the rotation R of the plane frame can easily be determined from the known plane that has a symmetry relationship with the plane in question.
4.2 PLANE DISTANCE DETERMINATION
Knowing N, we only need to solve for T to obtain d. Here we discuss three ways of determining T. Note that any point p on the 3-D plane can be chosen as the object frame origin, and hence T = X with X ∈ R^3 being the coordinates of p in the camera frame. Therefore, if the image of p is x in homogeneous coordinates, then T = a x with a ∈ R^+ being the depth of p. Our first method is to directly determine a for a point with known 3-D location. If a point with image x lies on the intersection line between the plane in question and a plane with known pose and distance, its coordinates X in the camera frame can be determined using the knowledge about the second plane. Then T = X, and the distance follows as d = N^T X.
The second method is to apply the alignment technique introduced in the previous section to obtain a. By identifying one line segment on the plane in question and another line segment on a known plane, with the understanding that these two segments have the same length in 3-D space, a can be solved for using the alignment technique of Section 3.2.
Finally, if none of the above techniques can be applied but we have some knowledge about the spatial relationship between the plane in question and some known plane(s), we can use this knowledge to solve for T. The useful knowledge is again the symmetry relationships. An example of reflective symmetry is illustrated in Figure 4.1. In any case, if the symmetry transformation between the desired plane and the known plane is gs, and the transformation between the camera frame and the known plane frame is gr, then the desired transformation is g = gr gs^-1. So the key point is to find the symmetry transformation gs in 3-D space. For reflective symmetry, this means identifying the reflective plane; for translational symmetry, finding the translation vector in 3-D; and for rotational symmetry, knowing the location of the rotation axis and the rotation angle. This information can be obtained from the known planes.
Fig. 4.1. Obtaining the transformation between a plane coordinate frame and the camera frame using a known reflective symmetry.
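As a hedged sketch of this composition (ours, continuing in numpy): the transforms can be packed into 4x4 homogeneous matrices, for which g = gr gs^-1 is a single matrix product. Note that for a reflective symmetry, gs has determinant -1 and is not a rigid-body motion, but the same algebra applies.

    def to_homog(R, T):
        """Pack (R, T) into a 4x4 homogeneous transform."""
        g = np.eye(4)
        g[:3, :3], g[:3, 3] = R, T
        return g

    def plane_pose_from_symmetry(g_r, g_s):
        """Pose of a plane whose frame is related to a known plane's frame by
        the 3-D symmetry g_s (reflection, translation, or rotation).
        g_r maps the known plane's frame to the camera frame; the result is
        g = g_r * g_s^-1, as in the text."""
        return g_r @ np.linalg.inv(g_s)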
4.3 OBJECT REGISTRATION
When planes are defined, objects can be chosen by selecting polygons in the image. For any image x of a point on a plane with pose (R, T), its depth a can be calculated as

a = (N^T T) / (N^T x),

where N is the normal of the plane, i.e., the third column of R. The coordinates of the point in 3-D are then X = a x. Therefore, for any image polygon S = {x1, ..., xn}, the corresponding 3-D object in the plane can be determined easily.
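In code, the registration is a direct transcription of this depth formula; the sketch below (ours, assuming numpy and calibrated homogeneous coordinates as before) lifts each polygon vertex onto the plane.

    def backproject(x, R, T):
        """Lift the (homogeneous, calibrated) image point x onto the plane
        with pose (R, T): depth a = (N^T T) / (N^T x), then X = a x."""
        N = R[:, 2]                # plane normal: third column of R
        a = (N @ T) / (N @ x)      # depth along the ray through x
        return a * x

    def register_polygon(S, R, T):
        """3-D object corresponding to an image polygon S = {x1, ..., xn}."""
        return [backproject(x, R, T) for x in S]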
CHAPTER 5
PHOTO EDITING
The techniques above allow us to perform many operations on images. Most of these operations are related to the copy-and-paste function: by identifying objects in an image, we can put them at different locations, not necessarily within a single image. The symmetry-based techniques allow us to perform all such actions using our high-level knowledge.
5.1 COPY-AND-PASTE WITHIN ONE IMAGE
Given a photo, we often want to overwrite certain regions with other objects or images. For example, we may want to eliminate some unwanted occlusion or shadow, or copy the image of an object from one place to another. While commercial photo editing software such as Photoshop lets us do this purely at the image level, it is usually hard to get the correct geometry due to perspective effects. With knowledge of the scene and registered 3-D objects, however, images with correct perspective can be obtained.
The key point of copy-and-paste is the following: for a given image point, find the image of its symmetry correspondence. To do this we need to specify the symmetry transformation gs in 3-D space. For reflective and translational symmetries, this can be achieved by selecting only one pair of symmetric points on known planes. For rotational symmetry, we also need to specify the direction of the rotation axis and the rotation angle, besides the pair of symmetric points. For any point x, first we project it into 3-D space as X; then we apply the symmetry transformation to obtain its 3-D symmetric correspondence gs(X); finally we obtain the image of gs(X).
So the copy-and-paste operation can be performed in three steps:
(1) Define the source region (object) on a known plane;
(2) Define the symmetry between the destination and source regions;
(3) For each point in the destination region, find its symmetric corresponding image point.
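A sketch of this three-step mapping follows, reusing the backproject helper from the Section 4.3 sketch; the sample argument is a hypothetical color-lookup callback into the source image, not something defined in the report.

    def symmetric_image_point(x, R, T, g_s):
        """Image of the 3-D symmetry correspondence of image point x, which
        lies on the plane with pose (R, T); g_s is the 4x4 symmetry
        transform in space."""
        X = backproject(x, R, T)               # step 1: lift x into 3-D
        Xs = (g_s @ np.append(X, 1.0))[:3]     # step 2: apply g_s(X)
        return Xs / Xs[2]                      # step 3: reproject (calibrated)

    def copy_and_paste(dst_points, R, T, g_s, sample):
        """Fill each destination point from its symmetric source point;
        `sample` is a hypothetical color-lookup callback (assumption)."""
        return [sample(symmetric_image_point(x, R, T, g_s)) for x in dst_points]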
Here we show several examples involving this copy-and-paste function. As shown in Figure 5.1, we want to remove the occlusion caused by the lights; this is accomplished by simply copying the region above, using the translational symmetry of the wall pattern. In Figure 5.2, we want to re-render the region with sunlight shadows; this is done by copying the region on the other side of the wall using the reflective symmetry between them. Besides removing unwanted objects or regions, we can also add virtual objects using the same technique: Figure 5.3 shows windows added on both sides of the walls using both translational and reflective symmetry. The last demo is an example of extending the current picture. If we extend the picture in Figure 5.4 to the right, our scene knowledge tells us that the extension is just a translational copy of part of the wall. Therefore, by applying translational symmetry to the new region, we obtain the result on the right.
Fig. 5.1. Using multiple translational symmetries to remove the occlusion caused by the lights.
Fig. 5.2. Using reflective symmetry to re-render the areas with shadows and occlusion.
Figure 5.5 is an example of applying multiple copy-and-paste actions, based on reflective and translational symmetry, to an outdoor scene. The complicated foreground has been successfully removed, and the window panels with reflections of the trees have been replaced by clean ones. This provides a good basis for further graphical manipulation of the building image.
5.2 COPY AN IMAGE PATCH ONTO ANOTHER IMAGE
Fig. 5.3. Using translational symmetry to copy-and-paste the window areas onto another blank area (the bottom row of the right-side windows is a copy of the row above it).
Fig. 5.4. Extending the picture to the right using translational symmetry (the image is extended to the right by 300 pixels).
The above techniques can also be extended to copying a patch on a plane in one image onto a plane in another image. This is done by aligning the corresponding planes in the different images. The alignment process includes three aspects (a code sketch follows the list):
(1) Alignment of plane orientations. This means that the coordinate frames on the two planes should be aligned along every axis.
(2) Alignment of scales. This means that corresponding objects in the two planes should have the same size; it usually affects the scale of the translation T of the second plane's coordinate frame.
(3) Alignment of corresponding points. Usually one pair of corresponding points is enough.
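Once the two plane frames are aligned, transferring a point is mechanical. A hedged sketch (ours, reusing backproject from the Section 4.3 sketch) maps a point through the shared plane frame:

    def transfer_point(x, R1, T1, R2, T2):
        """Map an image point x on a plane in image 1 to the corresponding
        point in image 2. (R1, T1) maps the aligned plane frame to camera 1;
        (R2, T2) maps the same frame to camera 2."""
        X1 = backproject(x, R1, T1)   # 3-D point in camera-1 coordinates
        p = R1.T @ (X1 - T1)          # coordinates in the shared plane frame
        X2 = R2 @ p + T2              # the same point in camera-2 coordinates
        return X2 / X2[2]             # its (calibrated) image in view 2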
Fig. 5.5. Using multiple symmetries to clear the foreground of the building as well as the reflections of the trees on the window panels.
The above three steps apply to the case of calibrated cameras. For an uncalibrated camera, it is necessary to identify all four corresponding vertices of the corresponding rectangular regions. Figure 5.6 is an example of copy-and-paste between two calibrated images, while Figure 5.7 shows copy-and-paste with an uncalibrated picture.
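For the uncalibrated case, the standard tool is a plane-to-plane homography fixed by the four vertex correspondences; a minimal direct-linear-transform (DLT) sketch (ours, not from the report) follows.

    def homography_from_4pts(src, dst):
        """3x3 homography H with dst ~ H src, from four (x, y) point
        correspondences, via the DLT: stack two linear constraints per
        correspondence and take the null vector of the system."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 3)   # smallest singular vector, reshaped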
Fig. 5.6. Pasting the painting Creation onto the wall in the indoor picture. The calibration information for the bottom-left painting is known.
Fig. 5.7. Combining the two photos on the left. The calibration information for the bottom-left photo is not available.
Finally, Figure 5.8 shows a comprehensive example of the above operations. A long Chinese calligraphy strip is folded correctly across the two adjacent walls, and the windows on the left side are overwritten by windows from the right using reflective symmetry.
Fig. 5.8. A comprehensive example of various operations on the original image of Figure 3.2.
5.3 IMAGE MOSAICING
Generating panoramas, or image mosaicing, from multiple images has always been an interesting problem in computer graphics. Traditional approaches usually require that the images be taken with a fixed camera center. However, by applying knowledge about planes in the scene and using symmetry cells to align the planes, we can piece together images taken from different viewpoints. The key is to use corresponding symmetry cells to align the two images of the same plane, using the alignment steps of the previous section. A byproduct is the recovery of the transformation between the two cameras. As shown in Figure 5.9, by aligning symmetry cells from the different images, we can obtain the camera motion and then combine the two pictures.
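The camera motion falls out of the algebra: if the same aligned cell has pose (R1, T1) in the first view and (R2, T2) in the second, eliminating the cell-frame coordinates gives the relative motion. A minimal sketch (ours, assuming numpy):

    def relative_camera_motion(R1, T1, R2, T2):
        """Motion (R, T) from camera 2 to camera 1, given the poses (Ri, Ti)
        of the same aligned symmetry cell in the two views:
        X1 = R1 p + T1 and X2 = R2 p + T2  imply  X1 = R X2 + T."""
        R = R1 @ R2.T
        T = T1 - R @ T2
        return R, T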
Fig. 5.9. Joining the two pictures at the top using corresponding symmetry cells (in this case, windows) on the front side of the building. The middle picture is a bird's-eye view of the recovered 3-D shape of the two sides of the building and the camera orientations (the two coordinate frames).
CHAPTER 6
CONCLUSION
In this paper, we have demonstrated several applications in photo editing that apply high-level knowledge using symmetry-based techniques. The symmetry-based approach admits a much broader range of applications than traditional approaches. With the characterization of symmetry cells, we can generate panoramas without fixing the camera center. With symmetry-based matching techniques, it is possible to build an automatic matching and reconstruction system. The symmetry-based approach displays the power of high-level knowledge in motion analysis and structure recovery.
Despite its advantages, applying high-level knowledge also presents some new challenges. First, we need to know how to effectively represent and compute the knowledge. This paper gives an example of representing the knowledge of the existence of rectangles in 3-D space; the characterization of general symmetric shapes can be performed similarly.

Secondly, we need to know how to integrate results obtained from different types of knowledge. For each image, when the spatial relationships between objects are taken into account, the objects can no longer be treated individually. The reconstruction process then needs to find an optimal solution that is compatible with the assumptions on the shapes of individual objects as well as on their relationships. In Figure 3.2 we dealt with this problem by aligning the symmetry cells with respect to a reference cell; if more symmetry cells were involved, we would have to balance the alignment process by considering all adjacent symmetry cells.

Last but not least, in the process of incorporating knowledge into any new application, we want to identify which parts can be computed automatically by the machine and how much manual intervention is really needed. For the photo editing process, we have pointed out the minimal amount of input required from the user for the different actions. In general, such a minimal requirement may not be unique and should be taken into account when designing user interfaces.

Besides the above challenges, there are also many graphics problems to be solved in the future for this photo editing system. For example, some graphical artifacts arise in the process of copy-and-paste. Although traditional image processing techniques can be applied, it is also possible to process the image by incorporating the correct 3-D geometry; for instance, at the boundary between the copied region and the original region, it is better to filter (blur) along the direction of the vanishing point. Other problems, such as how to combine results from multiple symmetry cues, are also ongoing research topics.
REFERENCES
[1] W. Hong. Geometry and Reconstruction from Spatial Symmetry. Master's thesis, UIUC, July 2003.
[2] W. Hong, A. Y. Yang, and Y. Ma. On symmetry and multiple view geometry: structure, pose and calibration from a single image. Int. Journal on Computer Vision, submitted 2002.