putpixel() : a procedure which draws the pixel with the specified colour.
B : the boundary colour.
Drawbacks: the Seed Fill algorithm has two drawbacks.
1. If some inside pixels are already painted with colour F, the recursive branch terminates, leaving further internal pixels unfilled.
2. The procedure requires stacking of neighbouring pixels. If the polygon is too large, the stack space may become insufficient for all the neighbouring pixels.
To remove the above drawbacks we use the second approach, the Scan Line Filling algorithm.
Scan Line Algorithm
This algorithm works by intersecting a scan line with the polygon edges and filling the polygon between pairs of intersections. The following steps depict how this algorithm works.
Step 1 − Find Ymin and Ymax of the given polygon.
Figure 6.4: Scan line algorithm
Step 2 − The scan line intersects each edge of the polygon from Ymin to Ymax. Name each intersection point of the polygon. As per the figure shown above, they are named p0, p1, p2, p3.
Step 3 − Sort the intersection points in increasing order of X coordinate, i.e. (p0, p1), (p1, p2), and (p2, p3).
Step 4 − Fill all those pairs of coordinates that are inside the polygon and ignore the alternate pairs.
101 CU IDOL SELF LEARNING MATERIAL (SLM)
Flood Fill Algorithm
Sometimes we come across an object where we want to fill the area and its boundary with different colours. We can paint such objects with a specified interior colour instead of searching for a particular boundary colour as in the boundary filling algorithm. Instead of relying on the boundary of the object, it relies on the fill colour. In other words, it replaces the interior colour of the object with the fill colour. When no more pixels of the original interior colour exist, the algorithm is completed. Once again, this algorithm relies on the four-connected or eight-connected method of filling in the pixels. But instead of looking for the boundary colour, it looks for all adjacent pixels that are a part of the interior.
Figure 6.5: Flood fill algorithm
Boundary Fill Algorithm
The boundary fill algorithm works as its name suggests. This algorithm picks a point inside an object and starts to fill until it hits the boundary of the object. The colour of the boundary and the colour that we fill should be different for this algorithm to work. In this algorithm, we assume that the colour of the boundary is the same for the entire object. The boundary fill algorithm can be implemented by 4-connected pixels or 8-connected pixels.
4-Connected Polygon
In this technique 4-connected pixels are used, as shown in the figure. We put the pixels above, below, to the right, and to the left of the current pixel, and this process continues until we find a boundary with a different colour.
Figure 6.6: 4 connected polygon
Algorithm
Step 1 − Initialize the value of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point is of the default colour; if so, repeat steps 4 and 5 until the boundary pixels are reached.
If getpixel(x, y) = dcol then repeat steps 4 and 5
Step 4 − Change the default colour to the fill colour at the seed point.
setPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the four neighbourhood points.
FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
Step 6 − Exit
There is a problem with this technique. Consider the case shown below, where we tried to fill the entire region. Here, the image is filled only partially. In such cases, the 4-connected pixels technique cannot be used.
Figure 6.7: Algorithm
8-Connected Polygon
In this technique 8-connected pixels are used, as shown in the figure. We put pixels above, below, right and left of the current pixel, as we were doing in the 4-connected technique. In addition to this, we also put pixels on the diagonals so that the entire area around the current pixel is covered. This process continues until we find a boundary with a different colour.
Figure 6.8: 8 Connected polygon
Algorithm
Step 1 − Initialize the value of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point is of the default colour; if so, repeat steps 4 and 5 until the boundary pixels are reached.
If getpixel(x, y) = dcol then repeat steps 4 and 5
Step 4 − Change the default colour to the fill colour at the seed point.
setPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the eight neighbourhood points.
FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
FloodFill (seedx – 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy - 1, fcol, dcol)
FloodFill (seedx – 1, seedy - 1, fcol, dcol)
Step 6 − Exit
The 4-connected pixel technique failed to fill the area marked in the following figure; this will not happen with the 8-connected technique.
Figure 6.9: Algorithm
Inside-outside Test
This method is also known as the counting number method. While filling an object, we often need to identify whether a particular point is inside the object or outside it. There are two methods by which we can identify whether a particular point is inside an object or outside:
1. Odd-Even Rule
2. Nonzero Winding Number Rule
Odd-Even Test (Inside-Outside Test)
We assume that the vertex list for the polygon is already stored and proceed as follows.
1. Take any point P outside the range Xmin to Xmax and Ymin to Ymax. Draw a scan line through P up to the point A under study.
Figure 6.10: Inside-Outside test
2. If this scan line
i. does not pass through any of the vertices, then its contribution is equal to the number of times it intersects the edges of the polygon, say C. If
a. C is odd, then A lies inside the polygon;
b. C is even, then A lies outside the polygon.
ii. If it passes through any of the vertices, then the contribution of this intersection, say V, is
a. taken as 2 (even) if the other end points of the two edges lie on one side of the scan line;
b. taken as 1 if the other end points of the two edges lie on opposite sides of the scan line.
c. The total contribution is then C + V.
Here, the points on the boundary are taken care of by calling the procedure for polygon generation.
Odd-Even Rule
In this technique, we count the edge crossings along a line from any point (x, y) to infinity. If the number of intersections is odd, then the point (x, y) is an interior point; if the number of intersections is even, then the point (x, y) is an exterior point. The following example depicts this concept.
Figure 6.11: Odd even rule
From the above figure, we can see that from the point (x, y) the number of intersection points on the left side is 5 and on the right side is 3. From both ends, the number of intersection points is odd, so the point is considered to be within the object.
Winding Number Method and Coherence Property
Winding number algorithm
This is used for non-overlapping regions and polygons only.
Steps
1. Take a point A within the range (0, 0) to (Xmax, Ymax). Join it to any point Q outside this range.
2. Give directions to all the edges in the anticlockwise direction.
Figure 6.12: Winding number algorithm
3. Check whether AQ passes through any of the vertices. If so, ignore this position of Q and choose a new Q so that AQ does not pass through any of the vertices but passes only through edges.
4. Initialize the winding number w = 0. Observe the edges intersecting AQ and
i. add 1 to w if the crossed edge moves from right to left;
ii. subtract 1 from w if the crossed edge moves from left to right.
5. If the final count of w is
i. zero, A lies outside the polygon;
ii. non-zero, A lies inside the polygon.
iii. Illuminate the interior positions until all the pixels in the above set range are painted.
Nonzero Winding Number Rule
This method is also used with simple polygons to test whether a given point is interior or not. It can be easily understood with the help of a pin and a rubber band. Fix the pin on one of the edges of the polygon, tie the rubber band to it, and then stretch the rubber band along the edges of the polygon. When all the edges of the polygon are covered by the rubber band, check the pin which has been fixed at the point to be tested. If we find at least one winding at the point, we consider it to be within the polygon; otherwise, we can say that the point is not inside the polygon.
Figure 6.13: Nonzero Winding Number Rule
In another alternative method, give directions to all the edges of the polygon. Draw a scan line from the point to be tested towards the leftmost X direction. Give the value 1 to all the edges which are going in the upward direction and −1 to all the others as direction values.
Check the direction values of the edges the scan line passes through and sum them up. If the total sum of the direction values is non-zero, then the point to be tested is an interior point; otherwise, it is an exterior point. In the above figure, summing up the direction values of the edges the scan line passes through gives 1 – 1 + 1 = 1, which is non-zero. So the point is said to be an interior point.
Priority Algorithm
In the context of computer graphics, priority fill is a hidden line/surface removal algorithm which establishes a priority list based upon the depth of the parts of an object, so that the parts farthest from the viewer are rendered first. The algorithm continues in reverse priority, just as an artist would create a painting starting with the background; elements or objects at an intermediate distance would then be added, and finally those objects in the foreground. Priority fill is also known as the painter's algorithm.
Scan Conversion of Characters
Meanings
Glyph - In information technology, a glyph (pronounced GLIF, from a Greek word meaning carving) is a graphic symbol that provides the appearance or form of a character. A glyph can be an alphabetic or numeric font or some other symbol that pictures an encoded character.
Contour - A line drawn on a map connecting points of equal height, or an outline, especially of a curving or irregular figure: a SHAPE.
Character fonts, such as letters and digits, are the building blocks of the textual content of an image, presented in a variety of styles and attributes. Character fonts on raster scanned display devices are usually represented by arrays of bits that are displayed as a matrix of black and white dots: the value for black is 0 and for white is 1.
There are three basic kinds of computer font file data formats:
Bitmap fonts consist of a series of dots or pixels representing the image of each glyph in each face and size.
Outline fonts use Bézier curves, drawing instructions and mathematical formulas to describe each glyph, which makes the character outline scalable to any size.
Stroke fonts use a series of specified lines and additional information to define the profile, size and shape of a line in a specific face and size, which together describe the appearance of the glyph.
Scan conversion is essentially the job of colouring inside the character outlines contained in the font; the scan converter is able to maintain the continuity of character bitmaps by performing
dropout control. Dropouts occur when the space within the outlines becomes so narrow that pixel centres are missed. The process of scan conversion consists of four steps:
1. Measurement: The outline of the character is traversed point by point and contour by contour in order to find the maximum and minimum coordinate values of the outline. In addition, the amount of workspace memory needed to perform steps 2 and 3 is calculated.
2. Rendering: Every contour is broken into lines and splines. Calculations are made to find the point at which each line or spline intersects the scan lines. The intersections for each scan line are sorted from left to right.
3. Filling: Using the sorted intersections, runs of pixels are set for each scan line of the bitmap from top to bottom.
4. Dropout control: If dropout control is enabled, the intersection list is checked again, looking for dropouts. If various criteria are met, it is decided which dropout pixel to set, and then it is set. Dropout control requires scanning in the vertical as well as the horizontal direction.
Aliasing
Aliasing is the distortion of information due to low-frequency sampling. Low-frequency sampling results in highly periodic images being rendered incorrectly. For example, a fence or building might appear as a few broad stripes rather than many individual smaller stripes.
Anti-Aliasing
Anti-aliasing is the process of blurring sharp edges in pictures to get rid of the jagged edges on lines. After an image is rendered, some applications automatically anti-alias images. The program looks for edges in an image and then blurs adjacent pixels to produce a smoother edge. In order to anti-alias an image when rendering, the computer has to take samples smaller than a pixel in order to figure out exactly where to blur and where not to.
Figure 6.14: Anti-Aliasing
Half Toning
Many hardcopy devices are bi-level: they produce just two intensity levels. To expand the range of available intensities, half toning (clustered-dot ordered dither) is used. It makes the most use of the spatial integration that our eyes perform: if we view a very small area from a sufficiently large viewing distance, our eyes average the fine detail within the small area and record only the overall intensity of the area.
In the halftoning approximation we have two different cases. First, when the image array being shown is smaller than the display device's pixel array; in this case multiple display pixels can be used for one image pixel. Second, when the image array has the same size as the display device's array.
Figure 6.15: Dither pattern
Dithering
Dithering is the process of converting an image with a certain bit depth to one with a lower bit depth, for example dithering an original image down to 256 colours. When an application dithers an image, it converts colours that it cannot display into patterns of two or more colours that closely resemble the original. You can see this in a black-and-white image: patterns of different intensities of black and white pixels are used to represent different shades of grey.
Thresholding
Thresholding is a process in which an image is divided into two colours, i.e. black and white. This kind of image is also called a binary image, since it is divided into two colours: black – 0 and white – 1. In this process, one or more thresholding points are decided, and then the grey level values in the given image are adjusted accordingly.
Table 6.1: Thresholding point
6.4 BOUNDARY FILLED ALGORITHMS
The Boundary Fill Algorithm starts at a pixel inside the polygon to be filled and paints the interior, proceeding outwards towards the boundary. This algorithm works only if the colour with which the region has to be filled and the colour of the boundary of the region are different. If the boundary is of one single colour, this approach proceeds outwards pixel by pixel until it hits the boundary of the region.
The Boundary Fill Algorithm is recursive in nature. It takes an interior point (x, y), a fill colour, and a boundary colour as input. The algorithm starts by checking the colour of (x, y). If its colour is equal to neither the fill colour nor the boundary colour, it is painted with the fill colour and the function is called for all the neighbours of (x, y). If a point is found to be of the fill colour or of the boundary colour, the function does not call its neighbours and returns. This process continues until all points up to the boundary colour for the region have been tested.
The boundary fill algorithm can be implemented by 4-connected pixels or 8-connected pixels.
4-connected pixels: After painting a pixel, the function is called for four neighbouring points. These are the pixel positions that are right, left, above and below the current pixel. Areas filled by this method are called 4-connected. Below given is the algorithm:
Algorithm:
void boundaryFill4(int x, int y, int fill_color, int boundary_color)
{
    if(getpixel(x, y) != boundary_color &&
       getpixel(x, y) != fill_color)
    {
        putpixel(x, y, fill_color);
        boundaryFill4(x + 1, y, fill_color, boundary_color);
        boundaryFill4(x, y + 1, fill_color, boundary_color);
        boundaryFill4(x - 1, y, fill_color, boundary_color);
        boundaryFill4(x, y - 1, fill_color, boundary_color);
    }
}
Below is the implementation of the above algorithm:
// C Implementation for Boundary Filling Algorithm
#include <graphics.h>

// Function for 4 connected Pixels
void boundaryFill4(int x, int y, int fill_color, int boundary_color)
{
    if(getpixel(x, y) != boundary_color &&
       getpixel(x, y) != fill_color)
    {
        putpixel(x, y, fill_color);
        boundaryFill4(x + 1, y, fill_color, boundary_color);
        boundaryFill4(x, y + 1, fill_color, boundary_color);
        boundaryFill4(x - 1, y, fill_color, boundary_color);
        boundaryFill4(x, y - 1, fill_color, boundary_color);
    }
}

// driver code
int main()
{
    // gd is the graphics driver; gm is the graphics mode, which is
    // a computer display mode that generates images using pixels.
    // DETECT is a macro defined in the "graphics.h" header file
    int gd = DETECT, gm;
    // initgraph initializes the graphics system by loading a
    // graphics driver from disk
    initgraph(&gd, &gm, "");

    int x = 250, y = 200, radius = 50;

    // circle function
    circle(x, y, radius);

    // Function calling
    boundaryFill4(x, y, 6, 15);
    delay(10000);
    getch();

    // closegraph function closes the graphics mode and deallocates
    // all memory allocated by the graphics system
    closegraph();
    return 0;
}
Output
8-connected pixels: More complex figures are filled using this approach. The pixels to be tested are the 8 neighbouring pixels: the pixels on the right, left, above and below, and the 4 diagonal pixels. Areas filled by this method are called 8-connected. Below given is the algorithm:
Algorithm:
void boundaryFill8(int x, int y, int fill_color, int boundary_color)
{
    if(getpixel(x, y) != boundary_color &&
       getpixel(x, y) != fill_color)
    {
        putpixel(x, y, fill_color);
        boundaryFill8(x + 1, y, fill_color, boundary_color);
        boundaryFill8(x, y + 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y, fill_color, boundary_color);
        boundaryFill8(x, y - 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y - 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y + 1, fill_color, boundary_color);
        boundaryFill8(x + 1, y - 1, fill_color, boundary_color);
        boundaryFill8(x + 1, y + 1, fill_color, boundary_color);
    }
}
Below is the implementation of the above algorithm:
// C Implementation for Boundary Filling Algorithm
#include <graphics.h>

// Function for 8 connected Pixels
void boundaryFill8(int x, int y, int fill_color, int boundary_color)
{
    if(getpixel(x, y) != boundary_color &&
       getpixel(x, y) != fill_color)
    {
        putpixel(x, y, fill_color);
        boundaryFill8(x + 1, y, fill_color, boundary_color);
        boundaryFill8(x, y + 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y, fill_color, boundary_color);
        boundaryFill8(x, y - 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y - 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y + 1, fill_color, boundary_color);
        boundaryFill8(x + 1, y - 1, fill_color, boundary_color);
        boundaryFill8(x + 1, y + 1, fill_color, boundary_color);
    }
}

// driver code
int main()
{
    // gd is the graphics driver; gm is the graphics mode, which is
    // a computer display mode that generates images using pixels.
    // DETECT is a macro defined in the "graphics.h" header file
    int gd = DETECT, gm;

    // initgraph initializes the graphics system by loading a
    // graphics driver from disk
    initgraph(&gd, &gm, "");

    // Rectangle function
    rectangle(50, 50, 100, 100);

    // Function calling
    boundaryFill8(55, 55, 4, 15);
    delay(10000);
    getch();

    // closegraph function closes the graphics mode and deallocates
    // all memory allocated by the graphics system
    closegraph();
    return 0;
}
Output:
4-Connected Pixels Vs 8-Connected Pixels
Let us take a figure with the boundary colour GREEN and the fill colour RED. The 4-connected method fails to fill this figure completely, whereas the same figure will be efficiently filled using the 8-connected technique.
Flood Fill Vs Boundary Fill
Though both the flood fill and boundary fill algorithms colour a given figure with a chosen colour, they differ in one aspect. In flood fill, all the connected pixels of a selected colour get replaced by a fill colour. On the other hand, in boundary fill, the program stops when a given colour boundary is found.
6.5 SUMMARY
In graphics, primitives are basic elements, such as lines, curves, and polygons, which can be combined to create more complex graphical images. To create any drawing in the computer, these primitives form a part of the software, and the type of display used to store these in the form of data is important. All other graphic elements are built up from these primitives. In three dimensions, triangles or polygons positioned in three-dimensional space can be used as primitives to model more complex 3D forms. In some cases, curves (such as Bézier curves, circles, etc.) may be considered primitives; in other cases, curves are complex forms created from many straight, primitive shapes.
2D graphics are widely used in animation and video games, providing a realistic, but flat, view of movement on the screen. 3D graphics provide realistic depth that allows the viewer to see into spaces, notice the movement of light and shadows, and gain a fuller understanding of what is being shown. 3D motion graphics work with the brain's natural tendency to explore what we see and enrich our understanding of the world.
While both 2D and 3D graphics can connect with people on some level, richly-detailed 3D graphics are more effective at communicating complex information and inspiring genuine feelings. They illuminate abstract ideas, providing a much richer experience that feels real.
The simplest area to fill is a polygon, because each scan line's intersection point with a polygon boundary is obtained by solving a pair of simultaneous linear equations, where the equation for the scan line is simply y = constant.
Instead of relying on the boundary of the object, flood fill relies on the fill colour. In other words, it replaces the interior colour of the object with the fill colour. When no more pixels of the original interior colour exist, the algorithm is completed. Once again, this algorithm relies on the four-connected or eight-connected method of filling in the pixels. But instead of looking for the boundary colour, it looks for all adjacent pixels that are a part of the interior.
In the 8-connected technique we put pixels above, below, right and left of the current pixel, as in the 4-connected technique. In addition to this, we also put pixels on the diagonals so that the entire area around the current pixel is covered. This process continues until we find a boundary with a different colour.
Key terms: polygon filling, scan-line filling, edge table, active edge table, odd-even parity.
A polygon is a shape formed by connecting line segments end to end, creating a closed path. Three types of polygon are recognized, figure (1): convex, concave, and complex.
The Boundary Fill Algorithm starts at a pixel inside the polygon to be filled and paints the interior, proceeding outwards towards the boundary. This algorithm works only if the colour with which the region has to be filled and the colour of the boundary of the region are different.
6.6 KEYWORDS
Polygon - In geometry, a polygon is a plane figure that is described by a finite number of straight line segments connected to form a closed polygonal chain. The
bounded plane region, the bounding circuit, or the two together, may be called a polygon. The segments of a polygonal circuit are called its edges or sides.
Boundary - A boundary is a border, and it can be physical, such as a fence between two properties, or abstract, such as a moral boundary that society decides it is wrong to cross. If you have no sense of boundaries, you probably annoy people sometimes by getting too close to them or talking about inappropriate topics.
Scan Line - A scan line is one line, or row, in a raster scanning pattern, such as a line of video on a cathode ray tube display of a television set or computer monitor.
Winding Number - In mathematics, the winding number or winding index of a closed curve in the plane around a given point is an integer representing the total number of times that curve travels counterclockwise around the point.
Region - In mathematical analysis, the word region usually refers to a subset of R^n or C^n that is open, simply connected and non-empty. A closed region is sometimes defined to be the closure of a region.
6.7 LEARNING ACTIVITY
1. Create a session for defining the term polygon for high school students.
___________________________________________________________________________
___________________________________________________________________________
2. Create a session on writing a polygon filling algorithm.
___________________________________________________________________________
___________________________________________________________________________
6.8 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions
1. What is the effectiveness of graphics?
2. List some ways a company can use animated graphics.
3. Define 3D graphics.
4. Define 2D graphics.
5. What do you mean by polygon?
Long Questions
1. Explain the filled area algorithm.
2. Explain the polygon filling algorithm.
3. Explain the boundary filled algorithm.
4. What are two-dimensional graphic primitives?
5. Explain the geometric shape polygon.
B. Multiple Choice Questions
1. Which approach is used to fill the polygon?
a. Seed fill
b. Scan fill
c. Both seed fill and scan fill
d. None of these
2. What do you term the polygon filling algorithm which fills interior-defined regions?
a. Flood fill
b. Boundary fill
c. Scan line
d. Edge fill
3. What do you term the polygon filling algorithm which fills boundary-defined regions?
a. Flood fill
b. Boundary fill
c. Edge line
d. Both flood fill and boundary fill
4. Which algorithm is used for filling a polygon?
a. Recursive
b. Non-recursive
c. Cohesive
d. None of these
5. What is the term used for a closed polyline?
a. Polychoir
b. Polygon
c. Poly closed
d. Closed chain
Answers
1-c, 2-a, 3-d, 4-b, 5-b
6.9 REFERENCES
References
C. Coelho, M. Straforani, M. Campani, "Using Geometrical Rules and a priori Knowledge for the Understanding of Indoor Scenes", Proceedings.
Computer Graphics, Donald Hearn, M. P. Baker, PHI.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
Textbooks
Computer Graphics, Amarendra Sinha, A. Udai, Tata McGraw Hill.
Computer Graphics, A. P. Godase, Technical Publications, Pune.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
Websites
https://www.tutorialspoint.com/computer_graphics/polygon_filling_algorithm.htm
https://webeduclick.com/architecture-of-raster-and-random-scan-display/
https://en.wikipedia.org/wiki/Orthographic_projection
UNIT - 7 TWO-DIMENSIONAL VIEWING
STRUCTURE
7.0 Learning Objectives
7.1 Introduction
7.2 Viewing Pipeline
7.3 Window to View Port Transformation
7.4 Window to View Port Mapping
7.5 Summary
7.6 Keywords
7.7 Learning Activity
7.8 Unit End Questions
7.9 References
7.0 LEARNING OBJECTIVES
After studying this unit, you will be able to:
Illustrate the concept of two-dimensional viewing.
Explain the basics of the viewing pipeline.
Describe the concept of window to viewport transformation.
Illustrate the concept of window to viewport mapping.
7.1 INTRODUCTION
Two-dimensional viewing: the viewing pipeline. A world coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. Two-dimensional viewing is the transformation process of real-world objects into position points relative to the viewing volume, clipping away, in particular, the points behind the viewer.
Figure 7.1: Viewing
Clipping is a computer graphics process to remove lines, objects, or line segments that lie outside the viewing pane. The clippings are categorized as follows.
Point Clipping
Point clipping uses the clipping window and checks whether the given point is within the window or not, comparing it against the minimum and maximum window coordinates.
Figure 7.2: Point Clipping
Line Clipping
The portion of a line which is located outside the window is clipped and the remaining part is retained.
Figure 7.3: Before Clipping
Figure 7.4: After Clipping
Polygon Clipping
Polygons are clipped based on the window; the portion which is inside the window is kept as it is and the outside portions are clipped.
Figure 7.5: Before Clipping
Figure 7.6: After Clipping (the figures represent polygon clipping)
Curve Clipping
The part of a curved line which resides inside the window is left as it is and the outside portions are clipped.
Text Clipping
Various graphical techniques are used for text clipping; usually the text which resides inside the window is retained and the other parts are clipped.
Figure 7.7: Before Clipping
Figure 7.8: After Clipping
7.2 VIEWING PIPELINE
The term viewing pipeline describes a series of transformations which geometry data passes through to end up as image data displayed on a device. The 2D viewing pipeline describes this process for 2D data.
Figure 7.9: Viewing Pipeline
7.3 WINDOW TO VIEW PORT TRANSFORMATION
A window-viewport transformation describes the mapping of a (rectangular) window in one coordinate system into another (rectangular) window in another coordinate system. This transformation is defined by the section of the original image that is transformed (clipping window), the location of the resulting window (viewport), and how the window is translated, scaled or rotated.
The following derivation shows how easy this transformation generally is (i.e. without rotation). The transformation is linear in x and y, and maps (xwmin, ywmin) to (xvmin, yvmin) and (xwmax, ywmax) to (xvmax, yvmax). For a given point (xw, yw) which is transformed to (xv, yv), we get:
xw = xwmin + λ(xwmax − xwmin), where 0 ≤ λ ≤ 1
=> xv = xvmin + λ(xvmax − xvmin)
Calculating λ from the first equation and substituting it into the second yields:
xv = xvmin + (xvmax − xvmin)(xw − xwmin)/(xwmax − xwmin) = xw(xvmax − xvmin)/(xwmax − xwmin) + tx
where tx is constant for all points, as is the factor sx = (xvmax − xvmin)/(xwmax − xwmin). Processing y analogously, we get an ordinary linear transformation:
xv = sx·xw + tx, yv = sy·yw + ty
Figure 7.10: World coordinates and viewport coordinates
Line Clipping
Clipping is the method of cutting away parts of a picture that lie outside the displaying window (see also the picture above). The earlier clipping is done within the viewing pipeline, the more unnecessary transformations of parts which are invisible anyway can be avoided:
Figure 7.11: Line clipping
In world coordinates, which means analytical calculation as early as possible.
During raster conversion, i.e. during the algorithm which transforms a graphic primitive to points.
Per pixel, i.e. after each calculation, just before drawing a pixel.
Since clipping is a very common operation, it has to be performed simply and quickly.
Clipping Lines: The Cohen-Sutherland Method

Generally, line clipping algorithms benefit from the fact that in a rectangular window each line can have at most one visible part. Furthermore, they should exploit the basic principles of efficiency, like early elimination of simple and common cases and avoidance of needless expensive operations (e.g. intersection calculations). Simple line clipping could look like this: the Cohen-Sutherland algorithm first classifies the endpoints of a line with respect to their relative location to the clipping window (above, below, left, right) and codes this information in 4 bits. The following verification can then be performed quickly with the codes of the two endpoints.

Figure 7.12: Cohen-Sutherland algorithm

1. OR of the codes = 0000, which implies the line is completely visible.
2. AND of the codes ≠ 0000, which implies the line is completely invisible.
3. Otherwise, intersect the line with the relevant edge of the window, replace the cut-away endpoint by the intersection point, and go to step 1.

Intersection Calculations

With vertical window edges: y = y0 + m(xwmin – x0), y = y0 + m(xwmax – x0)
With horizontal window edges: x = x0 + (ywmin – y0)/m, x = x0 + (ywmax – y0)/m

Endpoints which lie exactly on an edge of the window have to be interpreted as inside, of course. The algorithm then needs 4 iterations at most. As we can see, intersection calculations are performed only when really necessary. There are similar methods for clipping circles,
however they have to consider that circles can be divided into more than one part when being clipped.

Polygon Clipping

When clipping a polygon we have to make sure that the result is a single polygon again, even if the clipping procedure cuts the original polygon into more than one piece. Figure 7.13 shows a polygon clipped by a line clipping algorithm; afterwards it is not decidable what is inside and what is outside the polygon.

Figure 7.13: Polygon clipping

Figure 7.14 shows the result of a correct polygon clipping procedure. The polygon is divided into several pieces, each of which can be filled correctly.

Figure 7.14: Polygon clipping

Clipping Polygons: The Sutherland-Hodgman Method

The basic idea of this method comes from the fact that clipping a polygon at only one edge doesn't create many complications. Therefore, the polygon is sequentially clipped against each of the 4 window edges, and the result of each step is taken as input for the next one:

Figure 7.15: Polygon is sequentially clipped against each of the 4 window edges
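The sequential idea, clipping against one edge and feeding the result to the next, can be sketched as a pipeline of per-edge passes. The sketch below treats each window edge as a half-plane test; all names are illustrative:

```python
def clip_halfplane(poly, inside, intersect):
    """One Sutherland-Hodgman pass: clip polygon `poly` (a vertex list)
    against one window edge, given an `inside` predicate and an
    `intersect` function for that edge."""
    result = []
    for i, v1 in enumerate(poly):
        v2 = poly[(i + 1) % len(poly)]
        if inside(v2):
            if not inside(v1):              # entering: add intersection first
                result.append(intersect(v1, v2))
            result.append(v2)
        elif inside(v1):                    # leaving: add only the intersection
            result.append(intersect(v1, v2))
    return result

def clip_to_window(poly, xmin, ymin, xmax, ymax):
    def ix(v1, v2, x):  # intersection with a vertical edge x = const
        t = (x - v1[0]) / (v2[0] - v1[0])
        return (x, v1[1] + t * (v2[1] - v1[1]))
    def iy(v1, v2, y):  # intersection with a horizontal edge y = const
        t = (y - v1[1]) / (v2[1] - v1[1])
        return (v1[0] + t * (v2[0] - v1[0]), y)
    # Pipeline: the output of each edge pass is the input of the next.
    poly = clip_halfplane(poly, lambda v: v[0] >= xmin, lambda a, b: ix(a, b, xmin))
    poly = clip_halfplane(poly, lambda v: v[0] <= xmax, lambda a, b: ix(a, b, xmax))
    poly = clip_halfplane(poly, lambda v: v[1] >= ymin, lambda a, b: iy(a, b, ymin))
    poly = clip_halfplane(poly, lambda v: v[1] <= ymax, lambda a, b: iy(a, b, ymax))
    return poly

# A triangle poking out of the left edge gains a connecting edge along x = 0:
# the result has four vertices, including (0, 2.5) and (0, 7.5).
print(clip_to_window([(-5, 5), (5, 0), (5, 10)], 0, 0, 10, 10))
```

The connecting edge created along the window border is exactly the situation the text warns about when a polygon is cut into pieces.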
There are 4 different cases of how an edge (V1, V2) can be located relative to an edge of the window. Thus, each step of the sequential procedure yields one of the following results.

Figure 7.16: Sequential procedure results

The algorithm for one edge works like this: the polygon's vertices are processed sequentially. For each polygon edge we verify which of the 4 mentioned cases it fits and then create a corresponding entry in the result list. After all points are processed, the result list contains the vertices of the polygon already clipped against the current window edge. Thus it is a valid polygon again and can be used as input for the next clipping operation against the next window edge.

Figure 7.17: Flowchart for the algorithm for one edge

The three intermediate results can be avoided by calling the procedure for the 4 window edges recursively, so that each result point is instantly used as input point for the next clipping operation. Alternatively, we can construct a pipeline through these 4 operations, which has a polygon as final output: the polygon which is correctly clipped against the 4 window edges. When cutting a polygon to pieces, this procedure creates connecting edges along the
clipping window's borders. In such cases, a final verification or post-processing step may be necessary.

Figure 7.18: Polygon which is correctly clipped against the 4 window edges

Text Clipping

At first glance clipping of text seems to be trivial; however, one little point has to be kept in mind. Depending on the way the letters are created, it can happen that either only fully readable text is displayed (i.e. all letters lie completely inside the window), or that text is clipped letter by letter (i.e. all letters disappear that do not lie completely inside the window), or that text is clipped correctly (i.e. half letters can also be created).

Figure 7.19: Text clipping

Example

A normalized window has left and right boundaries of (–0.05, +0.05) and lower and upper boundaries of (0.1, 0.2). The viewport left and right boundaries are (250, 550) and its lower and upper boundaries are (100, 400). Find the coordinates of any point (u, v) in the viewport.

Solution

Window (xmin = –0.05, xmax = +0.05, ymin = 0.1, ymax = 0.2)
Viewport (umin = 250, umax = 550, vmin = 100, vmax = 400)

u = umin + (x – xmin)(umax – umin)/(xmax – xmin) = 250 + 3000(x + 0.05)
v = vmin + (y – ymin)(vmax – vmin)/(ymax – ymin) = 100 + 3000(y – 0.1)
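The example can be checked numerically with the window-to-viewport formulas; the script below is an illustrative sketch using the given boundaries:

```python
# Given window and viewport boundaries from the example.
xmin, xmax = -0.05, 0.05
ymin, ymax = 0.1, 0.2
umin, umax = 250, 550
vmin, vmax = 100, 400

# Scale factors: 300 / 0.1 = 3000 in both directions.
sx = (umax - umin) / (xmax - xmin)
sy = (vmax - vmin) / (ymax - ymin)

def to_viewport(x, y):
    return umin + sx * (x - xmin), vmin + sy * (y - ymin)

print(to_viewport(-0.05, 0.1))  # lower-left window corner -> (250.0, 100.0)
print(to_viewport(0.0, 0.15))   # window centre -> viewport centre, about (400, 250)
```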
7.4 WINDOW TO VIEW PORT MAPPING

Window

1. A world-coordinate area selected for display is called a window.
2. In computer graphics, a window is a graphical control element.
3. It consists of a visual area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration.
4. A window defines a rectangular area in world coordinates. You can define the window to be larger than, the same size as, or smaller than the actual range of data values, depending on whether you want to show all of the data or only part of the data.

Viewport

1. An area on a display device to which a window is mapped is called a viewport.
2. A viewport is a polygon viewing region in computer graphics. The viewport is an area expressed in rendering-device-specific coordinates, e.g. pixels for screen coordinates, in which the objects of interest are going to be rendered.
3. A viewport defines, in normalized coordinates, a rectangular area on the display device where the image of the data appears. You can have your graph take up the entire display device or show it in only a portion, say the upper-right part.

7.5 SUMMARY

A window-viewport transformation describes the mapping of a (rectangular) window in one coordinate system into another (rectangular) window in another coordinate system. This transformation is defined by the section of the original image that is transformed (clipping window), the location of the resulting window (viewport), and how the window is translated, scaled or rotated.

The viewing pipeline describes a series of transformations which geometry data passes through to end up as image data displayed on a device. The 2D viewing pipeline describes this process for 2D data.

Window-to-viewport transformation is the process of transforming a two-dimensional, world-coordinate scene to device coordinates. In particular, objects inside the world or clipping window are mapped to the viewport, and the viewport is displayed in the interface window on the screen. In other words, the clipping window is used to select the part of the scene that is to be displayed; the viewport then positions the scene on the output device. This transformation involves developing formulas that start with a point in the world window, say (x, y).
Clipping is the method of cutting away parts of a picture that lie outside the displaying window. The earlier clipping is done within the viewing pipeline, the more unnecessary transformations of invisible parts can be avoided.

Generally, line clipping algorithms benefit from the fact that in a rectangular window each line can have at most one visible part. Furthermore, they should exploit the basic
principles of efficiency, like early elimination of simple and common cases and avoidance of needless expensive operations (e.g. intersection calculations).

When clipping a polygon we have to make sure that the result is a single polygon again, even if the clipping procedure cuts the original polygon into more than one piece. If a polygon is clipped by a mere line clipping algorithm, it is afterwards not decidable what is inside and what is outside the polygon.

Alternatively, we can construct a pipeline through the 4 clipping operations, which has a polygon as final output: the polygon which is correctly clipped against the 4 window edges. When cutting a polygon to pieces, this procedure creates connecting edges along the clipping window's borders. In such cases, a final verification or post-processing step may be necessary.

7.6 KEYWORDS

Model coordinates - Also known as the "universe" or sometimes "model" coordinate system.

Object coordinate system - When each object is created in a modelling program, the modeller must pick some point to be the origin of that particular object, and the orientation of the object to a set of model axes.

World coordinates - The world coordinate system (WCS) is the right-handed Cartesian coordinate system in which we define the picture to be displayed. The lower left-hand corner is the origin of the coordinate system.

Normalization transformation - Mapping the window in world coordinate space to the viewport in NDC space is called the normalization transformation.

Viewing coordinates - The view coordinate system is usually a left-handed system called the UVN system. The v-axis is the perpendicular projection of the view-up vector on the view plane, and the u-axis is orthogonal to v and n, so that positive u and v point to the right and up from the eye's point of view.
Normalized coordinates - Normalized device coordinate (NDC) space is a screen-independent display coordinate system; it encompasses a cube where the x, y, and z components range from −1 to 1. The current viewport transform is applied to each vertex coordinate to generate window space coordinates.

Device coordinates - Coordinates expressed in the units of the output device itself, e.g. pixel positions on the screen. The viewport transform maps normalized device coordinates to device coordinates.

7.7 LEARNING ACTIVITY
1. Create a session comparing 2D viewing techniques with 3D viewing techniques.
___________________________________________________________________________
___________________________________________________________________________

2. Create a session on defining window to view port mapping.
___________________________________________________________________________
___________________________________________________________________________

7.8 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. What do you mean by world coordinates?
2. What do you mean by normalized coordinates?
3. What is meant by text clipping?
4. What does the viewing pipeline refer to?
5. What is meant by clipping?

Long Questions
1. Explain line clipping.
2. Define polygon clipping.
3. Define window to view port transformation.
4. Explain window to view port mapping.
5. Explain clipping lines: the Cohen-Sutherland method.

B. Multiple Choice Questions

1. Which port resembles the coordinates from the real-world system?
a. Window port
b. View port
c. Universal port
d. None of these

2. Can we represent multiple scenes from a real-world coordinate system on the view port? If yes, how?
a. By using multiple view ports
b. By using multiple window ports
c. Both a and b
d. No, we cannot represent multiple scenes from a real-world coordinate system on the view port.

3. What can be said with respect to the window port in computer graphics?
a. It represents a real-world coordinate system
b. A window port can be defined with the help of a GWINDOW statement
c. The window port is the coordinate area specially selected for the display
d. All of these

4. What is termed as the process of transforming a 2D world-coordinate object to device coordinates?
a. Window to view port transformation
b. Viewing transformation
c. Windowing transformation
d. All of these

5. What is the rectangle in the world defining the region that is to be displayed?
a. World coordinate system
b. Screen coordinate system
c. World window
d. Interface window

Answers
1-a, 2-a, 3-d, 4-d, 5-c

7.9 REFERENCES

References

C. Coelho, M. Straforani, M. Campani, "Using Geometrical Rules and a priori Knowledge for the Understanding of Indoor Scenes", Proceedings BMVC90, p. 229-234, Oxford, September 1990.
Computer Graphics, Donald Hearn, M. P. Baker, PHI.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.

Textbooks
Computer Graphics, Amarendra Sinha, A. Udai, Tata McGraw Hill.
Computer Graphics, A. P. Godase, Technical Publications Pune.
Computer Graphics, Rajesh K. Maurya, Wiley India.

Websites

https://www.brainkart.com/article/Two-dimensional-viewing_10216/
https://www.chegg.com/homework-help/definitions/two-dimensional-viewing-and-clipping-3
https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/Coordinates.html
UNIT – 8 TWO-DIMENSIONAL GEOMETRIC TRANSFORMATIONS PART I

STRUCTURE
8.0 Learning Objectives
8.1 Introduction
8.2 Two Dimensional Transformations: Transformations
8.3 Translation
8.4 Scaling
8.5 Summary
8.6 Keywords
8.7 Learning Activity
8.8 Unit End Questions
8.9 References

8.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Illustrate two dimensional geometric transformations.
Explain the translation transformation.
Describe the scaling transformation method.

8.1 INTRODUCTION

Transformations are one of the fundamental operations performed in computer graphics. They are often required when an object defined in one coordinate system needs to be observed in another coordinate system. Transformations are also useful in animation. In the coming sections we will see different types of transformations and their mathematical forms. In computer graphics we often need to transform the coordinates of an object (position, orientation and size). One can view object transformation in two complementary ways:

i. Geometric transformation: the object is transformed relative to a stationary coordinate system or background.
ii. Coordinate transformation: in this viewpoint, the coordinate system is transformed instead of the object.

On the basis of what they preserve, there are three classes of transformation:
Rigid body: preserves distance and angle. Examples: translation and rotation.
Conformal: preserves angles. Examples: translation, rotation and uniform scaling.
Affine: preserves parallelism, meaning lines remain lines. Examples: translation, rotation, scaling, shear and reflection.

In general there are four attributes of an object that may be transformed:
i. Position (translation)
ii. Size (scaling)
iii. Orientation (rotation)
iv. Shape (shear)

So far, we have seen how we can describe a scene in terms of graphics primitives, such as line segments and fill areas, and the attributes associated with these primitives. Also, we have explored the scan-line algorithms for displaying output primitives on a raster device. Now, we take a look at transformation operations that we can apply to objects to reposition or resize them. These operations are also used in the viewing routines that convert a world-coordinate scene description to a display for an output device. In addition, they are used in a variety of other applications, such as computer-aided design (CAD) and computer animation. An architect, for example, creates a layout by arranging the orientation and size of the component parts of a design, and a computer animator develops a video sequence by moving the "camera" position or the objects in a scene along specified paths. Operations that are applied to the geometric description of an object to change its position, orientation, or size are called geometric transformations. Sometimes geometric transformations are also referred to as modelling transformations, but some graphics packages make a distinction between the two.
In general, modelling transformations are used to construct a scene or to give the hierarchical description of a complex object that is composed of several parts, which in turn could be composed of simpler parts, and so forth. For example, an aircraft consists of wings, tail, fuselage, engine, and other components, each of which can be specified in terms of second-level components, and so on, down the hierarchy of component parts. Thus, the aircraft can be described in terms of these components and an associated "modelling" transformation for each one that describes how that component is to be fitted into the overall aircraft design. Geometric transformations, on the other hand, can be used to
describe how objects might move around in a scene during an animation sequence or simply to view them from another angle. Therefore, some graphics packages provide two sets of transformation routines, while other packages have a single set of functions that can be used for both geometric transformations and modelling transformations.

8.2 TWO DIMENSIONAL TRANSFORMATIONS: TRANSFORMATIONS

The geometric-transformation functions available in all graphics packages are those for translation, rotation, and scaling. Other useful transformation routines sometimes included in a package are reflection and shearing operations. To introduce the general concepts associated with geometric transformations, we first consider operations in two dimensions. Once we understand the basic concepts, we can easily write routines to perform geometric transformations on objects in a two-dimensional scene.

8.3 TRANSLATION

We perform a translation on a single coordinate point by adding offsets to its coordinates so as to generate a new coordinate position. In effect, we are moving the original point position along a straight-line path to its new location. Similarly, a translation is applied to an object that is defined with multiple coordinate positions, such as a quadrilateral, by relocating all the coordinate positions by the same displacement along parallel paths. The complete object is then displayed at the new location. To translate a two-dimensional position, we add translation distances tx and ty to the original coordinates (x, y) to obtain the new coordinate position (x′, y′), as shown in Figure 8.1:

x′ = x + tx,  y′ = y + ty

The translation distance pair (tx, ty) is called a translation vector or shift vector.
We can express the translation equations as a single matrix equation by using column vectors to represent coordinate positions and the translation vector:

P = (x, y),  P′ = (x′, y′),  T = (tx, ty), written as column vectors.

This allows us to write the two-dimensional translation equations in the matrix form:

P′ = P + T
Figure 8.1: Translating a point from position P to position P′ using translation vector T

Translation is a rigid-body transformation that moves objects without deformation; every point on the object is translated by the same amount. A straight-line segment is translated by applying the translation equations to each of the two line endpoints and redrawing the line between the new endpoint positions. A polygon is translated similarly: we add a translation vector to the coordinate position of each vertex and then regenerate the polygon using the new set of vertex coordinates. Figure 8.2 illustrates the application of a specified translation vector to move an object from one position to another. The following routine illustrates the translation operations: an input translation vector is used to move the n vertices of a polygon from one world-coordinate position to another, and OpenGL routines are used to regenerate the translated polygon.

Figure 8.2: Moving a polygon from position A to position B with a translation vector
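The OpenGL routine referred to above is not reproduced in this extract; a comparable sketch in Python, with illustrative names and no rendering calls, is:

```python
def translate_polygon(vertices, tx, ty):
    """Apply P' = P + T to every vertex. Translation is a rigid-body
    transformation: every point moves by the same displacement (tx, ty)."""
    return [(x + tx, y + ty) for (x, y) in vertices]

triangle = [(0, 0), (4, 0), (2, 3)]
print(translate_polygon(triangle, 5, 2))  # [(5, 2), (9, 2), (7, 5)]
```

In a real application the returned vertex list would then be handed to the drawing routine to regenerate the polygon at its new position.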
If we want to delete the original polygon, we could display it in the background colour before translating it. Other methods for deleting picture components are available in some graphics packages. Also, if we want to save the original polygon position, we can store the translated positions in a different array. Similar methods are used to translate other objects. To change the position of a circle or ellipse, we translate the centre coordinates and redraw the figure in the new location. For a spline curve, we translate the points that define the curve path and then reconstruct the curve sections between the new coordinate positions.

8.4 SCALING

To alter the size of an object, we apply a scaling transformation. A simple two-dimensional scaling operation is performed by multiplying object positions (x, y) by scaling factors sx and sy to produce the transformed coordinates (x′, y′):

x′ = x · sx,  y′ = y · sy

Scaling factor sx scales an object in the x direction, while sy scales in the y direction. The basic two-dimensional scaling equations can also be written in the matrix form:

P′ = S · P

where S is the 2 × 2 scaling matrix with sx and sy on its diagonal and zeros elsewhere. Any positive values can be assigned to the scaling factors sx and sy. Values less than 1 reduce the size of objects; values greater than
1 produce enlargements. Specifying a value of 1 for both sx and sy leaves the size of objects unchanged. When sx and sy are assigned the same value, a uniform scaling is produced, which maintains relative object proportions. Unequal values for sx and sy result in a differential scaling that is often used in design applications, where pictures are constructed from a few basic shapes that can be adjusted by scaling and positioning transformations (Figure 8.3). In some systems, negative values can also be specified for the scaling parameters. This not only resizes an object, it also reflects it about one or more of the coordinate axes.

Figure 8.3: Turning a rectangle into a square

Objects transformed with the scaling equations above are both scaled and repositioned. Scaling factors with absolute values less than 1 move objects closer to the coordinate origin, while absolute values greater than 1 move coordinate positions farther from the origin. Figure 8.4 illustrates scaling of a line by assigning the value 0.5 to both sx and sy. Both the line length and the distance from the origin are reduced by a factor of 1/2. We can control the location of a scaled object by choosing a position, called the fixed point, that is to remain unchanged after the scaling transformation. Coordinates for the fixed point, (xf, yf), are often chosen at some object position, such as its centroid (see Appendix A), but any other spatial position can be selected.

Figure 8.4: A line scaled with sx = sy = 0.5 is reduced in size and moved closer to the coordinate origin
Objects are now resized by scaling the distances between object points and the fixed point (Figure 8.5). For a coordinate position (x, y), the scaled coordinates (x′, y′) are then calculated from the following relationships:

x′ = xf + (x – xf) sx,  y′ = yf + (y – yf) sy

Figure 8.5: The distance from each polygon vertex to the fixed point (xf, yf) is scaled by the equations above

We can rewrite these equations to separate the multiplicative and additive terms:

x′ = x · sx + xf (1 – sx),  y′ = y · sy + yf (1 – sy)

where the additive terms xf (1 – sx) and yf (1 – sy) are constants for all points in the object. Including coordinates for a fixed point in the scaling equations is similar to including coordinates for a pivot point in the rotation equations. We can set up a column vector whose elements are these constant terms, then add this column vector to the product S · P. In the next section, we discuss a matrix formulation for the transformation equations that involves only matrix multiplication. Polygons are scaled by applying these transformations to each vertex and then regenerating the polygon using the transformed vertices. For other objects, we apply the scaling transformation equations to the parameters defining the objects. To change the size of a circle, we can scale its radius and calculate the new coordinate positions around the circumference. To change the size of an ellipse, we apply scaling parameters to its two axes and then plot the new ellipse positions about its centre coordinates. The following procedure illustrates an application of the scaling calculations for a polygon: coordinates for the polygon vertices and for the fixed point are input parameters, along with the scaling factors; after the coordinate transformations, OpenGL routines are used to generate the scaled polygon.
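The scaling procedure referred to above is likewise not reproduced here; a comparable fixed-point scaling sketch in Python (illustrative names, no OpenGL calls):

```python
def scale_polygon(vertices, sx, sy, xf=0.0, yf=0.0):
    """Scale each vertex about the fixed point (xf, yf):
        x' = xf + (x - xf) * sx,  y' = yf + (y - yf) * sy
    which is equivalent to the split form x' = x*sx + xf*(1 - sx)."""
    return [(xf + (x - xf) * sx, yf + (y - yf) * sy) for (x, y) in vertices]

square = [(2, 2), (6, 2), (6, 6), (2, 6)]
# Halve the square about its centre (4, 4): the centre stays fixed,
# so the result is not dragged toward the origin.
print(scale_polygon(square, 0.5, 0.5, xf=4, yf=4))
# [(3.0, 3.0), (5.0, 3.0), (5.0, 5.0), (3.0, 5.0)]
```

With the default fixed point at the origin, the same function reproduces the basic scaling that both resizes and repositions an object.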
8.5 SUMMARY

Transformation means changing some graphics into something else by applying rules. We can have various types of transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it is called a 2D transformation. Transformations play an important role in computer graphics to reposition the graphics on the screen and change their size or orientation.

To perform a sequence of transformations such as translation followed by rotation and scaling, we need to follow a sequential process:
i. Translate the coordinates,
ii. Rotate the translated coordinates, and then
iii. Scale the rotated coordinates to complete the composite transformation.

To shorten this process, we use a 3×3 transformation matrix instead of a 2×2 transformation matrix. To convert a 2×2 matrix to a 3×3 matrix, we add an extra dummy coordinate W.

A translation moves an object to a different position on the screen. You can translate a point in 2D by adding the translation coordinates (tx, ty) to the original coordinates (X, Y) to get the new coordinates (X′, Y′).
To change the size of an object, a scaling transformation is used. In the scaling process, you either expand or compress the dimensions of the object. Scaling can be achieved by multiplying the original coordinates of the object by the scaling factors to get the desired result. Let us assume that the original coordinates are (X, Y), the scaling factors are (SX, SY), and the produced coordinates are (X′, Y′). This can be mathematically represented as:

X′ = X · SX and Y′ = Y · SY

The scaling factors SX, SY scale the object in the X and Y directions respectively; in matrix form, P′ = P · S, where S is the scaling matrix. If we provide values less than 1 for the scaling factors, we reduce the size of the object. If we provide values greater than 1, we increase the size of the object.

8.6 KEYWORDS

Geometric transformation - Object transformation takes place relative to a stationary coordinate system or background.

Coordinate transformation - In this viewpoint, the coordinate system is transformed instead of the object.

Two dimension - In two-dimensional projectile motion, such as that of a football or other thrown object, there is both a vertical and a horizontal component to the motion. The key to analysing two-dimensional projectile motion is to break it into two motions, one along the horizontal axis and the other along the vertical.

Translation - Translational motion can be defined as motion in which all points of a moving body move uniformly in the same line or direction. In the course of translational motion, the different points of an object do not change orientation relative to each other.

Transformations - Transformation means changing some graphics into something else by applying rules. We can have various types of transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it is called a 2D transformation.

Scaling - Scaling (geometry), a linear transformation that enlarges or diminishes objects.
Scale invariance, a feature of objects or laws that do not change if scales of
length, energy, or other variables are multiplied by a common factor. Scaling law, a law that describes the scale invariance found in many natural phenomena.

8.7 LEARNING ACTIVITY

1. Create a session comparing translation with scaling.
___________________________________________________________________________
___________________________________________________________________________

2. Create a session about two dimensional transformations.
___________________________________________________________________________
___________________________________________________________________________

8.8 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. What do you mean by two-dimensional motion?
2. What do you mean by transformation?
3. What is meant by uniform scaling?
4. What are the four basic attributes of an object that can be transformed?
5. What is meant by translational motion?

Long Questions
1. Explain the scaling transformation method using a diagram.
2. What are the two complementary ways in which one can view object transformation?
3. What are the three classes of transformation based on preservation?
4. Explain the translation transformation and derive its matrix equation.
5. Explain the scaling factor.

B. Multiple Choice Questions

1. What characteristic of the object does scaling change?
a. Size
b. Colour
c. Weight
d. None of these
2. What is the scaling called when Sx and Sy are the same?
a. Real
b. Uniform
c. Apparent
d. Virtual

3. What does rigid body transformation preserve?
a. Distance
b. Angles
c. Distance and angles
d. Parallelism

4. What does conformal transformation preserve?
a. Angles
b. Distance
c. Parallelism
d. All of these

5. What does affine transformation preserve?
a. Parallelism
b. Distance
c. Angles
d. None of these

Answers
1-a, 2-b, 3-c, 4-a, 5-a

8.9 REFERENCES

References

Computer Graphics, Donald Hearn, M. P. Baker, PHI.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
Computer Graphics, Amarendra Sinha, A. Udai, Tata McGraw Hill.

Textbooks

Computer Graphics, A. P. Godase, Technical Publications Pune.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
Computer Graphics, Rajesh K. Maurya, Wiley India.

Websites

https://www.brainkart.com/article/Two-dimensional-viewing_10216/
https://www.chegg.com/homework-help/definitions/two-dimensional-viewing-and-clipping-3
https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/Coordinates.html
UNIT – 9 TWO-DIMENSIONAL GEOMETRIC TRANSFORMATIONS PART II

STRUCTURE
9.0 Learning Objectives
9.1 Introduction
9.2 Rotation
9.3 Other Transformations: Reflection
9.4 Shear
9.5 Homogeneous Coordinate System
9.6 Summary
9.7 Keywords
9.8 Learning Activity
9.9 Unit End Questions
9.10 References

9.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Explain 2D rotation transformation.
Explain two dimensional combined transformations.
Illustrate homogeneous coordinates and 2D transformation using homogeneous coordinates.
Describe shear transformations in 2D.

9.1 INTRODUCTION

This chapter is an extension of the previous chapter, in which we will discuss the rotation transformation about the origin and about an arbitrary point. We will also revisit the translation transformation, in which the position of an object changes. Homogeneous coordinates and 2D transformation using homogeneous coordinates will also be explained. If a transformation of the plane T1 is followed by a second plane transformation T2, then the result itself may be represented by a single transformation T which is the composition of T1 and T2 taken in that order. This is written as T = T1·T2.
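The composition T = T1·T2 becomes a single matrix product once the individual transformations are written as 3×3 homogeneous matrices, a representation developed later in this unit. The sketch below, with illustrative helper names, composes rotation about an arbitrary pivot out of translate-rotate-translate:

```python
import math

def translation(tx, ty):
    # Homogeneous 3x3 translation matrix.
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    # Homogeneous 3x3 rotation matrix about the origin (counter-clockwise).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    # Treat the point as the homogeneous column vector (x, y, 1).
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Rotate 90 degrees about the pivot (1, 1): translate the pivot to the
# origin, rotate, translate back -- composed into ONE matrix T.
T = matmul(translation(1, 1), matmul(rotation(math.pi / 2), translation(-1, -1)))
x, y = apply(T, 2, 1)
print(round(x, 6), round(y, 6))  # the point (2, 1) lands on (1, 2)
```

The point of the composition is that T can now be applied to every vertex of an object with a single matrix-vector product, instead of three separate transformation passes.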