5. Canvas

One of the most interesting and at the same time one of the oldest new HTML5 elements is Canvas. In July 2004, just one month after the WHATWG was formed, Apple’s David Hyatt presented a proprietary HTML extension named Canvas, an announcement that caused an uproar among the still young HTML5 movement. “The real solution is to bring these proposals to the table,” was Ian Hickson’s first reaction, and after a brief debate, Apple submitted its idea to the WHATWG. This paved the way for including Canvas in the HTML5 specification, and a first draft was published in August 2004.


Note


You can find Apple’s Canvas announcement and Ian Hickson’s reaction at:

http://weblogs.mozillazine.org/hyatt/archives/2004_07.html#005913

http://ln.hixie.ch/?start=1089635050&count=1


5.1 A First Example

Canvas is, simply put, a programmable picture on which you can draw via a JavaScript API. In addition to the canvas element itself, we also need a script element for the drawing commands. Let’s start with the canvas element:

<canvas width="1200" height="800">
  alternative content for browsers without canvas support
</canvas>

The attributes width and height determine the dimensions of the canvas element in pixels and reserve the corresponding amount of space on the HTML page. If one or both attributes are missing, default values come into effect: 300 pixels for width and 150 pixels for height. The area between the start and end tag is reserved for alternative content, which will be displayed if a browser does not support Canvas. Similar to the alt attribute for images, this alternative content should describe the content of the Canvas application or show a suitable screen shot. Phrases like Your browser does not support Canvas without any further information are not very helpful and should be avoided.

Our canvas is now finished. In the next step, we can add the drawing commands in a script element. A few lines of code are enough to turn our first, and admittedly quite trivial, Canvas example into reality:

<script>
  var canvas = document.querySelector("canvas");
  var context = canvas.getContext('2d');
  context.fillStyle = 'red';
  context.fillRect(0,0,800,600);
  context.fillStyle = 'rgba(255,255,0,0.5)';
  context.fillRect(400,200,800,600);
</script>

Even if we do not yet know anything about the syntax of the Canvas drawing commands, the result in Figure 5.1 will not come as a surprise if you look closely at the code. We now have a red and a light yellow rectangle with 50% opacity, resulting in an orange tone where the two rectangles overlap.
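The orange tone in the overlap is no accident: with the default compositing mode, each color channel of the result is src × alpha + dst × (1 − alpha). A minimal sketch of that arithmetic (blend() is our own hypothetical helper, not part of the Canvas API):

```javascript
// Sketch of default (source-over) blending per channel:
// result = src * alpha + dst * (1 - alpha). Hypothetical helper.
var blend = function(src, dst, alpha) {
  return src.map(function(v, i) { return v * alpha + dst[i] * (1 - alpha); });
};

// 50% opaque yellow rgb(255,255,0) drawn over red rgb(255,0,0):
blend([255, 255, 0], [255, 0, 0], 0.5); // → [255, 127.5, 0], an orange tone
```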

Figure 5.1 Two overlapping rectangles

image


Tip


All figures in this chapter were created as HTML pages using Canvas and can be found online either at the URL visible in the screen shot or via the Index page of the companion website at http://html5.komplett.cc/code/chap_canvas/index_en.html. Take a look at the source code!


Before we can draw on the canvas, we need to create a reference to it. The first line in the script does exactly that. In the variable canvas and using the W3C CSS Selectors API method document.querySelector(), it saves a reference to the first canvas element found in the document:

var canvas = document.querySelector("canvas");

Apart from the attributes canvas.width and canvas.height, this object, an instance of HTMLCanvasElement, has the method getContext(). It allows us to get to the heart of Canvas, the CanvasRenderingContext2D, by passing 2d as context parameter:

var context = canvas.getContext('2d');

Now we have defined the drawing context and can start drawing the two rectangles. Without going into details of the attribute fillStyle or the method fillRect(), the basic procedure for both is the same: Define the fill color and then add the rectangle:

context.fillStyle = 'red';
context.fillRect(0,0,800,600);
context.fillStyle = 'rgba(255,255,0,0.5)';
context.fillRect(400,200,800,600);

The current Canvas specification only defines a 2D context (see the HTML Canvas 2D Context specification at http://www.w3.org/TR/2dcontext) but does not rule out that others, for example 3D, could follow at a later stage. Initial steps in this direction have already been taken by the Khronos Group: In cooperation with Mozilla, Google, and Opera, it is working on a JavaScript interface called WebGL, based on OpenGL ES 2.0 (http://www.khronos.org/webgl). Early implementations of this emerging standard are present in Firefox, WebKit, and Chrome.

But back to the 2D context: The possibilities of the CanvasRenderingContext2D interface are manifold and certainly well-suited for creating sophisticated applications. Figure 5.2 shows a simple bar chart, which will accompany us through an explanation of the first three features of the drawing context: rectangles, colors, and shadows.

Figure 5.2 Bar chart with ten horizontal bars

image

5.2 Rectangles

Canvas has four methods for creating rectangles. Three of these we will discuss now, the fourth we will encounter later in connection to paths:

context.fillRect(x, y, w, h)
context.strokeRect(x, y, w, h)
context.clearRect(x, y, w, h)

The names of these methods are self-explanatory: fillRect() creates a filled rectangle, strokeRect() a rectangle with border and no filling, and clearRect() a rectangle that clears existing content like an eraser. The rectangle’s dimensions are determined by four numerical parameters: origin x/y, width w, and height h.

In Canvas, the coordinate origin is at the top left, which means the x coordinates increase toward the right and the y coordinates toward the bottom (see Figure 5.3).

Figure 5.3 The Canvas coordinate system

image

In parallel to the first example, we first define a reference to the canvas element in our bar chart and then the drawing context. The function drawBars() is responsible for doing the main job, drawing the horizontal bars. We pass the desired number of bars we want to draw to this function:

<script>
var canvas = document.querySelector("canvas");
var context = canvas.getContext('2d');
var drawBars = function(bars) {
  context.clearRect(0,0,canvas.width,canvas.height);
  for (var i=0; i<bars; i++) {
    var yOff = i*(canvas.height/bars);
    var w = Math.random()*canvas.width;
    var h = canvas.height/bars*0.8;
    context.fillRect(0,yOff,w,h);
    context.strokeRect(0,yOff,w,h);
  }
};
drawBars(10);
</script>

Calling this function with drawBars(10) deletes any existing content with clearRect() and then draws ten filled and outlined rectangles in the for loop with fillRect() and strokeRect(). The width w of the bars varies between 0 pixels and the full width of the canvas element and is determined randomly via the JavaScript function Math.random(). Math.random() generates a number between 0.0 (inclusive) and 1.0 (exclusive) and is therefore ideal for producing random values for width, height, and position, depending on the canvas dimensions: multiplying by the corresponding attribute value does the job.

The bars are arranged horizontally and spaced equally over the canvas height. The gaps between the bars result from multiplying the calculated maximal bar height by the factor 0.8.
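The layout arithmetic inside drawBars() can be factored out into a small pure function, which makes the spacing easy to follow. This is a sketch with a hypothetical helper barLayout(), using the same formulas as the loop above:

```javascript
// Bar geometry from drawBars(): for bar i of n bars on a canvas of
// height H, each bar occupies a slot of H/n pixels and is drawn at
// 80% of that slot height, leaving a gap below it.
var barLayout = function(i, bars, canvasHeight) {
  var slot = canvasHeight / bars;
  return { yOff: i * slot, h: slot * 0.8 };
};

barLayout(0, 10, 800); // first bar:  { yOff: 0,   h: 64 }
barLayout(3, 10, 800); // fourth bar: { yOff: 240, h: 64 }
```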

The canvas width and height can be read from the attributes canvas.width and canvas.height, as mentioned in the first example. Just as easily, we can access the HTMLCanvasElement from the drawing context via its attribute context.canvas and use it to generate new bars with each click on the canvas. Three lines of code added after the drawBars(10) call are enough:

context.canvas.onclick = function() {
  drawBars(10);
};

We have clarified how the ten bars are drawn, but how do we make them light gray with black outlines? We will find the answer by looking at the options of assigning color in Canvas.

5.3 Colors and Shadows

The attributes fillStyle and strokeStyle serve to specify colors for fills and lines. The color specification follows the rules for CSS color values and can have a number of different formats. Table 5.1 shows the available options, using the color red as an example.

Table 5.1 Valid CSS color values for the color red

image

To specify the current fill and stroke color in Canvas, you just need to enter the appropriate color values as a character string for fillStyle and strokeStyle. In the bar chart example, we will choose the SVG named color silver as fill and a semitransparent black outline in RGBA notation. We want all bars to look the same, so we define the styles before the drawBars() function:

context.fillStyle = 'silver';
context.strokeStyle = 'rgba(0,0,0,0.5)';
var drawBars = function(bars) {
  // code for drawing bars
};

Valid opacity values range from 0.0 (transparent) to 1.0 (opaque) and can be used as a fourth component in RGB and HSL color space. The latter defines colors not via their red, green, and blue components, but via a combination of hue, saturation, and lightness.
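When color strings are assembled dynamically, it pays to keep the opacity in its valid range. A small sketch (rgba() is our own hypothetical helper, not a Canvas API method) that builds an rgba() color string and clamps the fourth component:

```javascript
// Hypothetical helper: builds an rgba() string and clamps the opacity
// to the valid range 0.0 (transparent) to 1.0 (opaque).
var rgba = function(r, g, b, a) {
  a = Math.max(0.0, Math.min(1.0, a));
  return 'rgba(' + r + ',' + g + ',' + b + ',' + a + ')';
};

rgba(0, 0, 0, 0.5); // → 'rgba(0,0,0,0.5)', the bar chart’s outline color
rgba(0, 0, 0, 1.5); // → 'rgba(0,0,0,1)', clamped to opaque
```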


Note


You can find more information on the topic CSS colors with HSL color palettes and a list of all valid SVG color names in the CSS Color Module Level 3 specification at http://www.w3.org/TR/css3-color.


If you look closely, you can see shadows behind the bars. These are created by four additional drawing context attributes:

context.shadowOffsetX = 2.0;
context.shadowOffsetY = 2.0;
context.shadowColor = "rgba(50%,50%,50%,0.75)";
context.shadowBlur = 2.0;

The first two lines determine the shadow offset with shadowOffsetX and shadowOffsetY, shadowColor assigns its color and opacity, and shadowBlur causes the shadow to be blurred. As a general rule, the higher the value of shadowBlur, the stronger the blur effect.

Before moving on to color gradients in the next section, we need to clarify how the dotted border in the bar chart and the subsequent graphics is achieved. The answer is very simple: with CSS. Every canvas element can of course also be formatted with CSS. You can specify spacing, position, and z-index just as easily as background color and border. In our example, the following style attribute creates the dotted border:

<canvas style="border: 1px dotted black;">

5.4 Gradients

In addition to solid colors for fills and lines, Canvas offers two kinds of gradients: linear and radial gradients. The basic principle of creating gradients in Canvas is easily demonstrated using the example of a simple gradient from red to yellow and orange and then to purple (see Figure 5.4).

Figure 5.4 Linear gradient with four colors

image

First, context.createLinearGradient(x0, y0, x1, y1) creates a CanvasGradient object and determines the direction of the gradient via the parameters x0, y0, x1, y1. We still need to specify the color offsets in another step, so we save this object in the variable linGrad:

var linGrad = context.createLinearGradient(
  0,450,1000,450
);

The method addColorStop(offset, color) of the CanvasGradient object is the next step and selects the desired colors and offsets on our imaginary gradient line. Offset 0.0 represents the color at the point x0/y0 and offset 1.0 the color at the end point x1/y1. All colors in between are divided up according to their offset, and transitions between the individual stops are interpolated by the browser in RGBA color space:

linGrad.addColorStop(0.0, 'red');
linGrad.addColorStop(0.5, 'yellow');
linGrad.addColorStop(0.7, 'orange');
linGrad.addColorStop(1.0, 'purple');

Colors are specified following the rules for CSS color values and are identified as SVG named colors in our examples to make it more readable. Our linear gradient is now finished and can be assigned via fillStyle or strokeStyle:

context.fillStyle = linGrad;
context.fillRect(0,450,1000,450);
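The per-channel RGBA interpolation between neighboring stops mentioned above can be sketched in a few lines. colorAt() is a hypothetical helper that mimics what the browser does between two stops:

```javascript
// Linear interpolation between two color stops in RGBA space (sketch).
// stop0/stop1: { offset: ..., color: [r,g,b,a] }; pos lies between the
// two offsets. Each channel is interpolated independently.
var colorAt = function(stop0, stop1, pos) {
  var t = (pos - stop0.offset) / (stop1.offset - stop0.offset);
  return stop0.color.map(function(v, i) {
    return v + (stop1.color[i] - v) * t;
  });
};

// Halfway between red (offset 0.0) and yellow (offset 0.5):
colorAt({ offset: 0.0, color: [255, 0, 0, 1] },
        { offset: 0.5, color: [255, 255, 0, 1] },
        0.25); // → [255, 127.5, 0, 1]
```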

Unlike linear gradients, the start and end points of radial gradients are not points, but circles. So to define a radial gradient, we now need to use the method context.createRadialGradient(x0, y0, r0, x1, y1, r1) (see Figure 5.5).

Figure 5.5 Components of a radial gradient

image

On the left side of the graphic, you can see the start and end circle, in the middle the three color stops with offset values, and on the right the final result: a sphere that appears to glow. A very appealing result is generated by a bit of clear and simple source code:

var radGrad = context.createRadialGradient(
  260,320,40,200,400,200
);
radGrad.addColorStop(0.0,'yellow');
radGrad.addColorStop(0.9,'orange');
radGrad.addColorStop(1.0,'rgba(0,0,0,0)');
context.fillStyle = radGrad;
context.fillRect(0,200,400,400);

The shadow effect around the sphere is incidentally created by the last two color stops, interpolating from orange to transparent black, which means the visible part of the gradient ends directly at the outer circle.

After this quick trip through the world of colors and gradients, we now move on to other geometric forms: paths.

5.5 Paths

The process of creating paths in Canvas is comparable to drawing on a piece of paper: You put the pencil on the paper at one point, draw, lift the pencil off again, and continue drawing at another point on the paper. The content you draw can range from simple lines to complex curves or even polygons formed from these. An initial example illustrates the concept, translating each step of writing the letter A into Canvas path commands:

context.beginPath();
context.moveTo(300,700);
context.lineTo(600,100);
context.lineTo(900,700);
context.moveTo(350,400);
context.lineTo(850,400);
context.stroke();

The results are shown in Figure 5.6.

Figure 5.6 The letter A as a path

image

Let’s look closer at the source code for this example. We can see the three phases of creating the path:

1. Initialize a new path with beginPath()

2. Define the path geometry with moveTo() and lineTo() calls

3. Draw the lines with stroke()

Each path must be initialized with beginPath() and can then contain any number of segments. In our example, we have two segments that reproduce the hand movements when writing through combinations of moveTo() and lineTo(). This creates first the roof shape and then the horizontal line of the letter A. With stroke(), we then draw the defined path onto the canvas.

Whether and when to split the segments of a path into several individual paths depends entirely on the layout, because each path can only be formatted in its entirety. So, if we wanted the horizontal line of the letter A to have a different color, we would need to define two separate paths.

Let’s look at the main path drawing methods in more detail.

5.5.1 Lines

To create lines as in our example of the letter A, Canvas offers the method lineTo():

context.lineTo(x, y)

The effect of the method is shown in Figure 5.7.

Figure 5.7 The path method “lineTo()”

image

Expressed in words, this means draw a line to point x/y, so we must either have already defined a starting point with moveTo() or still have a current point from the previous drawing step. After drawing the line, the coordinate x/y becomes the new current point.
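The bookkeeping behind the current point can be modeled in a few lines. This is a toy model, not the real implementation: moveTo() and lineTo() both make their target the new current point, and lineTo() additionally records a line segment:

```javascript
// Toy model of the current point (not part of the Canvas API).
var currentPoint = null;
var segments = [];
var moveTo = function(x, y) { currentPoint = [x, y]; };
var lineTo = function(x, y) {
  segments.push([currentPoint, [x, y]]); // line from current point to x/y
  currentPoint = [x, y];                 // x/y becomes the new current point
};

// The roof of the letter A from the path example:
moveTo(300, 700);
lineTo(600, 100);
lineTo(900, 700);
// currentPoint is now [900, 700]; segments holds the two roof lines
```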


Note


In all graphics used to demonstrate the path drawing methods, we have marked the starting point x0/y0 in light gray and the new current point in bold type.


5.5.2 Bézier Curves

Canvas knows two kinds of Bézier curves: quadratic and cubic, the latter somewhat imprecisely named just bezierCurveTo(). Figure 5.8 illustrates the former, and Figure 5.9 illustrates the latter.

context.quadraticCurveTo(cpx, cpy, x, y)
context.bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y)

Figure 5.8 The path method “quadraticCurveTo()”

image

Figure 5.9 The path method “bezierCurveTo()”

image

To create Bézier curves, we need the current point as a starting coordinate plus a target coordinate and, depending on the type of curve, one or two control points. In both cases, the coordinate x/y becomes the new current point after drawing the curve.

5.5.3 Arcs

Methods for creating arcs are not quite as straightforward. The first method is defined by two coordinates and a radius:

context.arcTo(x1, y1, x2, y2, radius)

As shown in Figure 5.10, arcTo() creates the new path as follows: A circle with the given radius is fitted between the line from x0/y0 to x1/y1 and the line from x1/y1 to x2/y2, so that the circle touches each line in exactly one point, the tangent points t1 and t2. The arc between these two points becomes part of the path, and the tangent point t2 becomes the new current point.

Figure 5.10 The path method “arcTo()”

image

In practice, this method is very useful for rectangles with rounded corners. A reusable function will come in handy to do the job shown in Figure 5.11.

Figure 5.11 Four different rectangles with rounded corners; the circle is an extreme example of a rounded rectangle

image

var roundedRect = function(x,y,w,h,r) {
  context.beginPath();
  context.moveTo(x,y+r);
  context.arcTo(x,y,x+w,y,r);
  context.arcTo(x+w,y,x+w,y+h,r);
  context.arcTo(x+w,y+h,x,y+h,r);
  context.arcTo(x,y+h,x,y,r);
  context.closePath();
  context.stroke();
};
roundedRect(100,100,700,500,60);
roundedRect(900,150,160,160,80);
roundedRect(700,400,400,300,40);
roundedRect(150,650,400,80,10);

The function roundedRect() requires the basic values for the rectangle plus the radius for rounding. It then draws the desired rectangle with a moveTo() method, four arcTo() methods, and a closePath() method. You have not yet encountered the method closePath(): It closes the rectangle by joining the last point back up to the start point.

The second option for creating arcs—the method arc()—seems even more complicated at first glance. In addition to center and radius, we now have to specify two angles and the direction of rotation:

context.arc(x, y, radius, startAngle, endAngle, anticlockwise)

The center point of the arc in Figure 5.12 is the center of a circle with a given radius. Originating from this point, the angles startAngle and endAngle create two handles, intersecting the circle in two points. The direction of the arc between these two coordinates is determined by the parameter anticlockwise, where 0 means clockwise and 1 counterclockwise.
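The two intersection points follow directly from the angles via the standard circle parametrization; note that because y grows downward in Canvas, positive angles run clockwise. A sketch with a hypothetical helper pointOnCircle():

```javascript
// Point where an angle handle intersects the circle: standard circle
// parametrization, angle in radians. In Canvas coordinates (y downward),
// increasing angles move clockwise.
var pointOnCircle = function(cx, cy, r, angle) {
  return [cx + r * Math.cos(angle), cy + r * Math.sin(angle)];
};

pointOnCircle(0, 0, 100, 0);           // → [100, 0]
pointOnCircle(0, 0, 100, Math.PI / 2); // → approximately [0, 100]
```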

Figure 5.12 The path method “arc()”

image

The resulting path begins at the current point x0/y0 (in Figure 5.12, the center of the circle), joins this point in a straight line to the first intersection point spx/spy, and from there draws an arc to the end point epx/epy, which then becomes the new current point.

The biggest drawback in creating arcs is that all angles must be specified in radians instead of degrees. So here’s a quick helper to refresh your memory on how to convert:

var deg2rad = function(deg) {
  return deg*(Math.PI/180.0);
};

Talking of helper functions, let’s use two more to facilitate drawing circles and sectors. For circles, we really only need center and radius, the rest will be taken care of by the function circle():

var circle = function(cx,cy,r) {
  context.moveTo(cx+r,cy);
  context.arc(cx,cy,r,0,Math.PI*2.0,0);
};

Especially for circle diagrams, also called pie charts, specifying the angles in radians seems hardly intuitive. Our function sector() does the tedious conversion chore for us and allows us to specify start and end angles in degrees:

var sector = function(cx,cy,r,
    startAngle,endAngle, anticlockwise
  ) {
  context.moveTo(cx,cy);
  context.arc(
    cx,cy,r,
    startAngle*(Math.PI/180.0),
    endAngle*(Math.PI/180.0),
    anticlockwise
  );
  context.closePath();
};

Now, just a few lines of code are enough to draw circles and pie charts without losing track:

context.beginPath();
circle(300,400,250);
circle(300,400,160);
circle(300,400,60);
sector(905,400,250,-90,30,0);
sector(900,410,280,30,150,0);
sector(895,400,230,150,270,0);
context.stroke();

Figure 5.13 shows the result.

Figure 5.13 Circles and sectors

image

5.5.4 Rectangles

The method rect() behaves a bit like our helper functions and differs from the other path methods:

context.rect(x, y, w, h)

In contrast to the previous path drawing methods, the current point x0/y0 is ignored altogether when drawing with rect(); instead, the rectangle is defined via the parameters x, y, width w, and height h. The origin point x/y then becomes the new current point after drawing (see Figure 5.14).

Figure 5.14 The path method “rect()”

image

5.5.5 Outlines, Fills, and Clipping Masks

If we think back to the three stages of creating a path (initialization, defining the path geometry, and drawing), we have now reached the third and last stage: the drawing. Here we decide what the path should look like. In all previous examples, we chose a simple outline at this point, created via the following method:

context.stroke()

The line color is determined by the attribute strokeStyle. You can also define the width of the line (lineWidth), what the ends of the line should look like (lineCap), and the join between lines (lineJoin) using three other Canvas attributes (the asterisk indicates default values; we will encounter it repeatedly from now on):

context.lineWidth = [ Pixel ]
context.lineCap = [ *butt, round, square ]
context.lineJoin = [ bevel, round, *miter ]

Figure 5.15 provides examples of the width, end, and join attributes.

Figure 5.15 Attributes for determining line styles

image

The lineWidth is specified in pixels; the default setting is 1.0. As with the two other line attributes, the line width applies not only to lines and polygons, but also to rectangles created with strokeRect().

If we want to add a cap to a line with lineCap, we can choose butt, round, or square; butt is the default value. If we use round, the line gets a round cap by adding a semicircle at the end of the line with half the lineWidth as a radius. For square, the semicircle is replaced by a rectangle with a height of half the line width.

To create beveled line joins, we use the attribute lineJoin with the value bevel; we can also round the corners with round or create mitered joins with miter, which is the default value. To stop the tip of mitered joins from becoming too long at acute angles, the specification provides the attribute miterLimit with a default value of 10.0. It is the ratio of the miter length (the distance from the point where the lines join to the tip of the miter) to half the line width. If the miterLimit is exceeded, the tip is trimmed, creating the same effect as bevel.
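The trim decision can be computed: for two segments meeting at an angle θ, the ratio in question works out to 1/sin(θ/2). A quick sketch (miterRatio() is our own helper, taking degrees for convenience):

```javascript
// Miter ratio as defined in the spec: miter length divided by half the
// line width, which equals 1/sin(theta/2) for a join angle theta.
var miterRatio = function(thetaDegrees) {
  var theta = thetaDegrees * Math.PI / 180;
  return 1 / Math.sin(theta / 2);
};

// With the default miterLimit of 10.0, joins sharper than about 11.5°
// are trimmed to a bevel:
miterRatio(11) > 10; // → true  (ratio ≈ 10.4, tip is trimmed)
miterRatio(12) > 10; // → false (ratio ≈ 9.6, miter is kept)
```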

To fill paths with a color or gradient, we first need to set the appropriate style attribute with fillStyle and then call the following path method:

context.fill()

This may sound simple but can get very complicated if paths self-intersect or are nested. In such cases, the so-called non-zero winding number rule takes effect: It decides whether to fill or not depending on the winding direction of the subpaths involved.

Figure 5.16 shows the non-zero rule in action. On the left, both circles were drawn in clockwise direction; on the right, the inner circle was drawn counterclockwise, leading to the hole in the center.
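The rule itself fits in one line: sum the winding directions of the subpaths enclosing a point and fill only if the sum is non-zero. A toy sketch (nonZero() is our own helper, with +1 for clockwise and −1 for counterclockwise):

```javascript
// The non-zero winding rule in miniature: a point is filled if the sum
// of the winding directions of the subpaths around it is not zero.
var nonZero = function(windings) {
  var sum = windings.reduce(function(a, b) { return a + b; }, 0);
  return sum !== 0;
};

nonZero([1,  1]); // → true:  both circles clockwise, the center is filled
nonZero([1, -1]); // → false: inner circle reversed, the center becomes a hole
```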

Figure 5.16 The non-zero fill rule for paths

image

To help us draw the directional circles, we used the helper from the arc() section, this time slightly modified: The desired direction is now passed as an argument. Valid settings for anticlockwise are 0 and 1:

var circle = function(cx,cy,r,anticlockwise) {
  context.moveTo(cx+r,cy);
  context.arc(cx,cy,r,0,Math.PI*2.0,anticlockwise);
};

The code for the circle on the right with the hole in it looks like this:

context.beginPath();
context.fillStyle = 'yellow';
circle(900,400,240,0);
circle(900,400,120,1);
context.fill();
context.stroke();

After stroke() and fill(), we need only one other method for drawing paths—the method

context.clip().

The explanation is as short as its name: clip() ensures that the defined path is not drawn but used as a cutout for all other drawing elements. Anything within the mask remains visible; the rest is hidden. You can reset the mask by creating another clipping mask using the entire canvas area as geometry. We will encounter a more elegant method later on, in section 5.13, with save() and restore().

Let’s now move on to the topic of text, a topic to which the specification devotes only four pages. Could it be that text support in Canvas is not exactly great?

5.6 Text

At first glance, it is probably true that text support in Canvas is not great, because the options for using text in Canvas are meager and limited to formatting and positioning simple character strings. There is no running text with automatic line breaks, nor paragraph formats or the option to select already existing texts.

We are left with three attributes for text formatting, two methods for drawing text, and one method for determining the width of a character string while taking into account the current format. This does not seem like much, but if we look more closely, it becomes clear that those four pages of specification are based on well-thought-out details.

5.6.1 Fonts

The definition of the font attribute simply refers to the CSS specification and states that context.font is subject to the same syntax as the CSS font shorthand notation:

context.font = [ CSS font property ]

In this manner, all font properties can be easily specified in a single string. Table 5.2 lists the individual components and their possible values.

Table 5.2 The components of the CSS “font” property

image

When assembling the font attribute, only the properties font-size and font-family are required. All others are optional, and if omitted, default to the values marked with an asterisk as shown in Table 5.2. Because Canvas text does not recognize line breaks, the attribute line-height has no effect and is always ignored. The cleaned-up pattern for assembling the components is therefore:

context.font = [
  font-style font-variant font-weight font-size font-family
]

Regarding the font-family, the same rules apply as for defining fonts in stylesheets: You can specify any combination of font families and/or generic font families. The browser then picks the first known font from that priority list.
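Assembling the shorthand string can be automated. makeFont() below is a hypothetical helper covering the two required components plus an optional weight; the context.font assignment in the comment assumes a browser environment:

```javascript
// Builds a CSS font shorthand from font-size and font-family (required)
// plus an optional font-weight. Hypothetical helper, not part of the API.
var makeFont = function(size, family, weight) {
  return (weight ? weight + ' ' : '') + size + ' ' + family;
};

makeFont('24px', 'Helvetica, sans-serif');
// → '24px Helvetica, sans-serif'
makeFont('24px', 'Helvetica, sans-serif', 'bold');
// → 'bold 24px Helvetica, sans-serif'
// In the browser: context.font = makeFont('24px', 'serif', 'bold');
```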

You can achieve complete independence from the browser or the relevant platform and its fonts by using webfonts. Once they are integrated into a stylesheet via @font-face, they are available as font-family in Canvas, too, via the font name assigned:

@font-face {
  font-family: Scriptina;
  src: url('fonts/scriptina.ttf');
}

Figure 5.17 shows brief examples of valid CSS font attributes and their rendering in Canvas. The source of the webfont Scriptina in the preceding example is http://www.fontex.org—a well-organized collection of free fonts that are available for download.

Figure 5.17 Font formatting with the “font” attribute

image

At the time of this writing, no browser supported @font-face without problems. In Firefox, for example, the webfont Scriptina in the last line only appears in Canvas if it is used at least once in the HTML document. The correct implementation of small-caps is also missing in Firefox, which is why the second to last example is not displayed correctly either.

5.6.2 Horizontal Anchor Point

The attribute textAlign determines the horizontal anchor point of Canvas texts:

context.textAlign = [
  left | right | center | *start | end
]

The keywords left, right, and center are familiar from the CSS property text-align, whereas start and end are CSS3 extensions that take text direction into account. Some languages, for example Arabic and Hebrew, are written not from left to right but from right to left.

Figure 5.18 presents the horizontal anchor points for writing with textflow ltr (left to right) and rtl (right to left), demonstrating the effect of directionality on the attributes start and end.

Figure 5.18 Horizontal anchor points with “textAlign”

image


Note


In the browser, the directionality of a document can be changed via the global attribute document.dir:

 document.dir = [ *ltr | rtl ]


5.6.3 Vertical Anchor Point

The vertical anchor point and therefore the baseline on which all glyphs are aligned is determined by the third and last text attribute, textBaseline:

context.textBaseline = [
  top | middle | *alphabetic | bottom | hanging | ideographic
]

The first four textBaseline keywords, top, middle, alphabetic and bottom are self-explanatory. A hanging baseline is required by Devanagari, Gurmukhi, and Bengali, three Indian alphabets used for writing the languages Sanskrit, Hindi, Marathi, Nepali or Panjabi, and Bengali. The group of ideographic writing systems includes Chinese, Japanese, Korean, and Vietnamese. Figure 5.19 illustrates the textBaseline vertical anchor points.

Figure 5.19 Vertical anchor points with “textBaseline”

image

5.6.4 Drawing and Measuring Text

Once font and anchor point have been determined, you only need to draw the text. Similar to rectangles, you can decide on a fill and/or outline, and you can even limit the text width with the optional parameter maxWidth:

context.fillText(text, x, y, maxWidth)
context.strokeText(text, x, y, maxWidth)

Finally, you can measure the text dimension with the method measureText(), which can at least determine the width while taking into account the current format. In our example in Figure 5.20, the bottom right value (759) was calculated using this method:

textWidth = context.measureText(text).width

Figure 5.20 “fillText()”, “strokeText()”, and “measureText()”

image
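One common use of measureText() is centering a string on the canvas; the arithmetic is a one-liner. A sketch with a hypothetical helper centerX(); the context calls in the comments assume a browser environment:

```javascript
// Horizontal centering: subtract the measured text width from the
// canvas width and halve the remainder to get the x coordinate.
var centerX = function(canvasWidth, textWidth) {
  return (canvasWidth - textWidth) / 2;
};

// In the browser:
//   var w = context.measureText('Yosemite').width;
//   context.fillText('Yosemite', centerX(canvas.width, w), 100);
centerX(600, 200); // → 200
```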

It is not currently possible to determine the height and origin point of the bounding box, but this may be implemented in a future version of the specification, together with multiline text layout. The final note in the text chapter of the Canvas specification sounds promising: It indicates that in the future, fragments of documents (e.g., formatted paragraphs) might also find their way into Canvas via CSS.

The Canvas API offers a multitude of options for working in Canvas with raster-based formats not only in the future, but right now. In addition to embedding images and videos, you also have optional reading and writing access to every pixel on the canvas area. You can read up on how to do this in section 5.8, Pixel Manipulation.

5.7 Embedding Images

For embedding images, Canvas offers the method drawImage(), which we can invoke with three different parameter sets (the method can take three, five, or nine arguments):

context.drawImage(image, dx, dy)
context.drawImage(image, dx, dy, dw, dh)
context.drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)

In all three cases we need an image, canvas, or video element as the first parameter, which can be integrated dynamically via JavaScript or statically in the HTML code. Animated images and videos are not rendered as animations but displayed statically, as the first frame or, if present, a poster frame.

All other arguments of the method drawImage() control the position and size of the rendered image in the target canvas and, optionally, which section of the source image is used. Figure 5.21 shows the graphic interpretation of the possible position parameters; the prefix s stands for source and d for destination.

Figure 5.21 Position parameters of the “drawImage()” method

image

Let’s now compare the individual drawImage() methods using three simple examples. The common setup is a picture measuring 1200 × 800 pixels, created dynamically as a JavaScript object (see Figure 5.22):

var image = new Image();
image.src = 'images/yosemite.jpg';

Figure 5.22 The source image of all “drawImage()” examples

image

In addition to pixel sizes, which we will encounter in the examples, Figure 5.22 shows the impressive 1000-meter-high rock face of El Capitan in Yosemite National Park: The photo was taken from Taft Point. This picture is now drawn onload onto the 600 × 400 pixel target canvas, using one of the three possible sets of arguments. The first and simplest option determines the top-left corner of the image in the target canvas with dx/dy. In our case, this is the position 0/0:

image.onload = function() {
  context.drawImage(image,0,0);
};

Width and height are copied directly from the original image, and because our image is bigger than the target canvas, it will come as no surprise that we only see the top-left quarter of Taft Point on our canvas (see Figure 5.23).

Figure 5.23 Taft Point in Yosemite National Park

image

If we want to represent the whole image in the canvas, we also have to specify the desired width and height in the arguments dw/dh. The browser then takes care of scaling the image to 600 × 400 pixels. The result is shown in Figure 5.24:

image.onload = function() {
  context.drawImage(image,0,0,600,400);
};

Figure 5.24 Taft Point with El Capitan in Yosemite National Park

image

In contrast to the two previous variations of drawImage(), which could have been realized with CSS as well, the third variation offers completely new possibilities of working with images. We can now copy any section of the source image (sx, sy, sw, sh) into the defined area of the target canvas (dx, dy, dw, dh). So nothing stands in the way of image montage:

image.onload = function() {
  context.drawImage(image,0,0);
  context.drawImage(
    image, 620,300,300,375,390,10,200,250
  );
};

The result is shown in Figure 5.25.

Figure 5.25 Yosemite National Park postcard

image

The first drawImage() call again renders the top-left quarter of Taft Point; the second extracts the area around El Capitan and draws it as an icon into the top-right corner. Text with shadows completes the rudimentary layout of our postcard.

If you would rather have El Capitan in the foreground and Taft Point as a stamp at the top right, you just need to slightly modify the drawImage() calls. In our example you can do this by clicking on the canvas:

canvas.onclick = function() {
  context.drawImage(
    image,600,250,600,400,0,0,600,400
  );
  context.drawImage(
    image,0,0,500,625,390,10,200,250
  );
};

This yields the image shown in Figure 5.26.

Figure 5.26 Yosemite National Park postcard (alternative layout)

image

This was a brief introduction to the topic drawImage(), using an image as a source. You will find a detailed example of using the video element as the first parameter of drawImage() in section 5.14.2, Playing a Video with “drawImage()”, but first we will discuss how you can get both read and write access to pixel values on the canvas area.

5.8 Pixel Manipulation

As methods for reading and manipulating pixel values, we have three choices: getImageData(), putImageData(), and createImageData(). Because all three contain the term ImageData, we first need to define what this refers to.

5.8.1 Working with the “ImageData” Object

Let’s approach the ImageData object with a 2 × 2 pixel-sized canvas, onto which we draw four 1 × 1 pixel rectangles filled with the named colors navy, teal, lime, and yellow:

context.fillStyle = 'navy';
context.fillRect(0,0,1,1);
context.fillStyle = 'teal';
context.fillRect(1,0,1,1);
context.fillStyle = 'lime';
context.fillRect(0,1,1,1);
context.fillStyle = 'yellow';
context.fillRect(1,1,1,1);

In the next step, we use the method getImageData(sx, sy, sw, sh) to get the ImageData object. The four arguments determine the desired canvas section as a rectangle, as shown in Figure 5.27:

var ImageData = context.getImageData(
  0,0,canvas.width,canvas.height
);

Figure 5.27 The “ImageData” object

image

The ImageData object has the attributes ImageData.width, ImageData.height, and ImageData.data. The latter contains the actual pixel values in the so-called CanvasPixelArray. This is a flat array with red, green, blue, and alpha values for each pixel of the selected section, starting at the top left and running from left to right and top to bottom. The total number of values is stored in the attribute ImageData.data.length.

Using a simple for loop, we can now read the individual values of the CanvasPixelArray and make them visible with alert(). Starting at 0, we move from pixel to pixel by increasing the counter by 4 after each iteration. The RGBA values are read at offsets from the current position: red at counter i, green at i+1, blue at i+2, and the alpha component at i+3:

for (var i=0; i<ImageData.data.length; i+=4) {
  var r = ImageData.data[i];
  var g = ImageData.data[i+1];
  var b = ImageData.data[i+2];
  var a = ImageData.data[i+3];
  alert(r+" "+g+" "+b+" "+a);
}
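The index arithmetic generalizes to arbitrary coordinates. The two helpers below are our own (pixelOffset and getPixel are not part of the API) and show how to reach the pixel at column x and row y directly:

```javascript
// Our own helpers, not part of the Canvas API: each pixel occupies four
// consecutive entries (R, G, B, A), and rows are stored top to bottom,
// so a pixel's offset follows directly from its coordinates.
var pixelOffset = function(x, y, width) {
  return (y * width + x) * 4;
};

var getPixel = function(imagedata, x, y) {
  var i = pixelOffset(x, y, imagedata.width);
  return [
    imagedata.data[i],     // red
    imagedata.data[i + 1], // green
    imagedata.data[i + 2], // blue
    imagedata.data[i + 3]  // alpha
  ];
};
```

For our 2 × 2 canvas, getPixel(ImageData, 1, 1) would return the yellow pixel [255, 255, 0, 255].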

Modifying pixel values works exactly the same: We change the CanvasPixelArray in-place by assigning new values. In our example, the RGB values are set to random numbers between 0 and 255 via Math.random(); the alpha component remains unchanged:

for (var i=0; i<ImageData.data.length; i+=4) {
  ImageData.data[i] = parseInt(Math.random()*255);
  ImageData.data[i+1] = parseInt(Math.random()*255);
  ImageData.data[i+2] = parseInt(Math.random()*255);
}

After this step, the canvas still looks the same. The new colors only become visible after we write the modified CanvasPixelArray back to the canvas via the method putImageData(). When calling putImageData(), we can have a maximum of seven parameters:

context.putImageData(
  ImageData, dx, dy, [ dirtyX, dirtyY, dirtyWidth, dirtyHeight ]
)

The first three parameters are required; in addition to the ImageData object, they specify the coordinates of the origin point dx/dy, from which the CanvasPixelArray is applied using its width and height attributes. The optional dirty parameters select only a specified section of the CanvasPixelArray and write back just that section, with correspondingly reduced width and height. Figure 5.28 shows our 4-pixel canvas before and after modification, with a list of the relevant values of the CanvasPixelArray.
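The copy semantics, including the dirty rectangle, can be sketched as a plain JavaScript function working on ImageData-like objects (an illustration of our own; it ignores clipping at the canvas edges):

```javascript
// Our own sketch of what putImageData() conceptually does: copy either the
// whole source array or only its dirty rectangle onto the target, with the
// source origin placed at dx/dy. Clipping at the edges is omitted here.
var putImageDataSketch = function(target, source, dx, dy,
                                  dirtyX, dirtyY, dirtyWidth, dirtyHeight) {
  if (dirtyX === undefined) { // no dirty parameters: write everything back
    dirtyX = 0; dirtyY = 0;
    dirtyWidth = source.width; dirtyHeight = source.height;
  }
  for (var y = dirtyY; y < dirtyY + dirtyHeight; y++) {
    for (var x = dirtyX; x < dirtyX + dirtyWidth; x++) {
      var s = (y * source.width + x) * 4;               // offset in the source
      var t = ((dy + y) * target.width + (dx + x)) * 4; // offset in the target
      for (var c = 0; c < 4; c++) {
        target.data[t + c] = source.data[s + c];
      }
    }
  }
};
```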

Figure 5.28 Modifying colors in the “CanvasPixelArray”

image

You can initialize an empty ImageData object directly via the method createImageData(). Width and height correspond to the arguments sw/sh or the dimensions of an ImageData object passed in the call. In both cases, all pixels of the CanvasPixelArray are set to transparent/black, which is rgba(0,0,0,0):

context.createImageData(sw, sh)
context.createImageData(imagedata)
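What createImageData() hands back can be pictured as a plain object (a conceptual sketch, not the browser implementation; the factory name is ours):

```javascript
// Conceptual sketch of createImageData(sw, sh): an object with width, height,
// and a zero-filled pixel array, i.e. every pixel is transparent black.
var makeImageData = function(width, height) {
  return {
    width: width,
    height: height,
    data: new Uint8ClampedArray(width * height * 4) // all entries start at 0
  };
};
```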

So we could also create the 2 × 2 pixel modified canvas of Figure 5.28 directly via createImageData() and draw it via putImageData():

var imagedata = context.createImageData(2,2);
for (var i=0; i<imagedata.data.length; i+=4) {
  imagedata.data[i] = parseInt(Math.random()*255);
  imagedata.data[i+1] = parseInt(Math.random()*255);
  imagedata.data[i+2] = parseInt(Math.random()*255);
  imagedata.data[i+3] = 255; // createImageData() starts fully transparent
}
context.putImageData(imagedata,0,0);

That’s it for now on dry CanvasPixelArray theory. In practice, things get much more exciting: With getImageData(), putImageData(), createImageData(), and a little bit of math, we can even write our own color filters for manipulating images. We will show you how in the next section.

5.8.2 Color Manipulation with “getImageData()”, “createImageData()”, and “putImageData()”

The starting picture for all examples is once again the photo of Yosemite National Park, drawn onto the canvas onload via drawImage(). In a second step, we define the original CanvasPixelArray via getImageData() and then modify it in the third step. In a for loop, each pixel’s RGBA values are calculated following a mathematical formula and inserted into a CanvasPixelArray created previously via createImageData(). At the end we write it back to the canvas with putImageData().

Listing 5.1 provides the basic JavaScript frame of all filters used in Figure 5.29. The function grayLuminosity() is not part of the code example but will be addressed later, together with the other filters:

Listing 5.1 Basic JavaScript frame for color manipulation


var image = new Image();
image.src = 'images/yosemite.jpg';
image.onload = function() {
  context.drawImage(image,0,0,360,240);
  var modified = context.createImageData(360,240);
  var imagedata = context.getImageData(0,0,360,240);
  for (var i=0; i<imagedata.data.length; i+=4) {
    var rgba = grayLuminosity(
      imagedata.data[i+0],
      imagedata.data[i+1],
      imagedata.data[i+2],
      imagedata.data[i+3]
    );
    modified.data[i+0] = rgba[0];
    modified.data[i+1] = rgba[1];
    modified.data[i+2] = rgba[2];
    modified.data[i+3] = rgba[3];
  }
  context.putImageData(modified,0,0);
};


Figure 5.29 Color manipulation with “getImageData()” and “putImageData()”

image


Note

image

The server icon in the bottom-right corner of Figure 5.29 indicates that if you are using Firefox as your browser, this example can only be accessed via a server with http:// protocol. We will explain the reasons in section 5.15.3, Security Aspects.


For converting colors to shades of gray, the documentation of the free image-editing program GIMP offers three formulas in the chapter Desaturate (see the web link http://docs.gimp.org/en/gimp-tool-desaturate.html), with which you can calculate the shade of gray via lightness (Lightness), luminosity (Luminosity), or average lightness (Average). If we implement these calculations in JavaScript, we get our first three color filters:

var grayLightness = function(r,g,b,a) {
  var val = parseInt(
    (Math.max(r,g,b)+Math.min(r,g,b))*0.5
  );
  return [val,val,val,a];
};

var grayLuminosity = function(r,g,b,a) {
  var val = parseInt(
    (r*0.21)+(g*0.71)+(b*0.07)
  );
  return [val,val,val,a];
};

var grayAverage = function(r,g,b,a) {
  var val = parseInt(
    (r+g+b)/3.0
  );
  return [val,val,val,a];
};

With grayLuminosity(), we are using the second formula in Figure 5.29, replacing the RGB component of each pixel with the new calculated value. In this and all following calculations, we must not forget that RGBA values can only be integers; the JavaScript method parseInt() makes sure of it.

The algorithm for sepiaTone() was taken from an article by Zach Smith, titled How do I ... convert images to grayscale and sepia tone using C#? (see the shortened web link http://bit.ly/a2nxI6):

var sepiaTone = function(r,g,b,a) {
  var rS = (r*0.393)+(g*0.769)+(b*0.189);
  var gS = (r*0.349)+(g*0.686)+(b*0.168);
  var bS = (r*0.272)+(g*0.534)+(b*0.131);
  return [
    (rS>255) ? 255 : parseInt(rS),
    (gS>255) ? 255 : parseInt(gS),
    (bS>255) ? 255 : parseInt(bS),
    a
  ];
};

Adding up the multiplied components can lead to values larger than 255 in each of the three calculations; in this case, 255 is inserted as a new value.

Inverting colors is very easy with the filter invertColor(): You simply subtract each RGB component from 255:

var invertColor = function(r,g,b,a) {
  return [
    (255-r),
    (255-g),
    (255-b),
    a
  ];
};

The filter swapChannels() modifies the sequence of the color channels. We first need to define the desired order as the fourth parameter in an array, where 0 is red, 1 is green, 2 is blue, and 3 is the alpha channel. To swap channels, we use the array rgba with the corresponding starting values and then return it in the new order. So changing from RGBA to BRGA, as in our example, can be achieved via order=[2, 0, 1, 3]:

var swapChannels = function(r,g,b,a,order) {
  var rgba = [r,g,b,a];
  return [
    rgba[order[0]],
    rgba[order[1]],
    rgba[order[2]],
    rgba[order[3]]
  ];
};

The last method, monoColor(), sets each pixel’s RGB components to a particular color and uses the inverted gray value of the original pixel as the alpha component. When the function is called, the fourth parameter defines the desired color as an array of RGB values—in our case, blue with color=[0,0,255]:

var monoColor = function(r,g,b,a,color) {
  return [
    color[0],
    color[1],
    color[2],
    255-(parseInt((r+g+b)/3.0))
  ];
};

The filters we have introduced here are still rather simple, changing the color values of individual pixels without taking into account the neighboring pixels. If you factor these into the calculation, you can achieve more complex methods, such as sharpen, unsharp mask, or edge detection.


Note

image

Discussing such filters in detail would go beyond the scope of this book. If you want to explore more, check out Jacob Seidelin’s Pixastic Image Processing Library (http://www.pixastic.com/lib). More than 30 JavaScript filters, available free under the Mozilla Public License, are just waiting to be discovered.


In the meantime, let’s turn to Thomas Porter and Tom Duff, two Pixar Studios gurus who created a sensation back in 1984 with their article on alpha blending techniques. The digital compositing techniques they described not only earned them a prize at the Academy of Motion Picture Arts and Sciences, but also found their way into the Canvas specification.

5.9 Compositing

The possibilities of compositing in Canvas are many and varied, but you will only find a few good examples of their use on the Internet. Most are limited to presenting the methods per se, and to start with, that’s what we will do, too. Figure 5.30 shows valid keywords of the globalCompositeOperation attribute, their Porter-Duff equivalent (in italics, with A,B), and the result after drawing.

Figure 5.30 Values of the “globalCompositeOperation” attribute

image

First, we draw the blue rectangle as background, then we set the desired composite method, and finally we add the red circle. So for the first method, source-over, which is also the default value of the globalCompositeOperation attribute, the code looks like this:

context.beginPath();
context.fillStyle = 'cornflowerblue';
context.fillRect(0,0,50,50);
context.globalCompositeOperation = 'source-over';
context.arc(50,50,30,0,2*Math.PI,0);
context.fillStyle = 'crimson';
context.fill();

The image looks like that shown in Figure 5.30.

The circle is the source (A); the rectangle is the destination (B). Let’s use the Porter-Duff terms to explain the different methods, because they are much more intuitive and describe more precisely what is going on.

With source-over, we draw A over B; with source-in, only that part of A that is in B; with source-out, only that part of A that is outside of B; and with source-atop, we draw both A and B but only the part of A that overlaps B. The second line reverses the whole thing, so we do not need to explain it again.

The method lighter adds up the colors in the overlapping area, which makes it lighter. copy eliminates B and draws only A, and xor removes the intersection of A and B. The question mark indicates that vendor-specific compositing operations are also allowed, similar to the getContext() method.
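To make the arithmetic behind lighter tangible, here is a per-channel sketch for fully opaque pixels (our own simplification; the specification actually works with alpha-weighted values):

```javascript
// Simplified sketch of the 'lighter' operation for fully opaque pixels:
// source and destination channels are added and clamped to 255.
var compositeLighter = function(src, dst) {
  var out = [];
  for (var i = 0; i < 4; i++) {
    var sum = src[i] + dst[i];
    out[i] = (sum > 255) ? 255 : sum;
  }
  return out;
};
```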

Unfortunately, compositing is not yet fully implemented in any browser, which makes it difficult to sensibly present all methods. We will pick two and take a look at some examples for using the operations destination-in and lighter.

If we use destination-in to combine image and text, we can achieve a cutout effect, as shown in Figure 5.31. First, we draw the image with drawImage(), set the compositing method, and then insert the text with a maximum width of 1080 pixels. The text formatting corresponds to a font-size of 600 px with a text anchor point at the center top and a 60 pixel border with round line caps and joins:

context.drawImage(image,0,0,1200,600);
context.globalCompositeOperation = 'destination-in';
context.strokeText('HTML5',600,50,1080);

Figure 5.31 Compositing operation “destination-in” with image and text

image

The light gray text is again written with the default compositing method source-over and therefore not affected by the effect. Currently, it is not possible to define several texts as cutout at the same time because of the already mentioned shortfall in browser implementation.

Our second example uses the method lighter, expanding the previously mentioned options for color manipulation in images. With lighter, Figure 5.32 combines the Yosemite picture with 16 rectangles in the named standard colors, offering a CPU-friendly alternative to the color filter monoColor() mentioned in section 5.8.2, Color Manipulation with “getImageData()”, “createImageData()”, and “putImageData()”. So we could implement the example used in that section differently and achieve a similar result:

context.drawImage(img,0,0,210,140);
context.globalCompositeOperation = 'lighter';
context.fillStyle = 'blue';
context.fillRect(0,0,210,140);

Figure 5.32 Compositing operation “lighter” with 16 base colors

image

We will encounter the compositing operator destination-out once more in the mirror effect in Figure 5.37 in section 5.11, Transformations. Let’s first turn to user-defined patterns in Canvas.

5.10 Patterns

To create user-defined patterns for fills and lines, the specification offers the method createPattern(). Similar to drawImage(), it accepts both image elements and canvas or video elements as input, defining the type of pattern repetition in the parameter repetition:

context.createPattern(image, repetition)

Permitted values of the repetition argument are, as with the CSS property background-repeat, repeat, repeat-x, repeat-y, and no-repeat. If we again use the 16 named basic colors, we can use a few lines of code to create checkered patterns, each based on a pair of colors (see Figure 5.33).

Figure 5.33 Checkered pattern in eight color combinations

image

The pattern is created as an in-memory canvas measuring 20 × 20 pixels and containing four 10 × 10 pixel squares. Illustrated with the green pattern, this step looks as follows:

var cvs = document.createElement("CANVAS");
cvs.width = 20;
cvs.height = 20;
var ctx = cvs.getContext('2d');
ctx.fillStyle = 'lime';
ctx.fillRect(0,0,10,10);
ctx.fillRect(10,10,10,10);
ctx.fillStyle = 'green';
ctx.fillRect(10,0,10,10);
ctx.fillRect(0,10,10,10);

We then define the canvas cvs as a repeating pattern using createPattern(), assign it to the attribute fillStyle, and use it to fill the square:

context.fillStyle = context.createPattern(cvs,'repeat');
context.fillRect(0,0,220,220);

Patterns are anchored to the coordinate origin and applied starting from that point. If we were to begin fillRect() in the preceding example ten pixels to the right, at 10/0 instead of at 0/0, the first color in the top-left corner would be green instead of lime.
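Because the anchor is the coordinate origin, the color at any canvas position is fully determined by its coordinates modulo the tile size. A small hypothetical helper (for non-negative coordinates) makes this checkable:

```javascript
// Hypothetical helper: returns the color of our 20 x 20 pixel checker pattern
// at canvas position x/y (non-negative coordinates, anchored at the origin).
var checkerColor = function(x, y, light, dark) {
  var tx = x % 20; // position inside the repeated 20 x 20 tile
  var ty = y % 20;
  var sameHalf = (tx < 10) === (ty < 10); // top-left and bottom-right squares
  return sameHalf ? light : dark;
};
```

checkerColor(10, 0, 'lime', 'green') returns 'green', mirroring the shifted fillRect() case described above.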

In addition to user-defined canvas elements, we can also use images as sources of patterns. Figure 5.34 shows an example using createPattern() to fill the background, to create a pattern for the title text, and to cut out individual sections of the familiar Yosemite picture. The two other pictures, pattern_107.png and pattern_125.png, are part of the Squidfingers pattern library, where you have the choice of nearly 160 other appealing patterns to download: http://www.squidfingers.com/patterns.

Figure 5.34 Pattern using images as a source

image

Let’s first look at how the background is created:

var bg = new Image();
bg.src = 'icons/pattern_125.png';
bg.onload = function() {
  context.globalAlpha = 0.5;
  context.fillStyle = context.createPattern(bg,'repeat');
  context.fillRect(0,0,canvas.width,canvas.height);
};

The first two lines create a new Image object, setting its src attribute to the image pattern_125.png in the folder icons. Just as with drawImage(), we need to make sure that the image is really loaded before defining the pattern. The function bg.onload() contains the real code for generating the repeating pattern, which we apply at 50% opacity to the whole canvas area. With the same procedure, we fill the title text Yosemite! with the image pattern_107.png.

For the overlapping image sections, we simply enter the whole Yosemite photo yosemite.jpg as the pattern and then work in a for loop through the input array extents, which contains the x-, y-, width-, and height-values of the sections we want. By calling fillRect(), the relevant image area is shown as fill pattern and receives an additional border with strokeRect():

var extents = [
  { x:20,y:50,width:120,height:550 } // and 7 others ...
];
var image = new Image();
image.src = 'images/yosemite.jpg';
image.onload = function() {
  context.fillStyle = context.createPattern(
    image,'no-repeat'
  );
  for (var i=0; i<extents.length; i++) {
    var d = extents[i]; // short-cut
    context.fillRect(d.x,d.y,d.width,d.height);
    context.strokeRect(d.x,d.y,d.width,d.height);
  }
};

Three different images are used in Figure 5.34, and all three must be fully loaded before they can be used, so we need to nest the three onload functions. This ensures that we can control the correct order during drawing. The pseudo-code for a possible nesting looks like this:

// create all images
bg.onload = function() {
  // draw background
  image.onload = function() {
    // add image cutouts
    pat.onload = function() {
      // fill title with pattern
    };
  };
};

One way to avoid this kind of nesting is to link all involved images in the page’s HTML code as hidden img elements (via visibility:hidden) and to reference them with getElementById() or getElementsByTagName() once the page has loaded in window.onload().
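A further workaround, sketched below with a helper of our own, is a load counter: every image reports in via onload, and drawing starts only once the counter reaches zero, so the loading order no longer matters.

```javascript
// Our own sketch of a load counter: assigns an onload handler to every image
// and fires the callback once all of them have finished loading.
var whenAllLoaded = function(images, callback) {
  var remaining = images.length;
  for (var i = 0; i < images.length; i++) {
    images[i].onload = function() {
      remaining -= 1;
      if (remaining === 0) {
        callback(); // all images ready: draw in the desired stacking order
      }
    };
  }
};
```

The handlers must be registered before the src attributes are set, so that no onload event is missed.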

Before moving on to another section of the Canvas specification, Transformations, we should mention that when using a video element as the source of createPattern(), the first frame of the video or the poster frame, if present, is used as a pattern, similar to the drawImage() method.

5.11 Transformations

Canvas transformations manipulate the coordinate system directly. When moving a rectangle, strictly speaking you are not moving the element itself: You shift the whole coordinate system and only then redraw the rectangle. The three basic transformations are scale(), rotate(), and translate(), as shown in Figure 5.35.

Figure 5.35 The basic transformations “scale()”, “rotate()”, and “translate()”

image

context.scale(x, y)
context.rotate(angle)
context.translate(x, y)

For scaling via scale(), we need two multiplicands as arguments for the size change in the x and y dimensions; rotations via rotate() require the clockwise angle of rotation in radians; and moving via translate() specifies offsets in the x- and y-directions in pixels. When combining these methods, the individual transformations are carried out in reverse order: In terms of JavaScript code, they basically must be read from back to front.

To first scale and then rotate, we write:

context.rotate(0.175);
context.scale(0.75,0.75);
context.fillRect(0,0,200,150);

If we want to rotate first and then translate, the JavaScript code would have to be:

context.translate(100,50);
context.rotate(0.175);
context.fillRect(0,0,200,150);
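The “back to front” rule can be verified numerically. The following sketch (our own helpers) uses the matrix layout [m11, m12, m21, m22, dx, dy] of the transform() method, which we will meet again shortly:

```javascript
// Our own helpers: multiply() composes two matrices the way successive
// transformation calls do; apply() maps a point through a matrix.
var multiply = function(a, b) {
  return [
    a[0]*b[0] + a[2]*b[1],        // m11
    a[1]*b[0] + a[3]*b[1],        // m12
    a[0]*b[2] + a[2]*b[3],        // m21
    a[1]*b[2] + a[3]*b[3],        // m22
    a[0]*b[4] + a[2]*b[5] + a[4], // dx
    a[1]*b[4] + a[3]*b[5] + a[5]  // dy
  ];
};
var apply = function(m, x, y) {
  return [m[0]*x + m[2]*y + m[4], m[1]*x + m[3]*y + m[5]];
};
```

Calling translate(100,50) and then scale(2,2) composes to multiply(T, S): a point is scaled first and moved afterwards, which is exactly the back-to-front reading.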

You need to be careful in any case where rotations are involved, because they are always carried out with the origin 0/0 as the center of rotation. The rule of thumb is that rotate() is usually the last action. Figure 5.36 shows an example using all three basic methods, depicting our Yosemite image from a different perspective as a kind of ski jump.

Figure 5.36 Rotate, scale and move

image

Listing 5.2 shows the very short source code in Figure 5.36.

Listing 5.2 Source code of the transformations shown in Figure 5.36


image.onload = function() {
  var rotate = 15;
  var scaleStart = 0.0;
  var scaleEnd = 4.0;
  var scaleInc = (scaleEnd-scaleStart)/(360/rotate);
  var s = scaleStart;
  for (var i=0; i<=360; i+=rotate) {
    s += scaleInc;
    context.translate(540,260);
    context.scale(s,s);
    context.rotate(i*-1*Math.PI/180);
    context.drawImage(image,0,0,120,80);
    context.setTransform(1,0,0,1,0,0);
  }
};


As soon as the image is loaded, we define the angle of rotation rotate as 15°, the start and end scaling scaleStart as 0.0 and scaleEnd as 4.0, and derived from this the increment for scaling scaleInc with the aim of achieving the end scale 4.0 within a full rotation. In the for loop we then rotate the image counterclockwise by 15° each time, scale it from 0.0 to 4.0, and set its top-left corner to the coordinate 540/260.

So why do we have the method setTransform() at the end of the for loop?

Apart from the basic transformations scale(), rotate(), and translate(), Canvas offers two other methods for changing the coordinate system and therefore the transformation matrix: transform() and setTransform(), which were already mentioned in Listing 5.2:

context.transform(m11, m12, m21, m22, dx, dy);
context.setTransform(m11, m12, m21, m22, dx, dy);

Both have the arguments m11, m12, m21, m22, dx, and dy in common, representing the transformation properties listed in Table 5.3.

Table 5.3 Components of a Canvas matrix transformation

image

The main difference between them is that transform() changes the current transformation matrix via multiplication, whereas setTransform() overwrites the existing matrix with the new one.

The three basic methods could also be formulated as arguments of transform() or setTransform() and are basically nothing more than convenient shortcuts for the corresponding matrix transformations. Table 5.4 lists these shortcuts together with other useful matrices for flipping (flipX/Y) and skewing (skewX/Y). The angles for skewing are again specified in radians.

Table 5.4 Matrices of basic transformations and other useful transformation methods

image
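Written out as plain arrays in the layout [m11, m12, m21, m22, dx, dy], the shortcuts take the following form (a sketch of our own; the entries match the transform() calls used later in this chapter, with angles in radians):

```javascript
// Matrix shortcuts as plain arrays [m11, m12, m21, m22, dx, dy]; angles in
// radians. flipX/Y mirror around the axes, skewX/Y shear by the given angle.
var matrices = {
  translate: function(x, y) { return [1, 0, 0, 1, x, y]; },
  scale:     function(x, y) { return [x, 0, 0, y, 0, 0]; },
  rotate:    function(a)    { return [Math.cos(a), Math.sin(a),
                                      -Math.sin(a), Math.cos(a), 0, 0]; },
  flipX:     function()     { return [-1, 0, 0, 1, 0, 0]; },
  flipY:     function()     { return [1, 0, 0, -1, 0, 0]; },
  skewX:     function(a)    { return [1, 0, Math.tan(a), 1, 0, 0]; },
  skewY:     function(a)    { return [1, Math.tan(a), 0, 1, 0, 0]; }
};
```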

Before further exploring Canvas transformations using a detailed example, we should mention that both getImageData() and putImageData() are not affected by transformations, according to the specification. The call getImageData(0,0,100,100) always gets the 100 × 100 pixel square in the top-left corner of the canvas regardless of whether the coordinate system was translated, scaled, or rotated. The same goes for putImageData(imagedata,0,0), where the top-left corner serves as an anchor point for applying the content of imagedata.

Let’s move on to the example where we will apply all mentioned transformation methods. Figure 5.37 shows the appealing result—a collage of three image sections of our Yosemite picture with mirror effect in pseudo-3D.

Figure 5.37 Photo collage with mirror effect in pseudo-3D

image

Let’s start by punching out the three square sections for Taft Point, Merced River, and El Capitan. The result will be saved in the array icons:

var icons = [
  clipIcon(image,0,100,600,600),
  clipIcon(image,620,615,180,180),
  clipIcon(image,550,310,400,400)
];

The function clipIcon() takes care of clipping and adapting the size of the differently sized image portions. In this function, we first create a new in-memory canvas with a size of 320 × 320 pixels, onto which we then copy the appropriately reduced (or enlarged) icon with drawImage() before adding a 15-pixel white border:

var clipIcon = function(img,x,y,width,height) {
  var cvs = document.createElement("CANVAS");
  var ctx = cvs.getContext('2d');
  cvs.width = 320;
  cvs.height = 320;
  ctx.drawImage(img,x,y,width,height,0,0,320,320);
  ctx.strokeStyle = '#FFF';
  ctx.lineWidth = 15;
  ctx.strokeRect(0,0,320,320);
  return cvs;
};

In a second step, we create the reflection effect for each of these three image sections and save it in the array effects:

var effects = [];
for (var i=0; i<icons.length; i++) {
  effects[i] = createReflection(icons[i]);
}

The main work is done in the function createReflection(); its slightly modified code is taken from a post about the iPhone’s CoverFlow effect on Charles Ying’s blog on art, music, and the art of technology (see the shortened web link http://bit.ly/b5AFW6):

var createReflection = function(icon) {
  var cvs = document.createElement("CANVAS");
  var ctx = cvs.getContext('2d');
  cvs.width = icon.width;
  cvs.height = icon.height/2.0;

  // flip
  ctx.translate(0,icon.height);
  ctx.scale(1,-1);
  ctx.drawImage(icon,0,0);

  // fade
  ctx.setTransform(1,0,0,1,0,0);
  ctx.globalCompositeOperation = "destination-out";
  var grad = ctx.createLinearGradient(
    0,0,0,icon.height/2.0
  );
  grad.addColorStop(0,'rgba(255,255,255,0.5)');
  grad.addColorStop(1,'rgba(255,255,255,1.0)');
  ctx.fillStyle = grad;
  ctx.fillRect(0,0,icon.width,icon.height/2.0);
  return cvs;
};

In createReflection(), we first use another in-memory canvas to flip the lower half of the image section passed in icon. Thinking back to the shortcuts for transformation matrices, we could achieve the flip via the matrix for flipY(). But here we use another way of creating a reflection, the method scale(): scale(1,-1) corresponds to flipY(), and scale(-1,1) corresponds to flipX(). The fade-out effect is achieved via a gradient from semitransparent white to opaque white, placed over the icon using the compositing method destination-out.

Now we have defined the individual image sections and can start drawing. A black/white gradient with almost complete black in the center of the gradient creates the impression of 3D space, in which we then place the three images:

var grad = context.createLinearGradient(
  0,0,0,canvas.height
);
grad.addColorStop(0.0,'#000');
grad.addColorStop(0.5,'#111');
grad.addColorStop(1.0,'#EEE');
context.fillStyle = grad;
context.fillRect(0,0,canvas.width,canvas.height);

The center picture of Merced River is the easiest to position via setTransform(); we can then draw it with a reflection effect:

context.setTransform(1,0,0,1,440,160);
context.drawImage(icons[1],0,0,320,320);
context.drawImage(effects[1],0,320,320,160);

The width of the El Capitan image on the right is scaled by 0.9 to achieve a better 3D effect. The result is skewed by 10° downward via the matrix for skewY() and positioned to the right of the center:

context.setTransform(1,0,0,1,820,160);
context.transform(1,Math.tan(0.175),0,1,0,0);
context.scale(0.9,1);
context.drawImage(icons[2],0,0,320,320);
context.drawImage(effects[2],0,320,320,160);

Drawing the Taft Point image on the left is a bit more complicated. After skewing, the top-left corner of our section forms the anchor point; we have to skew upward by 10° and then move the result downward again. A little trigonometry helps us determine the required dy value: It is the tangent of the rotation angle in radians multiplied by the length of the adjacent side, which corresponds to the width of the icon, so Math.tan(0.175)*320. We also have to compensate for scaling the image width by 0.9 by shifting it to the right by 320*0.1:

context.setTransform(1,0,0,1,60,160);
context.transform(1,Math.tan(-0.175),0,1,0,0);
context.translate(320*0.1,Math.tan(0.175)*320);
context.scale(0.9,1);
context.drawImage(icons[0],0,0,320,320);
context.drawImage(effects[0],0,320,320,160);
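The two compensation values used above can be double-checked with plain JavaScript (the variable names are ours):

```javascript
// Double-checking the offsets: 0.175 rad is roughly 10 degrees.
var angle = 0.175;
var iconWidth = 320;
var dy = Math.tan(angle) * iconWidth; // downward correction after skewing up
var dx = iconWidth * 0.1;             // compensates the 0.9 width scaling
```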

We have now completed our most difficult Canvas example so far. The result is quite impressive, so we should probably save it as a JPEG or PNG file. Unlike the other browsers, Firefox makes this easy: just right-click on the canvas to save your creation. If you click on View Image, a bizarre and very, very, very long URL appears, starting with data:image/png;base64..., which takes us straight to the next section—canvas.toDataURL().

5.12 Base64 Encoding with “canvas.toDataURL( )”

Base64 describes a method of encoding binary data as ASCII strings. In Canvas it is used to turn the canvas content, which only really exists as raster in memory, into a processable data: URL. The method to achieve this is

canvas.toDataURL(type, args)

We pass the MIME type of the desired output format as type using either image/png or image/jpeg. The former is the default encoding format and is also used if we omit type or specify a format with which the browser cannot cope. Any additional parameters can be accommodated by the optional argument args—for example, the image quality if selecting image/jpeg with valid numbers between 0.0 and 1.0.

The result of toDataURL() is a base64-encoded string. In the case of the 2 × 2 pixel canvas in the named colors navy, teal, lime, and yellow of Figure 5.27, it looks as follows:

data:image/png;base64,iVBORw0KGgoAAAANSUhEUg
AAAAIAAAACCAYAAABytg0kAAAAF0lEQVQImQXBAQEAAA
CCIKb33ADLFql0PuYIemXXHEQAAAAASUVORK5CYII=
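Taking such a data: URL apart requires nothing Canvas-specific. The following sketch decodes the string above in Node.js (Buffer is Node-specific; in the browser you would use atob() instead):

```javascript
// Splitting a data: URL into media type and payload, then decoding the
// base64 payload back into binary bytes. A PNG always starts with the
// signature 0x89 'P' 'N' 'G'.
var dataURL = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUg' +
  'AAAAIAAAACCAYAAABytg0kAAAAF0lEQVQImQXBAQEAAA' +
  'CCIKb33ADLFql0PuYIemXXHEQAAAAASUVORK5CYII=';
var parts = dataURL.split(',');
var mediaType = parts[0];                    // "data:image/png;base64"
var bytes = Buffer.from(parts[1], 'base64'); // Node.js; browsers: atob()
```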

These encoded strings can get rather long. The base64 version of our photo collage with the reflection effect, for example, has no less than 1,298,974 characters and would fill 325 pages of this book (with each page containing 50 lines of 80 characters each)!

So what is toDataURL() used for? Why convert binary image data to character strings? The answer is simple: With toDataURL(), we can make the fleeting in-memory canvas permanently available in HTML, enabling the user or an application to save it.

The first use of toDataURL() is copying a Canvas graphic into an HTMLImageElement. This becomes possible because the src attribute can also be a data: URI. The necessary code is short and requires an empty image in addition to a dynamically created canvas:

<!DOCTYPE html>
<title>Copy canvas onto image</title>
<img src="" alt="copied canvas content, 200x200 pixels">
<script>
  var canvas = document.createElement("CANVAS");
  canvas.width = 200;
  canvas.height = 200;
  var context = canvas.getContext('2d');
  context.fillStyle = 'navy';
  context.fillRect(0,0,canvas.width,canvas.height);
  document.images[0].src = canvas.toDataURL();
</script>

The crucial line in the listing is the last one, and it shows how easy copying is—take a reference to the first image in the document and assign canvas.toDataURL() to its src attribute. As a result, we get a regular img element, which the browser treats just like any other image and which we can save as a PNG.

With a simple onclick handler on the canvas element, we demonstrate the next use of toDataURL()—directly assigning the resulting data: URI as URL, but this time the output is not as PNG, but as JPEG:

document.images[0].onclick = function() {
  window.location = canvas.toDataURL('image/jpeg');
};

The disadvantages of this method are that the URL can sometimes get painfully long (remember the 1.3 million characters?) and that images in this format do not end up in the cache and therefore must be created anew with every call. Other potential applications of toDataURL() are with localStorage or XMLHttpRequest, allowing us to save and access existing Canvas graphics on both the client side and the server side. toDataURL() also serves us well for creating CSS styles with background-image or list-style-image, where we can insert it as a url() value.
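The localStorage case can be sketched as a pair of helpers (the names saveCanvas and restoreCanvas are ours; in the browser you would pass window.localStorage, a real canvas, and an img element):

```javascript
// Persist a canvas as a data: URL under a key, and restore it later into
// an image element. Any object offering setItem()/getItem() works, which
// keeps the sketch testable outside the browser.
function saveCanvas(storage, key, canvas) {
  storage.setItem(key, canvas.toDataURL());
}

function restoreCanvas(storage, key, img) {
  var url = storage.getItem(key); // null if the key does not exist
  if (url !== null && url !== undefined) {
    img.src = url;
    return true;
  }
  return false;
}

// Browser usage: saveCanvas(localStorage, 'postcard', canvas);
//                restoreCanvas(localStorage, 'postcard', document.images[0]);
```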

5.13 “save( )” and “restore( )”

Our journey through CanvasContext2D is nearly at an end. Only two methods are left to explain: context.save() and context.restore(). Without them, we could probably not manage any complex Canvas graphics; if you had a quick glance at the figures’ source code, you would probably agree. To help you better understand the methods context.save() and context.restore(), we need to recapitulate first.

By defining the drawing context with canvas.getContext('2d'), all attributes are assigned default values, which then have a direct effect when drawing:

context.globalAlpha = 1.0;
context.globalCompositeOperation = 'source-over';
context.strokeStyle = 'black';
context.fillStyle = 'black';
context.lineWidth = 1;
context.lineCap = 'butt';
context.lineJoin = 'miter';
context.miterLimit = 10;
context.shadowOffsetX = 0;
context.shadowOffsetY = 0;
context.shadowBlur = 0;
context.shadowColor = 'rgba(0,0,0,0)';
context.font = '10px sans-serif';
context.textAlign = 'start';
context.textBaseline = 'alphabetic';

At the same time, the coordinate system is initialized with the identity matrix, and a clipping mask is created, which comprises the entire canvas area:

context.setTransform(1,0,0,1,0,0);
context.beginPath();
context.rect(0,0,canvas.width,canvas.height);
context.clip();

If we change attributes, transformations, or clipping masks, they remain valid until we change them again. In more complicated graphics, it is easy to lose track of all these changes. This is where context.save() and context.restore() become useful.

With context.save(), we can create a snapshot at any time, which saves the currently set attributes and transformations while taking into account the current clipping mask. Later, we can easily access this snapshot with context.restore(). The specification mentions the stack of drawing states in this context, because snapshots can also be nested.
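The stack behavior can be illustrated with a miniature model of the drawing state (our own simplification for illustration, not the real CanvasRenderingContext2D):

```javascript
// A toy drawing-state stack mimicking save()/restore(): save() pushes a
// copy of the current attributes, restore() pops the most recent copy.
function StateStack(initial) {
  this.state = Object.assign({}, initial);
  this.stack = [];
}
StateStack.prototype.save = function () {
  this.stack.push(Object.assign({}, this.state));
};
StateStack.prototype.restore = function () {
  if (this.stack.length > 0) {
    this.state = this.stack.pop();
  }
};

var ctx = new StateStack({ fillStyle: 'black', lineWidth: 1 });
ctx.save();                   // snapshot 1: black, 1
ctx.state.fillStyle = 'navy';
ctx.save();                   // snapshot 2: navy, 1
ctx.state.lineWidth = 5;
ctx.restore();                // back to navy, 1
ctx.restore();                // back to black, 1
```

The real context behaves the same way, except that each snapshot also records transformations and the clipping mask.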

This technique is excellent where transformations or clipping masks are concerned. And for shadow effects, it is much easier to reset the four shadow components back to their default values with context.save() and context.restore() than setting each component individually. For the animations we will discuss next, context.save() and context.restore() are practically indispensable.

5.14 Animations

Unlike SVG or SMIL animations, Canvas animations are done purely manually. The ingredients are a function for drawing plus a timer calling it at regular intervals. JavaScript offers window.setInterval() for this purpose; the rest is up to the imagination of the Canvas programmer.

5.14.1 Animation with Multicolored Spheres

This is our animation premiere: Spheres of different colors appear in random places on the canvas, fade slowly, and are covered by other spheres. The animation speed should correspond to an adult’s resting pulse of about 60 beats per minute. As an additional feature, we want to be able to stop or restart the animation by clicking on the canvas.

About 50 lines of JavaScript code are sufficient. But before turning to the analysis of Listing 5.3, let’s look at a static screen shot of the result in Figure 5.38.

Figure 5.38 Animation with multicolored spheres

image

Listing 5.3 JavaScript code for animation with multicolored spheres


var canvas = document.querySelector("canvas");
var context = canvas.getContext('2d');
var r,cx,cy,radgrad;

var drawCircles = function() {
  // fade existing content
  context.fillStyle = 'rgba(255,255,255,0.5)';
  context.fillRect(0,0,canvas.width,canvas.height);

  // draw new spheres
  for (var i=0; i<360; i+=15) {
    // random position and size
    cx = Math.random()*canvas.width;
    cy = Math.random()*canvas.height;
    r = Math.random()*canvas.width/10.0;

    // define radial gradient
    radgrad = context.createRadialGradient(
      0+(r*0.15),0-(r*0.25),r/3.0,
      0,0,r
    );
    radgrad.addColorStop(0.0,'hsl('+i+',100%,75%)');
    radgrad.addColorStop(0.9,'hsl('+i+',100%,50%)');
    radgrad.addColorStop(1.0,'rgba(0,0,0,0)');

    // draw circle
    context.save();
    context.translate(cx,cy);
    context.beginPath();
    context.moveTo(0+r,0);
    context.arc(0,0,r,0,Math.PI*2.0,0);
    context.fillStyle = radgrad;
    context.fill();
    context.restore();
  }
};
drawCircles();  // draw first set of spheres

// start/stop animation at pulse speed
var pulse = 60;
var running = null;
canvas.onclick = function() {
  if (running) {
    window.clearInterval(running);
    running = null;
  }
  else {
    running = window.setInterval(
      drawCircles, 60000/pulse
    );
  }
};


After defining canvas, context, and some other variables, the proper work starts with the function drawCircles(). A semitransparent white rectangle fades existing content from previous drawCircles() calls, and then the for loop draws new spheres. The position and radius of each sphere are randomized with Math.random(), which places each center somewhere on the canvas and limits the radius to a tenth of the canvas width.

To make sure the circles look like spheres, we create a radial gradient. Its geometry consists of a light spot at the top right and the circle as a whole. The choice of increment in the for loop reflects the desire to define the gradient's colorStops in the HSL color space. With each pass through the loop, the hue angle increases by 15°, causing the colors to change from red to green to blue and back to red.

From the lightness we can then derive a matching pair of colors for each hue: The first one represents the light spot, and the second one the darker color near the sphere's edge. The third call of addColorStop() causes the very edges of the sphere to fade to transparent black. We create a total of 24 spheres in this way; to make things clearer, the spheres' color pairs are shown in Figure 5.39.

Figure 5.39 HSL colors for multicolored spheres animation

image
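The color pairs from Listing 5.3 can be written out as a small helper (the function name sphereColors is ours):

```javascript
// For a given hue angle, build the two HSL strings used as the first
// two color stops of the radial gradient in Listing 5.3.
function sphereColors(hue) {
  return {
    highlight: 'hsl(' + hue + ',100%,75%)', // the light spot
    body:      'hsl(' + hue + ',100%,50%)'  // the darker sphere color
  };
}

console.log(sphereColors(15)); // colors of the second sphere in each pass
```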

Then the sphere is drawn as a circle with the defined gradient. Embedding in context.save() and context.restore() ensures that the temporary displacement with translate() is not applied to the subsequent circles. Now the function drawCircles() is complete, and we can draw a first set of spheres and then move on to the timer.

About 15 lines are sufficient to implement starting and stopping the animation via an onclick event listener. With the first click on the canvas, we start the animation with window.setInterval() and save the unique interval ID in the variable running. Times are specified in milliseconds for window.setInterval(), so we need to convert the beats per minute accordingly in the variable pulse.
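The conversion mentioned above, written out as a tiny helper (the name bpmToMillis is ours; Listing 5.3 simply computes 60000/pulse inline):

```javascript
// One minute has 60,000 milliseconds, so the delay between two beats
// at a given pulse is 60000/bpm milliseconds.
function bpmToMillis(bpm) {
  return 60000 / bpm;
}

console.log(bpmToMillis(60));  // 1000 ms -> one beat per second
console.log(bpmToMillis(120)); // 500 ms
```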

Once the animation is running, the unique interval ID is assigned to the variable running, and with the next click, we can interrupt it using window.clearInterval(running). If we then set running back to null, the next click on the canvas signals: no animation is running. In this case, we restart and the fun starts over.

5.14.2 Playing a Video with “drawImage()”

As you already know from section 5.7, Embedding Images, an HTMLVideoElement can also be used as a source for drawImage(). But if you are hoping that videos embedded in this way will play automatically, you will be disappointed, because the logic for this must be implemented fully in JavaScript. This is not difficult, as you can see from the final Canvas animation example—an extension of our Yosemite National Park postcard in Figure 5.25. Instead of the static image section with El Capitan, we now place a dynamic video into the top-right corner, offering a 360° panoramic view from Taft Point. While the video is playing, ten small snapshots of the running video appear as a gallery along the bottom of the canvas. After the end of the video, you can see the picture shown in Figure 5.40.

Figure 5.40 Yosemite National Park video postcard

image


Note

image

The video was kindly provided by YouTube user pos315, converted to WebM via ffmpeg, and reduced to 320 × 240 pixels. You can see the original online at http://www.youtube.com/watch?v=NmdHx_7b0h0.


Unlike images, which up to now have always found their way into the canvas via the JavaScript method new Image(), we integrate the panoramic view directly into the HTML page as a video element. As additional attributes, we need preload, oncanplay as an event listener to give us the point in time when we can lay out the postcard and prepare for starting and stopping, and a style instruction for hiding the embedded original video. We only use the original video to copy the current video frame onto the canvas in brief intervals during playing. The alternative text for browsers without video support gives a quick reference to the content of the video:

<video src="videos/yosemite_320x240.webm"
  preload="auto"
  oncanplay="init(event)"
  style="display:none;"
>
Panoramic view of Yosemite Valley from Taft Point
</video>

To ensure that the function init(event) as a reference in the oncanplay attribute really exists, we set the script element before our video element. The schematic structure of this central function, which implements both the layout and the function of the video postcard, looks like this:

var init = function(evt) {
  // save reference to video element
  // create background image
  image.onload = function() {
    // draw background image
    // add title
    // draw first frame
    canvas.onclick = function() {
      // implement starting and stopping
      // copy video frames while playing
      // create icons at regular intervals while playing
    };
  }
};

The reference to the video object of the video element can be found in evt.target, and we save it in the variable video. As before, we create a new background image via new Image(), and as soon as the image is fully loaded, we continue drawing the background and title. The steps up to this point probably do not require further explanation, but perhaps we should explain drawing the first frame:

context.setTransform(1,0,0,1,860,20);
context.drawImage(video,0,0,320,240);
context.strokeRect(0,0,320,240);

We first position the coordinate system at the top-right corner with setTransform(), and then draw the first frame with a border using drawImage(). This procedure will later be repeated over and over while playing, and it is crucial that the HTMLVideoElement video of the drawImage() method always offers the image of the current frame.

Stopping, starting, copying the current frames of the hidden original video, and creating the scaled-down snapshots are all implemented in the onclick handler of the canvas element. Listing 5.4 shows the JavaScript code needed to do all that:

Listing 5.4 Code for animating the video postcard


var running = null;
canvas.onclick = function() {
  if (running) {
    video.pause();
    window.clearInterval(running);
    running = null;
  }
  else {
    var gap = video.duration/10;
    video.play();
    running = window.setInterval(function () {
      if (video.currentTime < video.duration) {
        // update video
        context.setTransform(1,0,0,1,860,20);
        context.drawImage(video,0,0,320,240);
        context.strokeRect(0,0,320,240);
        // update icons
        var x1 = Math.floor(video.currentTime/gap)*107;
        var tx = Math.floor(video.currentTime/gap)*5;
        context.setTransform(1,0,0,1,10+tx,710);
        context.drawImage(video,x1,0,107,80);
        context.strokeRect(x1,0,107,80);
      }
      else {
        window.clearInterval(running);
        running = null;
      }
    },35);
  }
};


As in the first animation example, the variable running contains the unique interval ID of window.setInterval() and allows for controlling the animation. If running already holds an ID, we pause the hidden video with video.pause(), stop copying frames by removing the interval, and set running back to null. Otherwise, we start the video with video.play() at the first or next click and copy the current video frame onto the canvas in the callback function of the interval every 35 milliseconds. The whole process continues until the video has finished playing or the canvas is clicked again. The two attributes video.currentTime and video.duration of the video object in the variable video let us check whether the current playback position is still less than the total length of the video.

Drawing the copied video at the top right happens in parallel to drawing the first frame. For the strip of mini snapshots, we use the total length of the video and the desired number of snapshots to calculate the interval gap after which we need to shift the anchor point x1 further right with a small gap tx. As long as x1 has the same value, the animation in the reduced-size image keeps running. If x1 is shifted to the right, the last frame remains static and the animation continues running from the new position. After about 40 seconds of playing time, the video is over, ten new mini snapshots have been drawn, and we can restart the sequence all over again by clicking on the canvas.
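The position arithmetic from Listing 5.4 can be factored into a helper for clarity (the function name iconOffsets is ours):

```javascript
// Given the current playback position and the video length, compute the
// snapshot slot and the resulting offsets x1 and tx from Listing 5.4:
// each of the ten slots is 107 pixels wide, separated by a 5-pixel gap.
function iconOffsets(currentTime, duration) {
  var gap = duration / 10;                  // playing time per slot
  var slot = Math.floor(currentTime / gap); // 0..9 while playing
  return { x1: slot * 107, tx: slot * 5 };
}

console.log(iconOffsets(12, 40)); // slot 3 -> { x1: 321, tx: 15 }
```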

That’s it for now for our video postcard. But before we can finish this chapter, we need to mention a few more topics.

5.15 Anything Still Missing?

The next section describes the method isPointInPath() and considers aspects of accessibility and security in Canvas. The chapter concludes with a quick update on the improved level of browser support and a selection of links for all those who want to find out more about Canvas.

5.15.1 “isPointInPath(x, y)”

As you can guess from the method’s name, isPointInPath() returns either true or false, depending on whether the point specified by the coordinates x/y is inside or outside of the current path. A brief example demonstrates the application of this method; in this case, it returns true in alert():

context.beginPath();
context.rect(50,50,100,100);
alert(
  context.isPointInPath(75,75)
);

One practical use of isPointInPath() is for determining if the user has clicked on a particular area of the canvas. All we need for this is an onclick event handler, which uses the mouse position in clientX/clientY and the position of the canvas element in offsetLeft/offsetTop to calculate the current x/y position in relation to the canvas area:

canvas.onclick = function(evt) {
  context.beginPath();
  context.rect(50,50,100,100);
  alert(
    context.isPointInPath(
      evt.clientX - canvas.offsetLeft,
      evt.clientY - canvas.offsetTop
    )
  );
};

Unfortunately, isPointInPath() does not take transformations into account: Even if we had moved the coordinate system 200 pixels to the right before issuing the beginPath() instruction, clicking on the coordinate 75/75 would still return true. It does, however, take the non-zero fill rule into account when determining inside/outside; and as the two code examples indicate, the path to be tested does not necessarily have to be drawn with fill() or stroke().
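One workaround, sketched here as our own helper, is to undo the translation yourself before handing the click coordinates to isPointInPath():

```javascript
// If the coordinate system was shifted by (translateX, translateY) before
// beginPath(), map the click coordinates back into the untransformed path
// space before testing them with isPointInPath().
function toPathSpace(x, y, translateX, translateY) {
  return { x: x - translateX, y: y - translateY };
}

// Example: path defined after translate(200, 0); a click at 275/75
// corresponds to 75/75 in the path's own coordinates.
console.log(toPathSpace(275, 75, 200, 0)); // { x: 75, y: 75 }
```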

5.15.2 Accessibility in Canvas?

The question mark in this section heading is deliberate: Canvas is definitely still lacking with regard to accessibility. This is partly due to the fact that during the conception of Canvas, accessibility was given hardly any attention, and partly due to the nature of the issue—raster-based formats without DOM are innately anything but accessible.

In the context of the HTML5 specification, SVG with its DOM would probably be better suited for realizing accessible content. But practice proves that even big projects, such as the web-based code editor Skywriter (https://mozillalabs.com/skywriter), use Canvas instead of SVG for the sake of performance, which really breaks the basic rule stated at the beginning of the HTML5 specification’s Canvas section: Authors should not use the canvas element in a document when a more suitable element is available.

The second requirement, demanding that when authors use the canvas element, they must also provide content that, when presented to the user, conveys essentially the same function or purpose as the bitmap canvas, also does not hold true in reality. The area between the canvas start tag and end tag would be intended for such alternatives but is usually only used to specify fallback content for browsers without Canvas support.

For interactive Canvas applications, the HTML Canvas 2D Context specification also suggests including focusable HTML elements in the fallback content, for example, an input element for each focusable area of the canvas. Authors should use the method drawFocusRing() to mark with a ring those areas of the canvas that currently have the focus in the fallback. The example listed in the specification in this context, with a couple of checkboxes that are meant to be kept synchronized between the fallback and canvas area via drawFocusRing(), demonstrates how complicated the whole thing is and leads us to suspect that this is not the best solution.

Since July 2009, the Canvas Accessibility Task Force has been trying to remedy the unsatisfactory situation. They are investigating potential improvements of focus and cursor management. The first lot of suggestions are on the table, being discussed intensely, and may well find their way into the specification in one form or another.

But until that happens, we will just have to deal with it: Accessibility—please hold!

5.15.3 Security Aspects

From a security point of view, accessing images and their content (pixels) via scripts in other domains is especially problematic in Canvas. The specification refers to this as information leakage and tries to counter this leakage with the origin-clean flag.

The origin-clean concept works in two stages and is mainly based on certain method calls and attribute assignments setting the origin-clean flag from true to false during a running script. If getImageData() or toDataURL() is called in such a case, the script aborts with a SECURITY_ERR exception.

The main protagonists are drawImage(), fillStyle, and strokeStyle. They contribute to a redefinition of the origin-clean flag whenever images or videos from another domain, or canvas elements that are not origin-clean themselves, come into play.

Assuming that the variable image contains a reference to the WHATWG logo at http://www.whatwg.org/images/logo and the script is not running on the WHATWG server, the following drawImage() call sets the origin-clean flag to false:

context.drawImage(image,0,0);

If we use the logo as a pattern, the properties fillStyle and strokeStyle have the same result—origin-clean becomes false:

var pat = context.createPattern(image,'repeat');
context.fillStyle = pat;
context.strokeStyle = pat;

Each call of getImageData() or toDataURL() from that point on will invariably result in the script being terminated.
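A defensive sketch for this situation (the wrapper tryToDataURL is our own; it takes the canvas as an argument so it can be exercised with a stand-in object outside the browser):

```javascript
// Wrap toDataURL() so that a tainted (not origin-clean) canvas yields
// null instead of terminating the script with a SECURITY_ERR exception.
function tryToDataURL(canvas, type) {
  try {
    return canvas.toDataURL(type);
  } catch (e) {
    return null; // the origin-clean flag was false
  }
}
```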

In the Firefox browser, this mechanism is handled even more restrictively: Any images loaded via the file:// protocol are classified as not origin-clean. So what is the consequence for our chapter? All graphics with a server icon in the bottom-right corner do not work in Firefox if they are opened locally via file://; instead, they can only be displayed by a web server.


Tip

image

If you do not want to install an Apache server and you have access to Python, you can use just one line to start a rudimentary web server in the current directory at port 8000 and then address the content of this directory in the browser via the URL http://localhost:8000:

python -m SimpleHTTPServer


5.15.4 Browser Support

The current versions of Firefox as well as Safari, Chrome, and Opera support a large part of the Canvas specification. If you want to see Canvas in IE, you will have to use IE9, which offers hardware-accelerated support for Canvas. This makes workarounds for IE8 such as Google’s Chrome Frame Plugin (http://code.google.com/chrome/chromeframe) or the JavaScript shim explorercanvas (http://code.google.com/p/explorercanvas) obsolete.

As you would expect, there are slight differences in the degree to which those browsers that already support Canvas have implemented it. A useful source for determining the degree of implementation is the Canvas Testsuite by Philip Taylor with approximately 800 tests and a table of test results for the main browsers at http://philip.html5.org/tests/canvas/suite/tests.

All examples in this Canvas chapter were created with Firefox, as you can see in the screen shots. At the time of this writing, all examples worked fine in Firefox except for the representation of fonts in small-caps. Safari, Opera, IE9, and Chrome also score quite well with our examples—Safari and Opera more so than IE9 and Chrome.

Because every new release of the common browsers can result in improvements regarding Canvas implementation, regularly updated details of how the examples in our book run in different browsers are provided in the Canvas index on the companion website at http://html5.komplett.cc/code/chap_canvas/index_en.html.

5.15.5 Further Links

A good starting point for exploring Canvas is a portal describing itself as Home to applications, games, tools and tutorials that use the HTML 5 <canvas> element at http://www.canvasdemos.com; it offers a series of interesting links. Worth a look is also the extensive Canvas tutorial in Mozilla’s developer center at https://developer.mozilla.org/en/canvas_tutorial and http://hacks.mozilla.org/category/canvas, a blog of the Mozilla community focusing on advanced application examples.

If you want to get into the details of Canvas, your best bet is the Canvas specification. The current version of the two documents can be found at:

http://www.w3.org/TR/html5/the-canvas-element.html

http://www.w3.org/TR/2dcontext

If you prefer an interactive version with stages of implementation and the option of leaving comments directly or reporting errors on the individual sections, go to the WHATWG at http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html.

Summary

Our journey through the world of Canvas has come to an end. It was a long way from drawing two overlapping rectangles in red and yellow to programming a video postcard. You learned how to work with colors, create shadow effects, and draw lines, Bézier curves, arcs, rectangles, and clipping masks. We spent quite some time exploring the key features of Canvas—manipulating images and creating appealing effects by combining pixel manipulation methods with patterns, transformations, and compositing. We even dared to hand-code animations. But although this chapter is the longest in the book, it can only provide a small glimpse into the myriad possibilities offered through Canvas. Many other impressive examples are waiting to be discovered on the Internet—go explore!
