A surface connecting points having the same phase on all waves is a geometric plane called a wave front. In comparison, a point source of radiation sends waves out radially in all directions. A surface connecting points of equal phase for this situation is a sphere, so this wave is called a spherical wave. To generate the prediction of electromagnetic waves, we start with Faraday's law, Equation 34.6:

\oint \mathbf{E}\cdot d\mathbf{l} = -\frac{d\Phi_B}{dt}

Let's again assume the electromagnetic wave is traveling in the x direction, with the electric field E in the positive y direction and the magnetic field B in the positive z direction. Consider a rectangle of width dx and height l, lying in the xy plane as shown in Figure 34.6.

To apply Equation 34.6, let's first evaluate the line integral of E·dl around this rectangle in the counterclockwise direction at an instant of time when the wave is passing through the rectangle. The contributions from the top and bottom of the rectangle are zero because E is perpendicular to dl for these paths. We can express the electric field on the right side of the rectangle as

E(x + dx) \approx E(x) + \frac{\partial E}{\partial x}\,dx

where E(x) is the field on the left side of the rectangle at this instant. Therefore, the line integral over this rectangle is approximately

\oint \mathbf{E}\cdot d\mathbf{l} \approx [E(x + dx)]\,l - [E(x)]\,l = l\,\frac{\partial E}{\partial x}\,dx

Because the magnetic field is in the z direction, the magnetic flux through the rectangle of area l dx is approximately \Phi_B = B\,l\,dx (assuming dx is very small compared with the wavelength of the wave). Taking the time derivative of the magnetic flux gives

\frac{d\Phi_B}{dt} = l\,dx\,\frac{\partial B}{\partial t}

Substituting in Maxwell's equation \oint \mathbf{E}\cdot d\mathbf{l} = -\frac{d\Phi_B}{dt} gives

l\,\frac{\partial E}{\partial x}\,dx = -l\,dx\,\frac{\partial B}{\partial t}

Then

\frac{\partial E}{\partial x} = -\frac{\partial B}{\partial t}  (34.11)

In a similar manner, we can derive a second equation by starting with Maxwell's fourth equation in empty space (Eq. 34.7). In this case, the line integral of B·dl is evaluated around a rectangle lying in the xz plane and having width dx and length l, as in Figure 34.7. Noting that the magnitude of the magnetic field changes from B(x) to B(x + dx) over the width dx and that the direction for taking the line integral is counterclockwise when viewed from above in Figure 34.7, the line integral over this rectangle is found to be approximately

\oint \mathbf{B}\cdot d\mathbf{l} \approx [B(x)]\,l - [B(x + dx)]\,l = -l\,\frac{\partial B}{\partial x}\,dx

The electric flux through the rectangle is \Phi_E = E\,l\,dx, which, when differentiated with respect to time, gives

\frac{d\Phi_E}{dt} = l\,dx\,\frac{\partial E}{\partial t}

Substituting in the fourth Maxwell's equation in free space with I = 0,

\oint \mathbf{B}\cdot d\mathbf{l} = \mu_0\varepsilon_0\,\frac{d\Phi_E}{dt}

gives

-l\,\frac{\partial B}{\partial x}\,dx = \mu_0\varepsilon_0\,l\,dx\,\frac{\partial E}{\partial t}

Then

\frac{\partial B}{\partial x} = -\mu_0\varepsilon_0\,\frac{\partial E}{\partial t}  (34.14)

Finally, two equations were obtained:

\frac{\partial E}{\partial x} = -\frac{\partial B}{\partial t}  (34.11)
\frac{\partial B}{\partial x} = -\mu_0\varepsilon_0\,\frac{\partial E}{\partial t}  (34.14)

Taking the derivative of Equation 34.11 with respect to x and combining the result with Equation 34.14 gives

\frac{\partial^2 E}{\partial x^2} = -\frac{\partial}{\partial x}\frac{\partial B}{\partial t} = -\frac{\partial}{\partial t}\frac{\partial B}{\partial x} = \mu_0\varepsilon_0\,\frac{\partial^2 E}{\partial t^2}

In the same manner, taking the derivative of Equation 34.14 with respect to x and combining it with Equation 34.11 gives

\frac{\partial^2 B}{\partial x^2} = \mu_0\varepsilon_0\,\frac{\partial^2 B}{\partial t^2}

Then two equations were obtained:

\frac{\partial^2 E}{\partial x^2} = \mu_0\varepsilon_0\,\frac{\partial^2 E}{\partial t^2}  (34.15)
\frac{\partial^2 B}{\partial x^2} = \mu_0\varepsilon_0\,\frac{\partial^2 B}{\partial t^2}  (34.16)

Equations 34.15 and 34.16 both have the form of the general wave equation

\frac{\partial^2 y}{\partial x^2} = \frac{1}{v^2}\,\frac{\partial^2 y}{\partial t^2}

with the wave speed v replaced by c. Therefore

c = \frac{1}{\sqrt{\mu_0\varepsilon_0}}

This equation gives the speed of electromagnetic waves, where \mu_0 = 4\pi\times10^{-7}\ \mathrm{T\cdot m/A} and \varepsilon_0 = 8.854\times10^{-12}\ \mathrm{C^2/N\cdot m^2}, so that c = 2.998\times10^{8}\ \mathrm{m/s}. Because this speed is precisely the same as the speed of light in empty space, we are led to believe (correctly) that light is an electromagnetic wave.

Electromagnetic wave equation
The simplest solution to Equations 34.15 and 34.16 is a sinusoidal wave for which the field magnitudes E and B vary with x and t according to the expressions

E = E_0\cos(kx - \omega t)
B = B_0\cos(kx - \omega t)

where E_0 and B_0 are the maximum values of the fields. The angular wave number is k = 2\pi/\lambda, where \lambda is the wavelength. The angular frequency is \omega = 2\pi f. The speed c of the electromagnetic wave equals the ratio \omega/k.
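As a quick numerical check of c = 1/√(μ₀ε₀), the short Python sketch below evaluates the expression from the vacuum permeability and permittivity; the variable names are chosen only for this illustration.

```python
import math

mu_0 = 4e-7 * math.pi        # vacuum permeability, T·m/A
epsilon_0 = 8.854e-12        # vacuum permittivity, C^2/(N·m^2)

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:.4e} m/s")    # ≈ 2.998e8 m/s, the measured speed of light in vacuum
```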

Active Figure 34.8 is a pictorial representation, at one instant, of a sinusoidal, linearly polarized electromagnetic wave moving in the positive x direction. To find the speed c of electromagnetic waves in terms of the maximum values of the electric and magnetic fields, take the partial derivative with respect to x of E = E_0\cos(kx - \omega t),

\frac{\partial E}{\partial x} = -kE_0\sin(kx - \omega t)

and the partial derivative with respect to t of B = B_0\cos(kx - \omega t),

\frac{\partial B}{\partial t} = \omega B_0\sin(kx - \omega t)

Substituting these results into Equation 34.11, \frac{\partial E}{\partial x} = -\frac{\partial B}{\partial t}, gives

-kE_0\sin(kx - \omega t) = -\omega B_0\sin(kx - \omega t)

Then

\frac{E_0}{B_0} = \frac{\omega}{k} = c

Using this result together with E = E_0\cos(kx - \omega t) and B = B_0\cos(kx - \omega t) gives

\frac{E_0}{B_0} = \frac{E}{B} = c

That is, at every instant, the ratio of the magnitude of the electric field to the magnitude of the magnetic field in an electromagnetic wave equals the speed of light.

To show that the sinusoidal electromagnetic wave for E or B,

E = E_0\cos(kx - \omega t), \qquad B = B_0\cos(kx - \omega t)

satisfies the general wave equation, proceed as follows. For the electric field E, the second derivative of E with respect to x is found from

\frac{\partial E}{\partial x} = -kE_0\sin(kx - \omega t)

\frac{\partial^2 E}{\partial x^2} = -k^2E_0\cos(kx - \omega t)

The second derivative of E with respect to t is found from

\frac{\partial E}{\partial t} = \omega E_0\sin(kx - \omega t)

\frac{\partial^2 E}{\partial t^2} = -\omega^2 E_0\cos(kx - \omega t)

Substituting into Equation 34.15, \frac{\partial^2 E}{\partial x^2} = \mu_0\varepsilon_0\,\frac{\partial^2 E}{\partial t^2}, gives

-k^2E_0\cos(kx - \omega t) = -\mu_0\varepsilon_0\,\omega^2 E_0\cos(kx - \omega t)

Then

\frac{\omega}{k} = \frac{1}{\sqrt{\mu_0\varepsilon_0}} = c
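The same check can be carried out symbolically. The sketch below (a minimal illustration using SymPy; the symbol names are chosen only for this example) verifies that E = E₀ cos(kx − ωt) satisfies ∂²E/∂x² = μ₀ε₀ ∂²E/∂t² once the dispersion relation ω/k = 1/√(μ₀ε₀) is imposed.

```python
import sympy as sp

x, t, E0, k, w, mu0, eps0 = sp.symbols('x t E_0 k omega mu_0 epsilon_0', positive=True)

E = E0 * sp.cos(k*x - w*t)                      # sinusoidal plane-wave field
lhs = sp.diff(E, x, 2)                          # second x derivative
rhs = mu0 * eps0 * sp.diff(E, t, 2)             # mu_0 eps_0 times second t derivative

# Impose omega = k / sqrt(mu_0 eps_0) and check that the wave equation is satisfied
difference = (lhs - rhs).subs(w, k / sp.sqrt(mu0*eps0))
print(sp.simplify(difference))                  # prints 0
```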

Production of electromagnetic waves by antenna Stationary charges and steady currents cannot produce electromagnetic waves. Whenever the current in a wire changes with time, however, the wire emits electromagnetic radiation. The fundamental mechanism responsible for this radiation is the acceleration of a charged particle. Whenever a charged particle accelerates, it radiates energy. Let’s consider the production of electromagnetic waves by a half-wave antenna. In this arrangement, two conducting rods are connected to a source of alternating voltage (such as an LC oscillator) as shown in Figure 34.11. The length of each rod is equal to one-quarter the wavelength of the radiation emitted when the oscillator operates at frequency f. The oscillator forces charges to accelerate back and forth between the two rods. Figure 34.11 shows the configuration of the electric and magnetic fields at some

instant when the current is upward. The separation of charges in the upper and lower portions of the antenna makes the electric field lines resemble those of an electric dipole. (As a result, this type of antenna is sometimes called a dipole antenna.) Because these charges are continuously oscillating between the two rods, the antenna can be approximated by an oscillating electric dipole. The current representing the movement of charges between the ends of the antenna produces magnetic field lines forming concentric circles around the antenna that are perpendicular to the electric field lines at all points. The magnetic field is zero at all points along the axis of the antenna. Furthermore, E and B are 90° out of phase in time; for example, the current is zero when the charges at the outer ends of the rods are at a maximum. At the two points where the magnetic field is shown in Figure 34.11, the Poynting vector S is directed radially outward, indicating that energy is flowing away from the antenna at this instant. At later times, the fields and the Poynting vector reverse direction as the current alternates. Because E and B are 90° out of phase at points near the dipole, the net energy flow is zero. From this fact, you might conclude (incorrectly) that no energy is radiated by the dipole. Energy is indeed radiated, however. Because the dipole fields fall off as 1/r³ (as shown in Example 23.5 for the electric field of a static dipole), they are negligible at great distances from the antenna. At these great distances, something else causes a type of radiation different from that close to the antenna. The source of this radiation is the continuous induction of an electric field by the time-varying magnetic field and the induction of a magnetic field by the time-varying electric field, predicted by Equations 34.6 and 34.7. The electric and magnetic fields produced in this manner are in phase with each other and vary as 1/r. The result is an outward flow of energy at all times. The angular dependence of the radiation intensity produced by a dipole antenna is shown in Figure 34.12. Notice that the intensity and the power radiated are a maximum in a plane that is perpendicular to the antenna and passing through its midpoint. Furthermore, the power radiated is zero along the antenna's axis. A mathematical solution to Maxwell's equations for the dipole antenna shows that the intensity of the radiation varies as (sin²θ)/r², where θ is measured from the axis of the antenna.

Electromagnetic waves can also induce currents in a receiving antenna. The response of a dipole receiving antenna at a given position is a maximum when the antenna axis is parallel to the electric field at that point and zero when the axis is perpendicular to the electric field.

Vector analysis
Scalar and vector quantities
A displacement is a straight line segment going from one point to another; it has direction as well as magnitude (length), and it is essential to take both into account when you combine displacements. Such objects are called vectors: velocity, acceleration, force, and momentum are other examples. By contrast, quantities that have magnitude but no direction are called scalars: examples include mass, charge, density, and temperature. I shall use boldface (A, B, and so on) for vectors and ordinary type for scalars. The magnitude of a vector A is written |A| or, more simply, A. In diagrams, vectors are denoted by arrows: the length of the arrow is proportional to the magnitude of the vector, and the arrowhead indicates its direction.

Vector quantity representation
A scalar quantity has only a magnitude, while a vector quantity has a magnitude and a direction. Therefore a scalar quantity is represented by a single value, and a vector quantity can be represented in three ways: by a symbol, by a graph, or by components.

Vector algebra: vector algebraic operations
We define four vector operations: addition and three kinds of multiplication.
(i) Addition of two vectors. Place the tail of B at the head of A; the sum, A + B, is the vector from the tail of A to the head of B (Fig. 1.3). (This rule generalizes the obvious procedure for combining two displacements.) Addition is commutative:

\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}

Addition is also associative:

(\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C})

To subtract a vector, add its opposite:

\mathbf{A} - \mathbf{B} = \mathbf{A} + (-\mathbf{B})

(ii) Multiplication of two vectors. Multiplication of vector quantities differs from multiplication of scalar quantities; there are two kinds, the scalar product and the vector product.

Vector algebra: component form
In the previous section I defined the four vector operations (addition, scalar multiplication, dot product, and cross product) in "abstract" form, that is, without reference to any particular coordinate system. In practice, it is often easier to set up Cartesian coordinates x, y, z and work with vector "components." Let i, j, k be unit vectors parallel to the x, y, and z axes, respectively. An arbitrary vector A can be expanded in terms of these basis unit vectors as

\mathbf{A} = A_x\,\hat{i} + A_y\,\hat{j} + A_z\,\hat{k}

The numbers A_x, A_y, and A_z are called the components of A; geometrically, they are the projections of A along the three coordinate axes. We can now reformulate each of the vector operations as a rule for manipulating components. Addition:

\mathbf{A} + \mathbf{B} = (A_x + B_x)\,\hat{i} + (A_y + B_y)\,\hat{j} + (A_z + B_z)\,\hat{k}

Scalar product:

\mathbf{A}\cdot\mathbf{B} = A_xB_x + A_yB_y + A_zB_z

Vector product:

\mathbf{A}\times\mathbf{B} = (A_yB_z - A_zB_y)\,\hat{i} + (A_zB_x - A_xB_z)\,\hat{j} + (A_xB_y - A_yB_x)\,\hat{k}

Vector calculus
Differential calculus
Ordinary derivatives
Suppose we have a function of one variable, f(x). The derivative, df/dx, tells us how rapidly the function f(x) varies when we change the argument x by a tiny amount dx:

df = \left(\frac{df}{dx}\right)dx

In words: If we change x by an amount dx, then f(x) changes by an amount df; the derivative is the proportionality factor. For example, in Fig. 1.17(a), the function varies slowly with x, and the derivative is correspondingly small. In Fig. 1.17(b), f increases rapidly with x, and the derivative is large, as you move away from x = 0.
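As a small illustration of the component rules above (a sketch using NumPy; the particular component values are invented for this example), the addition, dot product, and cross product can be evaluated directly from components:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])      # components (Ax, Ay, Az)
B = np.array([4.0, -1.0, 2.0])     # components (Bx, By, Bz)

print(A + B)             # component-wise addition
print(np.dot(A, B))      # scalar (dot) product: Ax*Bx + Ay*By + Az*Bz
print(np.cross(A, B))    # vector (cross) product, perpendicular to both A and B
```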

Geometrical Interpretation: The derivative df/dx is the slope of the graph of f(x) versus x.

The gradient of a scalar field, ∇T
Suppose, now, that we have a function of three variables, say, the temperature T(x, y, z) in a room (a scalar field). (Start out in one corner, and set up a system of axes; then for each point (x, y, z) in the room, T gives the temperature at that spot.) We want to generalize the notion of "derivative" to functions like T, which depend not on one but on three variables. Now a derivative is supposed to tell us how fast the function varies, if we move a little distance. But this time the situation is more complicated, because it depends on what direction we move: If we go straight up, then the temperature will probably increase fairly rapidly, but if we move horizontally, it may not change much at all. In fact, the question "How fast does T vary?" has an infinite number of answers, one for each direction we might choose to explore. A theorem on partial derivatives states that

dT = \left(\frac{\partial T}{\partial x}\right)dx + \left(\frac{\partial T}{\partial y}\right)dy + \left(\frac{\partial T}{\partial z}\right)dz

This tells us how T changes when we alter all three variables by the infinitesimal amounts dx, dy, dz. Notice that we do not require an infinite number of derivatives;

three will suffice: the partial derivatives along each of the three coordinate directions. Equation 1.34 is reminiscent of a dot product of two vectors:

dT = \left(\hat{i}\,\frac{\partial T}{\partial x} + \hat{j}\,\frac{\partial T}{\partial y} + \hat{k}\,\frac{\partial T}{\partial z}\right)\cdot\left(\hat{i}\,dx + \hat{j}\,dy + \hat{k}\,dz\right) = (\nabla T)\cdot d\mathbf{l}

where the first vector is the gradient, built from the differential vector operator in three dimensions, ∇, and the second vector is the displacement vector in three dimensions, dl. The vector dl is the displacement vector along an open path from point a to point b, l(a,b); the open path may be a straight path or a curved path. The operator ∇ is called nabla, or the del operator. It is a differential vector operator: when it acts on a scalar field, it gives the change of this scalar field in three dimensions. The gradient of a scalar field along an open path describes the change of the scalar field with respect to the displacement in three dimensions; it is denoted ∇T. Note that there is no algebraic sign between ∇ and T; ∇ is a vector operator, T is a scalar field, and ∇T is a vector field.

\nabla = \hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}

This differential vector operator is called nabla, or the del operator.

Geometrical Interpretation of the Gradient: Like any vector, the gradient has magnitude and direction. To determine its geometrical meaning, let's rewrite the dot product (1.35) in abstract form:

dT = (\nabla T)\cdot d\mathbf{l} = |\nabla T|\,|d\mathbf{l}|\cos\theta

where θ is the angle between the vector ∇T and dl, as shown in the figure. Now, if we fix the magnitude |dl| and search around in various directions (that is, vary θ), the maximum change in T evidently occurs when θ = 0 (for then cos θ = 1). That is, for a fixed distance |dl|, dT is greatest when we move in the same direction as ∇T. Thus: The gradient ∇T points in the direction of maximum increase of the function T. Moreover: The magnitude |∇T| gives the slope (rate of increase) along this maximal direction.
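As a concrete illustration of the gradient (a minimal SymPy sketch; the scalar field T = x² + yz is invented for this example only), the three partial derivatives assemble into the vector ∇T:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
T = x**2 + y*z                     # an example scalar field T(x, y, z)

grad_T = [sp.diff(T, v) for v in (x, y, z)]
print(grad_T)                      # [2*x, z, y] -> the components of grad T

# Evaluated at a point, grad T gives the direction of maximum increase of T there
print([g.subs({x: 1, y: 2, z: 3}) for g in grad_T])   # [2, 3, 2]
```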

The differential vector operator: nabla, ∇
The gradient has the formal appearance of a vector, ∇, "multiplying" a scalar field T:

\nabla T = \left(\hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}\right)T

The term in parentheses is called the "del operator":

\nabla = \hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}

It is without specific meaning until we provide it with a function to act upon. Furthermore, it does not "multiply" T; rather, it is an instruction to differentiate what follows. To be precise, then, we should say that ∇ is a vector operator that acts upon T, not a vector that multiplies T. With this qualification, though, ∇ mimics the behavior of an ordinary vector in virtually every way; almost anything that can be done with other vectors can also be done with ∇, if we merely translate "multiply" by "act upon." So by all means take the vector appearance of ∇ seriously: it is a marvelous piece of notational simplification, as you will appreciate if you ever consult Maxwell's original work on electromagnetism, written without the benefit of ∇. There are three ways the operator ∇ can act:
1- On a scalar field φ: ∇φ (the gradient of a scalar field);
2- On a vector field E, via the dot product: ∇·E (the divergence of a vector field);
3- On a vector field E, via the cross product: ∇×E (the curl of a vector field).

The divergence of a vector field, ∇·E
From the definition of ∇ we construct the divergence. If E is a vector field, the divergence of the vector field is given by

\nabla\cdot\mathbf{E} = \left(\hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}\right)\cdot\left(E_x\,\hat{i} + E_y\,\hat{j} + E_z\,\hat{k}\right) = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}

Observe that the divergence of a vector field E is itself a scalar field. (You can't have the divergence of a scalar: that's meaningless.)

Geometrical Interpretation: The name divergence is well chosen, for ∇·E is a measure of how much the vector field E spreads out (diverges) from the point in question. For example, the vector field in Fig. 1.18a has a large (positive) divergence (if the arrows pointed in, it would be a large negative divergence), the vector field in Fig. 1.18b has zero divergence, and the vector field in Fig. 1.18c again has a positive divergence. (Please understand that E here is a function; there's a different vector associated with every point in space. In the diagrams, of course, I can only draw the arrows at a few representative locations.) Imagine standing at the edge of a pond. Sprinkle some sawdust or pine needles on the surface. If the material spreads out, then you dropped it at a point of positive divergence; if it collects together, you dropped it at a point of negative divergence. (The vector field in this model is the velocity of the water; this is a two-dimensional example, but it helps give one a "feel" for what the divergence means. A point of positive divergence is a source, or "faucet"; a point of negative divergence is a sink, or "drain.")

The curl of a vector field, ∇×B
From the definition of ∇ we construct the curl: the curl of a vector field B is given by

\nabla\times\mathbf{B} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ B_x & B_y & B_z \end{vmatrix} = \hat{i}\left(\frac{\partial B_z}{\partial y} - \frac{\partial B_y}{\partial z}\right) + \hat{j}\left(\frac{\partial B_x}{\partial z} - \frac{\partial B_z}{\partial x}\right) + \hat{k}\left(\frac{\partial B_y}{\partial x} - \frac{\partial B_x}{\partial y}\right)

Notice that the curl of a vector function B is, like any cross product, a vector field. (You cannot have the curl of a scalar; that's meaningless.)

Geometrical Interpretation: The name curl is also well chosen, for ∇×B is a measure of how much the vector B "curls around" the point in question. Thus the three functions in Fig. 1.18 all have zero curl (as you can easily check for
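The following SymPy sketch (the example fields are invented for illustration) computes the divergence and curl of two simple fields: v1 = (x, y, z), which spreads out from the origin like the field of Fig. 1.18(a), and v2 = (−y, x, 0), which circulates about the z axis like the fields of Fig. 1.19.

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')                               # Cartesian coordinate system

v1 = N.x*N.i + N.y*N.j + N.z*N.k                  # radially spreading field
v2 = -N.y*N.i + N.x*N.j                           # circulating field about the z axis

print(divergence(v1), curl(v1))                   # 3, 0: a pure "faucet", no swirl
print(divergence(v2), curl(v2))                   # 0, 2*N.k: no spreading, constant curl along z
```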

yourself), whereas the functions in Fig. 1.19 have a substantial curl, pointing in the z direction, as the natural right-hand rule would suggest. Imagine (again) you are standing at the edge of a pond. Float a small paddlewheel (a cork with toothpicks pointing out radially would do); if it starts to rotate, then you placed it at a point of nonzero curl. A whirlpool would be a region of large curl.

Vector identities: second derivatives
The gradient, the divergence, and the curl are the only first derivatives we can make with ∇; by applying ∇ twice we can construct five species of second derivatives. The gradient ∇φ is a vector field, so we can take the divergence and curl of it:
1- The divergence of the gradient: ∇·(∇φ)
2- The curl of the gradient: ∇×(∇φ)
The divergence ∇·E is a scalar field; all we can do is take its gradient:
3- The gradient of the divergence: ∇(∇·E)
The curl ∇×B is a vector field, so we can take its divergence and curl:
4- The divergence of the curl: ∇·(∇×B)

5- The curl of the curl: ∇×(∇×B)
These five vector identities exhaust the possibilities; in what follows we evaluate them.

1- The divergence of the gradient, ∇·(∇φ)
The divergence of the gradient of a scalar field φ is given by

\nabla\cdot(\nabla\varphi) = \left(\hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}\right)\cdot\left(\hat{i}\,\frac{\partial \varphi}{\partial x} + \hat{j}\,\frac{\partial \varphi}{\partial y} + \hat{k}\,\frac{\partial \varphi}{\partial z}\right) = \frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} + \frac{\partial^2 \varphi}{\partial z^2}

This object, which we write ∇²φ for short, is called the Laplacian of φ; we shall be studying it in great detail later on. Notice that the Laplacian of a scalar field φ is itself a scalar field. Occasionally, we shall speak of the Laplacian of a vector, ∇²E. By this we mean a vector quantity whose x component is the Laplacian of E_x, and so on:

\nabla^2\mathbf{E} = (\nabla^2 E_x)\,\hat{i} + (\nabla^2 E_y)\,\hat{j} + (\nabla^2 E_z)\,\hat{k}
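The identities listed above can be checked symbolically. The SymPy sketch below (the scalar and vector fields are invented for this example) verifies that the curl of a gradient and the divergence of a curl vanish, and that the divergence of the gradient reproduces the Laplacian.

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
phi = N.x**2 * N.y + sp.sin(N.z)                  # an example scalar field
B = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k       # an example vector field

print(curl(gradient(phi)))                        # 0 -> the curl of a gradient vanishes
print(divergence(curl(B)))                        # 0 -> the divergence of a curl vanishes

# divergence of the gradient reproduces the Laplacian: sum of second derivatives
lap = sum(sp.diff(phi, v, 2) for v in (N.x, N.y, N.z))
print(sp.simplify(divergence(gradient(phi)) - lap))    # 0
```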

2- The curl of the gradient of a scalar field φ
The curl of the gradient of a scalar field is given by

\nabla\times(\nabla\varphi) = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ \partial\varphi/\partial x & \partial\varphi/\partial y & \partial\varphi/\partial z \end{vmatrix} = \hat{i}\left(\frac{\partial^2 \varphi}{\partial y\,\partial z} - \frac{\partial^2 \varphi}{\partial z\,\partial y}\right) + \hat{j}\left(\frac{\partial^2 \varphi}{\partial z\,\partial x} - \frac{\partial^2 \varphi}{\partial x\,\partial z}\right) + \hat{k}\left(\frac{\partial^2 \varphi}{\partial x\,\partial y} - \frac{\partial^2 \varphi}{\partial y\,\partial x}\right) = 0

since the mixed partial derivatives are equal.

3- The gradient of the divergence of a vector field E
The gradient of the divergence of a vector field E is given by

\nabla(\nabla\cdot\mathbf{E}) = \left(\hat{i}\,\frac{\partial}{\partial x} + \hat{j}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}\right)\left(\frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}\right)

Integral calculus: line integral, surface integral and volume integral
In electrodynamics we encounter several different kinds of integrals, among which the most important are line (or path) integrals (related to potential), surface integrals (or flux), and volume integrals.

Line integral (electric potential)
A line integral is an expression of the form

\int_a^b \mathbf{E}\cdot d\mathbf{l}

where E is a vector field, dl is the infinitesimal displacement vector, and the integral is to be carried out along a prescribed path from point a to point b, as shown in fig (-). If the path in question forms a closed loop (that is, if b = a), I shall put a circle on the integral sign:

\oint \mathbf{E}\cdot d\mathbf{l}

as shown in fig (--).

At each point on the path we take the dot product of the vector field E (evaluated at that point) with the displacement vector dl to the next point on the path. Ordinarily, the value of a line integral depends critically on the particular path taken from a to b, but there is an important special class of vector functions for which the line integral is independent of the path and is determined entirely by the end points. It will be our business in due course to characterize this special class of vectors. (A force that has this property is called conservative.) For a conservative vector field,

\oint \mathbf{E}\cdot d\mathbf{l} = 0

For example, in physics, as shown in fig (--), suppose a conducting wire of arbitrary shape is represented as an open path l from point a to point b, l(a,b). A voltage source is connected between its ends; then the potential difference between the two points is

V = -\int_a^b \mathbf{E}\cdot d\mathbf{l}

Note that the potential difference is independent of the shape of the path between the two points a and b. If the two terminals of the wire are connected together, the potential difference becomes zero, and

\int_a^b \mathbf{E}\cdot d\mathbf{l} = 0

The open path l(a,b) becomes a closed path L(S) enclosing an open surface S, so this equation can be rewritten as

\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l} = 0

Surface integral (electric flux)
A surface integral over an open surface S is an expression of the form

\int_S \mathbf{E}\cdot d\mathbf{S}

where E is again some vector field, and dS is an infinitesimal patch of area (an area vector), with direction perpendicular to the surface (Fig. 1.22).

There are, of course, two directions perpendicular to any surface, so the sign of a surface integral is intrinsically ambiguous. If the surface is closed (forming a "balloon"), I shall again put a circle on the integral sign:

\oint \mathbf{E}\cdot d\mathbf{S}

Then tradition dictates that "outward" is positive, but for open surfaces the choice is arbitrary. If E describes the flow of a fluid (mass per unit area per unit time), then the surface integral represents the total mass per unit time passing through the surface, hence the alternative name, "flux."

The fundamental theorem of calculus
Suppose f(x) is a function of one variable. The fundamental theorem of calculus states:

\int_a^b \left(\frac{df}{dx}\right)dx = f(b) - f(a)

Geometrical Interpretation: According to the relation df = (df/dx) dx, the quantity df is the infinitesimal change in f when you go from x to x + dx. The fundamental theorem of calculus says that if you chop the interval from a to b into many tiny pieces, dx, and add up the increments df from each little piece, the result is (not surprisingly) equal to the total change in f: f(b) - f(a). In other words, there are two ways to determine the total change in the function: either subtract the values at the ends or go step-by-step, adding up all the tiny increments as you go. You'll get the same answer either way.
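As a quick worked illustration (the function here is chosen only for this example), take f(x) = x² on the interval from a = 1 to b = 2:

\int_1^2 \frac{df}{dx}\,dx = \int_1^2 2x\,dx = \left[x^2\right]_1^2 = 4 - 1 = 3 = f(2) - f(1)

Adding up all the small increments reproduces the difference of the end-point values, as the theorem requires.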

Notice the basic format of the fundamental theorem of calculus: the integral of a derivative over an interval is given by the value of the function at the end points (boundaries). In vector calculus there are three species of derivative (gradient, divergence, and curl), and each has its own "fundamental theorem," with essentially the same format.

The fundamental theorem for the gradient (gradient theorem)
Suppose we have a scalar function of three variables, φ(x, y, z). Starting at point a, we move a small distance dl, as in fig (--). According to the definition of the gradient, the function φ will change by an amount

d\varphi = (\nabla\varphi)\cdot d\mathbf{l}

Now we move a little further, by an additional small displacement dl; the incremental change in φ will again be (∇φ)·dl. In this manner, proceeding by infinitesimal steps, we make the journey to point b. At each step we compute the gradient of φ (at that point) and dot it into the displacement vector dl. This gives us the change in φ.

Adding up all these contributions gives

\varphi(b) - \varphi(a) = \int_a^b (\nabla\varphi)\cdot d\mathbf{l}

This equation expresses the fundamental theorem for gradients, which means that to calculate the change of a scalar field over an open path we can use two methods: the first method is to calculate the difference between the values at the two ends of the open path (the left-hand side of the equation), and the second method is to calculate the integral of the gradient over each element dl of the open path (the right-hand side of the equation). Like the "ordinary" fundamental theorem, it says that the integral (here a line integral) of a derivative (here the gradient) is given by the value of the function at the boundaries (a and b).

Geometrical Interpretation: Suppose you wanted to determine the height of the Eiffel Tower. You could climb the stairs, using a ruler to measure the rise at each step, and add them all up (that's the right side of the equation), or you could place altimeters at the top and the bottom and subtract the two readings (that's the left side); you should get the same answer either way (that's the fundamental theorem). Incidentally, line integrals ordinarily depend on the path taken from a to b. But the right side of the equation makes no reference to the path, only to the end points. Evidently, gradients have the special property that their line integrals are path independent:

Corollary 1: The right-hand side, \int_a^b (\nabla\varphi)\cdot d\mathbf{l}, is independent of the path taken from a to b.
Corollary 2: Over a closed path the right-hand side is equal to zero, \oint (\nabla\varphi)\cdot d\mathbf{l} = 0, since the beginning and end points are identical, and hence φ(b) - φ(a) = 0.

The fundamental theorem for divergence (divergence, or Gauss, theorem)
Previously, we discussed the total (or net) electric flux through a closed surface A enclosing a volume V (denoted A(V)); it is given by
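As a short worked example of the gradient theorem (the scalar field and the path are chosen only for this illustration), let φ = xy² and take the straight path from a = (0, 0, 0) to b = (2, 1, 0), parameterized by x = 2t, y = t, z = 0 with 0 ≤ t ≤ 1. Then

\nabla\varphi = (y^2,\ 2xy,\ 0), \qquad d\mathbf{l} = (2\,dt,\ dt,\ 0)

\int_a^b (\nabla\varphi)\cdot d\mathbf{l} = \int_0^1 (2t^2 + 4t^2)\,dt = \int_0^1 6t^2\,dt = 2 = \varphi(b) - \varphi(a)

Any other path from a to b gives the same value, 2, which is the path independence stated in Corollary 1.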

\Phi_E = \oint_{A(V)} \mathbf{E}\cdot d\mathbf{A}

Therefore, the electric flux density is the net electric flux per unit volume V enclosed by the closed surface A:

\frac{\Phi_E}{V} = \frac{1}{V}\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A}

This flux density, the flux through the closed surface divided by the volume it encloses, becomes (in the limit of a vanishingly small volume) the divergence of the vector field E, that is,

\nabla\cdot\mathbf{E} = \lim_{V\to 0}\frac{1}{V}\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A}

Summing the divergence over the whole volume then reproduces the total flux through its boundary:

\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A} = \int_V (\nabla\cdot\mathbf{E})\,dV

This equation gives the fundamental theorem of divergence. This theorem has at least three special names: Gauss's theorem, Green's theorem, or, simply, the divergence theorem. Like the other "fundamental theorems," it says that the integral of a derivative (in this case the divergence) over a region (in this case a volume) is equal to the value of the function at the boundary (in this case the surface that bounds the volume). Notice that the boundary term is itself an integral (specifically, a surface integral). This is reasonable: the "boundary" of a line is just two end points, but the boundary of a volume is a (closed) surface.

Geometrical Interpretation: If E represents the flow of an incompressible fluid, then the flux of E (the left side of Eq. 1.56) is the total amount of fluid passing out through the surface, per unit time. Now, the divergence measures the "spreading out" of the vectors from a point; a place of high divergence is like a "faucet," pouring out liquid. If we have lots of faucets in a region filled with incompressible fluid, an equal amount of liquid will be forced out through the boundaries of the region. In fact, there are two ways we could determine how much is being produced: (a) we could count up all the faucets, recording how much each puts out, or (b) we could go around the boundary, measuring the flow at each point, and add it all up. You get the same answer either way:

\oint \mathbf{E}\cdot d\mathbf{A} = \int_V (\nabla\cdot\mathbf{E})\,dV

This, in essence, is what the divergence theorem says.
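A short worked check of the divergence theorem (the field and the region are chosen only for this illustration): take E = x î + y ĵ + z k̂ and let the region be the unit cube 0 ≤ x, y, z ≤ 1. Then ∇·E = 3, so

\int_V (\nabla\cdot\mathbf{E})\,dV = \int_0^1\!\int_0^1\!\int_0^1 3\,dx\,dy\,dz = 3

For the surface integral, the face x = 1 contributes a flux of 1 (there E·n̂ = E_x = 1 over unit area), the face x = 0 contributes 0 (there E_x = 0), and likewise for the two y faces and the two z faces, giving a total outward flux of 3, in agreement with the volume integral.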

The fundamental theorem for curl (curl, or Stokes, theorem)
From Faraday's law of induction, the total electromotive force around a closed path L enclosing an open surface S, L(S), is given by

\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l}

The electromotive force per unit area of the open surface is

\frac{1}{S}\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l}

In the limit of a vanishingly small surface, this circulation per unit area gives the curl of the vector field E (more precisely, its component along the normal to S):

(\nabla\times\mathbf{E})\cdot\hat{n} = \lim_{S\to 0}\frac{1}{S}\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l}

Summing over the whole open surface then reproduces the circulation around its boundary:

\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l} = \int_S (\nabla\times\mathbf{E})\cdot d\mathbf{S}

This equation gives the fundamental theorem for curl. As always, the integral of a derivative (here, the curl) over a region (here, a patch of open surface) is equal to the value of the function at the boundary (here, the perimeter of the open surface patch). As in the case of the divergence theorem, the boundary term is itself an integral, specifically a closed line integral.

Geometrical Interpretation: Recall that the curl measures the "twist" of the vectors E; a region of high curl is a whirlpool: if you put a tiny paddle wheel there, it will rotate.

Now, the integral of the curl over some surface (or, more precisely, the flux of the curl through that surface) represents the "total amount of swirl," and we can determine that swirl just as well by going around the edge and finding how much the flow is following the boundary (Fig. 1.31). You may find this a rather forced interpretation of Stokes' theorem, but it's a helpful mnemonic, if nothing else. You might have noticed an apparent ambiguity in Stokes' theorem: concerning the boundary line integral, which way are we supposed to go around (clockwise or counterclockwise)? If we go the "wrong" way we'll pick up an overall sign error. The answer is that it doesn't matter which way you go as long as you are consistent, for there is a compensating sign ambiguity in the surface integral: Which way does dS point? For a closed surface (as in the divergence theorem) dS points in the direction of the outward normal; but for an open surface, which way is "out"? Consistency in Stokes' theorem (as in all such matters) is given by the right-hand rule: If your fingers point in the direction of the line integral, then your thumb fixes the direction of dS (Fig. 1.32). Now, there are plenty of surfaces (infinitely many) that share any given boundary line. Twist a paper clip into a loop and dip it in soapy water. The soap film constitutes a surface, with the wire loop as its boundary. If you blow on it, the soap film will expand, making a larger surface, with the same boundary. Ordinarily, a flux integral depends critically on what surface you integrate over, but evidently this is not the case with curls. For Stokes' theorem says that \int_S (\nabla\times\mathbf{E})\cdot d\mathbf{S} is equal to the line integral of E around the boundary, and the latter makes no reference to the specific surface you choose.

Corollary 1: \int_S (\nabla\times\mathbf{E})\cdot d\mathbf{S} depends only on the boundary line, not on the particular surface used.

Corollary 2: \oint (\nabla\times\mathbf{E})\cdot d\mathbf{S} = 0 for any closed surface, since the boundary line, like the mouth of a balloon, shrinks down to a point, and hence the right side of Eq. 1.57 vanishes.

Differential Maxwell's equations
Maxwell's equations in different forms
The Maxwell's equations derived previously are the laws of electricity and magnetism; they are given by:

Maxwell's equations in symmetrical form

\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0}

\oint_{A(V)} \mathbf{B}\cdot d\mathbf{A} = 0

\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l} = -\frac{d\Phi_B}{dt}

\oint_{L(S)} \mathbf{B}\cdot d\mathbf{l} = \mu_0 I_{\text{enc}} + \mu_0\varepsilon_0\frac{d\Phi_E}{dt}

Maxwell's equations in symmetrical form can be converted to integral and differential forms as follows:

First Maxwell's equation

\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0}

Writing the enclosed charge as a volume integral of the charge density, Q_enc = ∫_V ρ dV, gives

\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A} = \frac{1}{\varepsilon_0}\int_V \rho\,dV

Applying the divergence theorem to the left side,

\int_V (\nabla\cdot\mathbf{E})\,dV = \frac{1}{\varepsilon_0}\int_V \rho\,dV

Because this holds for any volume V, the integrands must be equal:

\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}

Second Maxwell's equation

\oint_{A(V)} \mathbf{B}\cdot d\mathbf{A} = 0

Applying the divergence theorem,

\int_V (\nabla\cdot\mathbf{B})\,dV = 0 \quad\Rightarrow\quad \nabla\cdot\mathbf{B} = 0

Third Maxwell's equation

\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l} = -\frac{d\Phi_B}{dt} = -\int_S \frac{\partial \mathbf{B}}{\partial t}\cdot d\mathbf{S}

Applying Stokes' theorem to the left side,

\int_S (\nabla\times\mathbf{E})\cdot d\mathbf{S} = -\int_S \frac{\partial \mathbf{B}}{\partial t}\cdot d\mathbf{S}

Because this holds for any open surface S,

\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}

Fourth Maxwell's equation

\oint_{L(S)} \mathbf{B}\cdot d\mathbf{l} = \mu_0 I_{\text{enc}} + \mu_0\varepsilon_0\frac{d\Phi_E}{dt}

Writing the enclosed current as a surface integral of the current density, I_enc = ∫_S J·dS, and the electric flux as Φ_E = ∫_S E·dS, gives

\oint_{L(S)} \mathbf{B}\cdot d\mathbf{l} = \mu_0\int_S \mathbf{J}\cdot d\mathbf{S} + \mu_0\varepsilon_0\int_S \frac{\partial \mathbf{E}}{\partial t}\cdot d\mathbf{S}

Applying Stokes' theorem to the left side,

\int_S (\nabla\times\mathbf{B})\cdot d\mathbf{S} = \int_S \left(\mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial \mathbf{E}}{\partial t}\right)\cdot d\mathbf{S}

Because this holds for any open surface S,

\nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial \mathbf{E}}{\partial t}

Maxwell's equations in integral form

\oint_{A(V)} \mathbf{E}\cdot d\mathbf{A} = \frac{1}{\varepsilon_0}\int_V \rho\,dV

\oint_{A(V)} \mathbf{B}\cdot d\mathbf{A} = 0

\oint_{L(S)} \mathbf{E}\cdot d\mathbf{l} = -\int_S \frac{\partial \mathbf{B}}{\partial t}\cdot d\mathbf{S}

\oint_{L(S)} \mathbf{B}\cdot d\mathbf{l} = \mu_0\int_S \mathbf{J}\cdot d\mathbf{S} + \mu_0\varepsilon_0\int_S \frac{\partial \mathbf{E}}{\partial t}\cdot d\mathbf{S}

Maxwell's equations in differential form

\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial \mathbf{E}}{\partial t}

Solution of Maxwell's equations in differential form
Electromagnetic waves in vacuum: the wave equations for E and B
In regions of space where there is no charge or current, ρ = 0 and J = 0, and Maxwell's equations in vacuum are

\nabla\cdot\mathbf{E} = 0, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\varepsilon_0\frac{\partial \mathbf{E}}{\partial t}

They constitute a set of coupled, first-order partial differential equations for E and B. They can be decoupled by applying the curl to the third and fourth equations. The curl of the third equation gives

\nabla\times(\nabla\times\mathbf{E}) = -\frac{\partial}{\partial t}(\nabla\times\mathbf{B})

Substituting from the fourth equation,

\nabla\times(\nabla\times\mathbf{E}) = -\mu_0\varepsilon_0\frac{\partial^2 \mathbf{E}}{\partial t^2}

For the left side, apply the vector identity

\nabla\times(\nabla\times\mathbf{E}) = \nabla(\nabla\cdot\mathbf{E}) - \nabla^2\mathbf{E}

Then, by using the first law, ∇·E = 0,

\nabla^2\mathbf{E} = \mu_0\varepsilon_0\frac{\partial^2 \mathbf{E}}{\partial t^2}

The curl of the fourth equation gives

\nabla\times(\nabla\times\mathbf{B}) = \mu_0\varepsilon_0\frac{\partial}{\partial t}(\nabla\times\mathbf{E})

Substituting from the second equation and the third equation,

\nabla^2\mathbf{B} = \mu_0\varepsilon_0\frac{\partial^2 \mathbf{B}}{\partial t^2}

Finally, two equations were obtained:

\nabla^2\mathbf{E} = \mu_0\varepsilon_0\frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad \nabla^2\mathbf{B} = \mu_0\varepsilon_0\frac{\partial^2 \mathbf{B}}{\partial t^2}

We now have separate equations for E and B, but they are of second order. In vacuum, then, each Cartesian component of E and B satisfies the three-dimensional wave equation

\nabla^2 f = \frac{1}{v^2}\frac{\partial^2 f}{\partial t^2}

So Maxwell's equations imply that empty space supports the propagation of electromagnetic waves, traveling at a speed

v = c = \frac{1}{\sqrt{\mu_0\varepsilon_0}}

This is the speed of light: light is an electromagnetic wave. Of course, this conclusion does not surprise anyone today, but imagine what a revelation it was in Maxwell's time! Remember how ε₀ and μ₀ came into the theory in the first place: they were constants in Coulomb's law and the Biot-Savart law, respectively. You measure them in experiments involving charged pith balls, batteries, and wires, experiments having nothing whatever to do with light. And yet, according to Maxwell's theory, you can calculate c from these two numbers. Notice the crucial role played by Maxwell's contribution to Ampere's law; without it, the wave equation would not emerge, and there would be no electromagnetic theory of light.

Monochromatic plane waves
The simplest plane-wave solutions are sinusoidal waves of a single frequency (monochromatic), traveling, say, in the x direction:

\mathbf{E}(x,t) = \mathbf{E}_0\cos(kx - \omega t), \qquad \mathbf{B}(x,t) = \mathbf{B}_0\cos(kx - \omega t)

The theory of vector fields
Helmholtz theorem
Ever since Faraday, the laws of electricity and magnetism have been expressed in terms of electric and magnetic fields, E and B. Like many physical laws, these are most compactly expressed as differential equations. Since E and B are vectors, the differential equations naturally involve vector derivatives: divergence and curl. Indeed, Maxwell reduced the entire theory to four equations, specifying respectively the divergence and the curl of E and B.

First equation (Gauss's law of electrostatics): \nabla\cdot\mathbf{E} = \rho/\varepsilon_0
Second equation (Gauss's law of magnetostatics): \nabla\cdot\mathbf{B} = 0
Third equation (Faraday's law of electromagnetics): \nabla\times\mathbf{E} = -\partial\mathbf{B}/\partial t
Fourth equation (Ampere-Maxwell law of electrodynamics): \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\,\partial\mathbf{E}/\partial t

Maxwell's formulation raises an important mathematical question: To what extent is a vector function determined by its divergence and curl? In other words, if I tell you that the divergence of E or of B is a specified scalar field, as in the first and second equations, and that the curl of E or of B is a specified vector field, as in the third and fourth equations, can you then determine the functions E and B?

As a consistency check, if

\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}

then taking the divergence of both sides gives

\nabla\cdot(\nabla\times\mathbf{E}) = -\frac{\partial}{\partial t}(\nabla\cdot\mathbf{B})

Proof
The left side:

\nabla\cdot(\nabla\times\mathbf{E}) = \frac{\partial}{\partial x}\left(\frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z}\right) + \frac{\partial}{\partial y}\left(\frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x}\right) + \frac{\partial}{\partial z}\left(\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y}\right) = 0

since the mixed partial derivatives cancel in pairs.

The right side:

-\frac{\partial}{\partial t}(\nabla\cdot\mathbf{B})

From the second Maxwell's equation, ∇·B = 0, so

-\frac{\partial}{\partial t}(\nabla\cdot\mathbf{B}) = 0

Both sides vanish, so the divergence and curl equations are mutually consistent.

Potential theory
Theorem 1: scalar potential and irrotational field
If the curl of a vector field vanishes everywhere, then it can be written as the gradient of a scalar potential. In physics, the electrostatic field in Gauss's law (not the induced electric field in Faraday's law) is a curl-less vector field; that is, if

\nabla\times\mathbf{E} = 0

then it must be that

\mathbf{E} = -\nabla\varphi

where φ is the electrostatic potential, and

\nabla\times\mathbf{E} = -\nabla\times(\nabla\varphi) = 0

That is, the curl of a gradient equals zero. A vector field E that satisfies this condition (a curl-less vector field) is called an irrotational vector field. The following conditions are equivalent (that is, E satisfies one if and only if it satisfies all the others):
(a) ∇×E = 0 everywhere.
(b) \int_a^b \mathbf{E}\cdot d\mathbf{l} is independent of path, for any given end points.
(c) \oint \mathbf{E}\cdot d\mathbf{l} = 0 for any closed loop.
(d) E is the gradient of some scalar: E = -∇φ.
The scalar potential is not unique; any constant can be added to φ with impunity, since this will not affect its gradient.
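As a small illustration of Theorem 1 (a SymPy sketch; the field E = (y, x, 0) and its potential φ = -xy are invented for this example), a curl-free field can indeed be written as the gradient of a scalar:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')
E = N.y*N.i + N.x*N.j                    # an example curl-free (irrotational) field

print(curl(E))                           # 0 -> E is irrotational

phi = -N.x*N.y                           # candidate scalar potential, so that E = -grad(phi)
print(E + gradient(phi))                 # 0 -> confirms E = -grad(phi)
```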

Theorem 2: vector potential and solenoidal field
If the divergence of a vector field vanishes everywhere, then it can be expressed as the curl of a vector potential. In physics, the divergence of the magnetic field B equals zero (the second Maxwell's equation); if

\nabla\cdot\mathbf{B} = 0

then it must be that

\mathbf{B} = \nabla\times\mathbf{A}

where

\nabla\cdot\mathbf{B} = \nabla\cdot(\nabla\times\mathbf{A}) = 0

The divergence of a curl equals zero. A vector field satisfying this condition is called a solenoidal field. That is the main conclusion of the following theorem:

Theorem 2: Divergence-less (or "solenoidal") fields. The following conditions are equivalent:
(a) ∇·B = 0 everywhere.
(b) \int_S \mathbf{B}\cdot d\mathbf{S} is independent of surface, for any given boundary line.
(c) \oint \mathbf{B}\cdot d\mathbf{S} = 0 for any closed surface.
(d) B is the curl of some vector: B = ∇×A.
The vector potential is not unique; the gradient of any scalar function can be added to A without affecting the curl, since the curl of a gradient is zero.



Electrostatics
Electrostatics is a branch of physics that studies electric charges at rest.

Electric force: Coulomb's law

\mathbf{F} = \frac{1}{4\pi\varepsilon_0}\frac{q_1 q_2}{r^2}\hat{r}

Electric field

\mathbf{E} = \frac{\mathbf{F}}{q_0} = \frac{1}{4\pi\varepsilon_0}\frac{q}{r^2}\hat{r}

Charge configuration: continuous charge distribution
A charge can be configured as a point charge q, a point test charge q₀, or a group of charges Q = q₁ + q₂ + q₃ + … . If the charge is spread out along a line, with charge per unit length λ (the linear charge density), the total charge is

Q = \int \lambda\,dl

For a surface charge density σ,

Q = \int \sigma\,dA

For a volume charge density ρ,

Q = \int \rho\,dV

Divergence and curl of the electrostatic field
Divergence of the electrostatic field (Gauss's law):

\oint \mathbf{E}\cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0} \quad\Longleftrightarrow\quad \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}

Curl of the electrostatic field: for the field of a point charge q at the origin, the line integral along an open path l(a,b) is

\int_a^b \mathbf{E}\cdot d\mathbf{l} = \frac{q}{4\pi\varepsilon_0}\left(\frac{1}{r_a} - \frac{1}{r_b}\right)

where r_a is the distance from the origin to the point a and r_b is the distance to b. For a closed path L(S), r_b = r_a, so the integral around a closed path is evidently zero:

\oint \mathbf{E}\cdot d\mathbf{l} = 0

and hence, by Stokes' theorem,

\nabla\times\mathbf{E} = 0
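A quick numerical check of the line-integral result above (a sketch; the charge value and the radii are invented for this example): integrate the point-charge field along a radial path and compare with q/(4πε₀)(1/r_a − 1/r_b).

```python
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity, C^2/(N·m^2)
q = 1e-9                  # example point charge, 1 nC
ra, rb = 0.1, 0.5         # example start and end distances from the charge, in meters

# Numerical line integral of E = q/(4*pi*eps0*r^2) along a radial path from ra to rb
r = np.linspace(ra, rb, 100001)
E = q / (4*np.pi*eps0*r**2)
numeric = np.sum((E[:-1] + E[1:]) / 2 * np.diff(r))   # trapezoidal rule

analytic = q/(4*np.pi*eps0) * (1/ra - 1/rb)
print(numeric, analytic)  # the two values agree to high accuracy
```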

