Thursday, December 8, 2011

Renewable Energy

Renewable energy is electric power that is generated from renewable sources of energy such as wind power, solar power, geothermal energy, and hydroelectric energy. Renewable energy is easily replenished by nature and, unlike fossil fuels, is a cleaner, non-carbon-polluting source of energy. Renewable energy sources are often referred to as emerging energy technologies.

Recently, the costs of leading renewable energy technologies have dropped so much that they now compete with traditional sources of energy. The best advice is to consider your options. A great deal of reliable information is available from a variety of leading sources, such as renewable energy associations, consultants, and wind and solar equipment manufacturers.

Renewable energy electricity production is expected to expand significantly over the coming years in the developed world. This represents an opportunity for developed countries (large electricity consumers) to develop and commercialize new technologies that compete with traditional fossil-fuel-based technologies, and thereby to manufacture products and offer services in support of a growing industry.

Renewable energy is power that is generated from natural resources such as sunlight (through photovoltaic solar cells), wind (through wind turbines), and water (through dams and hydroelectric power plants). In 2006, about 18 per cent of the world's final energy consumption came from renewable sources, with 13 per cent coming from traditional biomass, such as wood burning. Hydroelectricity was the next largest renewable source, providing 3 per cent of final energy consumption (15 per cent of global electricity generation), followed by solar hot water/heating, which contributed 1.3 per cent. Modern technologies, such as geothermal energy, wind power, solar power, and ocean energy, together provided some 0.8 per cent.

The term "renewable energy" may not be equal to the term “green” energy. This is because typically the term green energy refers to energy from renewable systems that are smaller than conventional, large-scale electric power generation, including various renewable energy systems. For example, some large-scale hydro-electric projects require large dams and vast reservoirs that flood huge tracks of wilderness. Conversely, low-capacity hydroelectric plants use "low head" water as it turns downstream in order to generate electric power. This results in less impact on the environment.

Although renewable energy is quickly replenished, some of these sources depend greatly on whether the sun is shining or the wind is blowing.

When renewable energy is converted into electric power, it is often transmitted into an electric power grid, where it joins the electricity "pool" along with power from non-renewable sources. Governments and electric utilities are working to increase the overall proportion of electricity produced from renewable energy.

Governments and energy experts are taking a new interest in renewable energy for several reasons. Electric power production from renewable energy sources emits far less carbon dioxide and other greenhouse gases, which are having an impact on the world's changing climate. Renewable energy also usually adds fewer other pollutants to the atmosphere, including the following:

sulphur dioxide and nitrogen oxides, the gases that form the largest components of "acid rain";
fossil fuel particulate matter, which, combined with ground-level ozone, constitutes "smog" on hot summer days;
mercury, which can be transformed in the environment into a highly toxic substance and a threat to living creatures.
When the world uses low-impact renewable energy sources, we help to protect the environment. When large-scale non-renewable (fossil fuel) energy projects are developed, they have the potential to affect watersheds, animal and bird migration routes, and more.

Electrical Transformers

Electrical transformers are used to "transform" voltage from one level to another, usually from a higher voltage to a lower voltage. They do this by applying the principle of magnetic induction between coils to convert voltage and/or current levels.

In other words, an electrical transformer is a passive device that transfers alternating current ("AC") electric energy from one circuit to another through electromagnetic induction. An electrical transformer normally consists of a ferromagnetic core and two or more coils called "windings". A changing current in the primary winding creates an alternating magnetic field in the core. The core concentrates this field and couples most of the flux through the secondary windings, which in turn induces an alternating voltage (or emf) in each of the secondary coils.


Electrical transformers can be configured as either single-phase or three-phase. There are several important specifications to consider when searching for electrical transformers, including maximum secondary voltage rating, maximum secondary current rating, maximum power rating, and output type. An electrical transformer may provide more than one secondary voltage value. The rated power is the sum of the VA (volts x amps) for all of the secondary windings. Output choices include AC or DC. For alternating current output, the voltage values are typically given as RMS values; consult the manufacturer for waveform options. For direct current secondary voltage output, consult the manufacturer for the type of rectification.
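As a rough illustration of these ratings, here is a minimal Python sketch (the turns counts, voltages and currents are invented example values, not data for any real transformer) showing how the ideal turns ratio sets each secondary voltage and how the rated power is the sum of the VA of the secondary windings:

# Hypothetical example values -- consult the nameplate of a real transformer.
primary_voltage = 240.0          # RMS volts applied to the primary
primary_turns = 800              # turns on the primary winding
secondaries = [                  # (turns, rated current in amps) per secondary winding
    (40, 5.0),                   # roughly a 12 V winding
    (80, 2.0),                   # roughly a 24 V winding
]

total_va = 0.0
for turns, rated_current in secondaries:
    # Ideal-transformer approximation: Vs / Vp = Ns / Np
    secondary_voltage = primary_voltage * turns / primary_turns
    va = secondary_voltage * rated_current          # VA = volts x amps
    total_va += va
    print(f"{secondary_voltage:5.1f} V at {rated_current} A -> {va:.0f} VA")

print(f"Rated power (sum of secondary VA): {total_va:.0f} VA")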

Cores can be constructed as either toroidal or laminated. Toroidal units typically have copper wire wrapped around a cylindrical core so the magnetic flux, which occurs within the coil, doesn't leak out; the coil efficiency is good, and the magnetic flux has little influence on other components. Laminated refers to laminated-steel cores. These steel laminations are insulated with a non-conducting material, such as varnish, and then formed into a core that reduces electrical losses. There are many types of transformer, including autotransformer, control, current, distribution, general-purpose, instrument, isolation, potential (voltage), power, step-up, and step-down. Mountings include chassis mount, dish or disk mount, enclosure or free-standing, H frame, and PCB mount.

Wind Power

Wind power can be an excellent complement to a solar power system. Here in Colorado, when the sun isn't shining, the wind is usually blowing. Wind power is especially helpful here in the winter to capture both the ferocious and gentle mountain winds during the times of least sunlight and highest power use. In most locations, wind is not suitable as the only source of power; it simply fills in the gaps left by solar power quite nicely.
Building a wind generator from scratch is not THAT difficult a project. You will need a shop with basic power and hand tools, and some degree of dedication. Large wind generators of 2000 watts and up are a major project needing very strong construction, but smaller ones in the 700-1000 watt, 8-11 foot range can be built fairly easily. In fact, it is highly recommended that you tackle a smaller wind turbine before even thinking about building a large one. You'll need to be able to cut and weld steel, and a metal lathe can be handy.

Electrical Testing Equipment

Digital Multimeters
With a good wiring diagram and a good multimeter, a trained electrical professional can find the cause of almost any problem.

There are two basic types of multimeter: digital and analog. Analog multimeters have a needle, and digital multimeters have an LCD or LED display. With today's demand for accuracy in testing electrical systems, it makes more sense to have a digital multimeter, but an analog multimeter still has its uses.

This article focuses on digital multimeters. A digital multimeter will have many functions built into it. As with any tool or piece of equipment, it is necessary to make certain you read and follow digital multimeter instructions and cautions. This will protect you and your electrical equipment.

All digital multimeters will test for voltage, current and resistance. These are the three functions needed when trying to diagnose a problem. When you purchase a digital multimeter, one of the most important things to look at is the meter's impedance, which is the meter's operating resistance. Most digital multimeters have very high impedance. Since the meter is part of the circuit being tested, its resistance will affect the current flow through that circuit.

Typical Amperage Test
If a digital multimeter has a low impedance or resistance, it will cause a slight increase in the circuit's current. This becomes a concern when you test electronic systems, because the increased current draw can damage the components being tested or, at the very least, alter the readings or change a sensor signal. It's best to get a meter that has an impedance of at least 10 megohms. That way the current draw is so low it becomes negligible.
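To see why the meter's impedance matters, here is a small Python sketch (the source voltage and resistor values are invented for illustration) that models the voltmeter as a resistance in parallel with the component being measured: a 10-megohm meter barely disturbs the circuit, while a low-impedance meter loads it noticeably.

# Hypothetical circuit: a 12 V source feeding two 100 kilohm resistors in series.
# We "measure" the voltage across the lower resistor with meters of different impedance.
V_SOURCE = 12.0
R1 = 100e3          # upper resistor, ohms
R2 = 100e3          # lower resistor, ohms (true voltage across it is 6 V)

def reading_with_meter(meter_impedance_ohms):
    # The meter appears in parallel with R2, lowering the effective resistance.
    r_parallel = (R2 * meter_impedance_ohms) / (R2 + meter_impedance_ohms)
    return V_SOURCE * r_parallel / (R1 + r_parallel)

for z in (10e6, 1e6, 100e3):    # 10 megohm, 1 megohm, and 100 kilohm meters
    print(f"Meter impedance {z/1e6:4.1f} Mohm -> reading {reading_with_meter(z):.2f} V")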

Almost all digital multimeters have an "auto-range" feature that will automatically select the proper range. Some digital multimeters let you override this feature and manually select the range you want. Other DMMs do not have auto-ranging and must be set manually. Check the documentation that came with your digital multimeter and make sure you know and understand its different ranges.

Most digital multimeters that have auto-range will show the range setting either before or after the reading. Resistance is displayed in multiples of 1,000 using the designations 'K' or 'M', with 'K' standing for 1,000 ohms and 'M' standing for 1,000,000 ohms. Current is displayed as mA (milliamps, or 1/1000 of an amp) or A for full amps. Voltage is likewise displayed as mV or V. When you take a reading with a DMM that has auto-range, be sure to note what range the meter is on; you could mistake 10 mA for 10 amps.
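As a reminder of what those prefixes mean, here is a short Python sketch (a made-up helper, not part of any meter's firmware) that converts auto-range readings into base units, so a 10 mA reading is never mistaken for 10 A:

# Hypothetical helper: convert a (value, prefix) pair from an auto-ranging meter
# into base units (amps, volts or ohms).
PREFIX_MULTIPLIERS = {
    "m": 1e-3,     # milli
    "":  1.0,      # no prefix
    "k": 1e3,      # kilo (the 'K' on many meter displays)
    "M": 1e6,      # mega
}

def to_base_units(value, prefix):
    return value * PREFIX_MULTIPLIERS[prefix]

print(to_base_units(10, "m"))   # 10 mA    -> 0.01 A, not 10 A
print(to_base_units(4.7, "k"))  # 4.7 kohm -> 4700 ohms
print(to_base_units(1.2, "M"))  # 1.2 Mohm -> 1,200,000 ohms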

Typical Voltage Test
Most digital multimeters that have auto-range will show the reading with a decimal point. A reading of 1.2 amps will look like 12 amps if you ignore the decimal point.

Digital multimeters do have a limit on how much current they can test. Usually this limit is printed at the point where the red lead plugs into the meter. If it says, "10 Amps Max" then there is a 10-amp fuse inside the meter that will blow if the current is above 10 amps. If you take out the 10-amp fuse and put in a 20-amp fuse, you will burn out the meter beyond repair. I would suggest buying a DMM that will handle at least 20 amps for automotive testing.

Many digital multimeters have an inductive pickup that clamps around the wire being tested. These ammeters measure amperage based on the magnetic field created by the current flowing through the wire. DMMs that have an inductive pickup usually will read higher current and have a higher limit. Since this type of meter doesn't become part of the circuit you do not need to disconnect any wires to get a reading.

Voltmeters are usually connected across a circuit. You can perform two types of tests with a voltmeter. If you connect it from the positive terminal of a component to ground, you will read the amount of voltage there is to operate the component. It will usually read 0 volts or full voltage. If you test a component that is supposed to have 12 volts, but there is 0 volts, there is an open in the circuit. This is where you will have to trace back until you locate the open.

Typical Resistance Test
Another useful function of the digital multimeter is the ohmmeter. An ohmmeter measures the electrical resistance of a circuit. If you have no resistance in a circuit, the ohmmeter will read 0. If you have an open in a circuit, it will read infinite.

An ohmmeter uses its own battery to conduct a resistance test. Therefore there must be no power in the circuit being tested or the ohmmeter will become damaged.

When a component is tested, the red lead is placed on the positive side and the black lead on the negative side. Current from the meter's battery flows through the component, and the meter determines the resistance from how much the voltage drops. If the component has an open, the meter will flash "1.000" or "OL" to show an open, or infinite, resistance. A reading of 0 ohms indicates that there is no resistance in the component and it is shorted. If a component is supposed to have 1,000 ohms of resistance and a test shows it has 100 ohms, that indicates a short; if it reads infinite, then it is open.
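The decision logic described above can be summarized in a few lines of Python (a hypothetical sketch with a made-up 10% tolerance, not a rule taken from any meter manual):

def interpret_resistance(measured_ohms, expected_ohms, tolerance=0.10):
    # Hypothetical rule of thumb based on the reasoning above.
    if measured_ohms == float("inf"):            # the meter shows "OL"
        return "open (infinite resistance)"
    if measured_ohms < expected_ohms * (1 - tolerance):
        return "shorted (resistance well below specification)"
    if measured_ohms > expected_ohms * (1 + tolerance):
        return "higher than specified -- check connections"
    return "within specification"

print(interpret_resistance(float("inf"), 1000))  # open
print(interpret_resistance(100, 1000))           # shorted
print(interpret_resistance(1000, 1000))          # within specification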

Analog ohmmeters need to be calibrated before they are used. There is an "ohms adjust" screw on the meter used to do the calibration. To calibrate the ohmmeter, you touch the red and black leads together and turn the adjusting screw until the needle reads 0. You should do this each time you use the ohmmeter and each time you change scales. DMMs do not need this adjustment since they calibrate themselves; holding the two leads together will confirm that they are, indeed, reading correctly.

To check a wire in a harness, you connect one lead to one end of the wire and the other lead to the other end. If the wire is good, you will get a reading at or near 0 ohms. If it is broken, you will get an infinite reading. This is useful in determining why a particular component is not getting power. Just be sure you isolate the wire from the circuit so your ohmmeter does not get damaged.

These are the three basic functions of all digital multimeters. Some digital multimeters will have many other features such as averaging where it will take a reading over a period of time and average it out. Some have a MIN/MAX feature that will hold the highest/lowest reading. Some will do specific diode tests, measure injector pulse times and even have thermometers.

Sunday, November 27, 2011

Integrated Circuit

The History of the Integrated Circuit

What we didn't realize then was that the integrated circuit would reduce the cost of electronic functions by a factor of a million to one; nothing had ever done that for anything before.

The Integrated Circuit
It seems that the integrated circuit was destined to be invented. Two separate inventors, unaware of each other's activities, invented almost identical integrated circuits or ICs at nearly the same time.
Jack Kilby, an engineer with a background in ceramic-based silk screen circuit boards and transistor-based hearing aids, started working for Texas Instruments in 1958. A year earlier, research engineer Robert Noyce had co-founded the Fairchild Semiconductor Corporation. From 1958 to 1959, both electrical engineers were working on an answer to the same dilemma: how to make more of less.

Commercial Release
In 1961 the first commercially available integrated circuits came from the Fairchild Semiconductor Corporation. All computers then started to be made using chips instead of the individual transistors and their accompanying parts. Texas Instruments first used the chips in Air Force computers and the Minuteman Missile in 1962. They later used the chips to produce the first electronic portable calculators. The original IC had only one transistor, three resistors and one capacitor and was the size of an adult's pinkie finger. Today an IC smaller than a penny can hold 125 million transistors.
Jack Kilby holds patents on over sixty inventions and is also well known as the inventor of the portable calculator (1967). In 1970 he was awarded the National Medal of Science. Robert Noyce, with sixteen patents to his name, founded Intel, the company responsible for the invention of the microprocessor, in 1968. But for both men the invention of the integrated circuit stands historically as one of the most important innovations of mankind. Almost all modern products use chip technology.

3 Dimensions

The intuitive notion that the universe has three dimensions seems to be an irrefutable fact. After all, we can only move up or down, left or right, in or out. But are these three dimensions all we need to describe nature? What if there are more dimensions? Would they necessarily affect us? And if they didn't, how could we possibly know about them?

Some physicists and mathematicians investigating the beginning of the universe think they have some of the answers to these questions. The universe, they argue, has far more than three, four, or five dimensions. They believe it has eleven! But let's step back a moment. How do we know that our universe consists of only three spatial dimensions? Let's take a look at two of these "proofs."

Proof 1: There are five and only five regular polyhedra. A regular polyhedron is defined as a solid figure whose faces are identical polygons - triangles, squares, and pentagons - and which is constructed so that only two faces meet at each edge. If you were to move from one face to another, you would cross over only one edge. Shortcuts through the inside of the polyhedron that could get you from one face to another are forbidden. Long ago, the mathematician Leonhard Euler demonstrated an important relation between the number of faces (F), edges (E), and corners (C) for every regular polyhedron: C - E + F = 2. For example, a cube has 6 faces, 12 edges, and 8 corners while a dodecahedron has 12 faces, 30 edges, and 20 corners. Run these numbers through Euler's equation and the resulting answer is always two, the same as with the remaining three polyhedra. Only five solids satisfy this relationship - no more, no less.
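If you want to check Euler's relation yourself, a short Python sketch over the five regular polyhedra does it in a few lines:

# Corners (C), edges (E) and faces (F) of the five regular polyhedra.
polyhedra = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (corners, edges, faces) in polyhedra.items():
    # Euler's relation: C - E + F = 2 for every regular polyhedron.
    print(f"{name:12s}: {corners} - {edges} + {faces} = {corners - edges + faces}")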

Not content to restrict themselves to only three dimensions, mathematicians have generalized Euler's relationship to higher-dimensional spaces and, as you might expect, they've come up with some interesting results. In a world with four spatial dimensions, for example, we can construct only six regular solids. One of them - the "hypercube" - is a solid figure in 4-D space bounded by eight cubes, just as a cube is bounded by six square faces. What happens if we add yet another dimension to space? Even the most ambitious geometer living in a 5-D world would only be able to assemble three regular solids. This means that two of the regular solids we know of - the icosahedron and the dodecahedron - have no partners in a 5-D universe.


For those of you who successfully mastered visualizing a hypercube, try imagining what an "ultracube" looks like. It's the five-dimensional analog of the cube, but this time it is bounded by one hypercube on each of its 10 faces! In the end, if our familiar world were not three-dimensional, geometers would not have found only five regular polyhedra after 2,500 years of searching. They would have found six (with four spatial dimensions) or perhaps only three (if we lived in a 5-D universe). Instead, we know of only five regular solids, and this suggests that we live in a universe with, at most, three spatial dimensions.

All right, let's suppose our universe actually consists of four spatial dimensions. What happens? Since relativity tells us that we must also consider time as a dimension, we now have a space-time consisting of five dimensions. A consequence of 5-D space-time is that gravity has freedom to act in ways we may not want it to.

Proof 2: To the best available measurements, gravity follows an inverse square law; that is, the gravitational attraction between two objects rapidly diminishes with increasing distance. For example, if we double the distance between two objects, the force of gravity between them becomes 1/4 as strong; if we triple the distance, the force becomes 1/9 as strong, and so on. A five-dimensional theory of gravity introduces additional mathematical terms to specify how gravity behaves. These terms can have a variety of values, including zero. If they were zero, however, this would be the same as saying that gravity requires only three space dimensions and one time dimension to "give it life." The fact that the Voyager spacecraft could cross billions of miles of space over several years and arrive within a few seconds of their predicted times is a beautiful demonstration that we do not need extra spatial dimensions to describe motions in the Sun's gravitational field.
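The inverse-square scaling quoted above is easy to tabulate; here is a tiny Python check (the force at the original separation is set to 1 in arbitrary units):

# Relative strength of gravity as the separation between two objects grows.
for multiple in (1, 2, 3, 4):
    relative_force = 1.0 / multiple**2     # inverse-square law
    print(f"{multiple}x the distance -> {relative_force:.4f} of the original force")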

From the above geometric and physical arguments, we can conclude (not surprisingly) that space is three-dimensional - on scales ranging from that of everyday objects to at least that of the solar system. If this were not the case, then geometers would have found more than five regular polyhedra and gravity would function very differently than it does - Voyager would not have arrived on time. Okay, so we've determined that our physical laws require no more than the three spatial dimensions to describe how the universe works. Or do they? Is there perhaps some other arena in the physical world where multidimensional space would be an asset rather than a liability?

Since the 1920s, physicists have tried numerous approaches to unifying the principal natural interactions: gravity, electromagnetism, and the strong and weak forces in atomic nuclei. Unfortunately, physicists soon realized that general relativity in a four-dimensional space-time does not have enough mathematical "handles" on which to hang the frameworks for the other three forces. Between 1921 and 1927, Theodor Kaluza and Oskar Klein developed the first promising theory combining gravity and electromagnetism. They did this by extending general relativity to five dimensions. For most of us, general relativity is mysterious enough in ordinary four-dimensional space-time. What wonders could lie in store for us with this extended universe?

General relativity in five dimensions gave theoreticians five additional quantities to manipulate beyond the 10 needed to adequately define the gravitational field. Kaluza and Klein noticed that four of the five extra quantities could be identified with the four components needed to define the electromagnetic field. In fact, to the delight of Kaluza and Klein, these four quantities obeyed the same types of equations as those derived by Maxwell in the late 1800s for electromagnetic radiation. Although this was a promising start, the approach never really caught on and was soon buried by the onrush of theoretical work on the quantum theory of electromagnetic force. It was not until work on supergravity theory began in 1975 that Kaluza and Klein's method drew renewed interest. Its time had finally come.

What do theoreticians hope to gain by stretching general relativity beyond the normal four dimensions of space-time? Perhaps by studying general relativity in a higher-dimensional formulation, we can explain some of the constants needed to describe the natural forces. For instance, why is the proton 1836 times more massive than the electron? Why are there only six types of quarks and leptons? Why are neutrinos massless? Maybe such a theory can give us new rules for calculating the masses of fundamental particles and the ways in which they affect one another. These higher-dimensional relativity theories may also tell us something about the numbers and properties of a mysterious new family of particles - the Higgs bosons - whose existence is predicted by various cosmic unification schemes. (See "The Decay of the False Vacuum," ASTRONOMY, November 1983.)

These expectations are not just the pipedreams of physicists - they actually seem to develop as natural consequences of certain types of theories studied over the last few years. In 1979, John Taylor at King's College in London found that some higher-dimensional formalisms can give predictions for the maximum mass of the Higgs bosons (around 76 times that of the proton). As they now stand, unification theories can do no more than predict the existence of these particles - they cannot provide specific details about their physical characteristics. But theoreticians may be able to pin down some of these details by using extended theories of general relativity.

Experimentally, we know of six leptons: the electron, the muon, the tauon, and their three associated neutrinos. The most remarkable prediction of these extended relativity schemes, however, holds that the number of leptons able to exist in a universe is related to the number of dimensions of space-time. In a 6-D space-time, for example, only one lepton - presumably the electron - can exist. In a 10-D space-time, four leptons can exist - still not enough to accommodate the six we observe. In a 12-D space-time, we can account for all six known leptons - but we also acquire two additional leptons that have not yet been detected. Clearly, we would gain much on a fundamental level if we could increase the number of dimensions in our theories just a little bit.

How many additional dimensions do we need to consider in order to account for the elementary particles and forces that we know of today? Apparently we require at least one additional spatial dimension for every distinct "charge" that characterizes how each force couples to matter. For the electromagnetic force, we need two electric charges: positive and negative. For the strong force that binds quarks together to form, among other things, protons and neutrons, we need three "color" charges - red, blue, and green. Finally, we need two "weak" charges to account for the weak nuclear force. If we add a spatial dimension for each of these charges, we end up with a total of seven extra dimensions. The properly extended theory of general relativity we seek is one with an 11-dimensional space-time, at the very least. Think of it - space alone must have at least 10 dimensions to accommodate all the fields known today.

Of course, these additional dimensions don't have to be anything like those we already know about. In the context of modern unified field theory, these extra dimensions are, in a sense, internal to the particles themselves - a "private secret," shared only by particles and the fields that act on them! These dimensions are not physically observable in the same sense as the three spatial dimensions we experience; they stand in relation to the normal three dimensions of space much like space stands in relation to time.

Today there is a veritable renaissance in finding unity among the forces and particles that compose the cosmos, some of it by methods other than those we have discussed, and these new approaches lead to remarkably similar conclusions. It appears that a four-dimensional space-time is simply not complex enough for physics to operate as it does.

We know that particles called bosons mediate the natural forces. We also know that particles called fermions are affected by these forces. Members of the fermion family go by the familiar names of electron, muon, neutrino, and quark; bosons are the less well known graviton, photon, gluon, and intermediate vector bosons. Grand unification theories developed since 1975 now show these particles to be "flavors" of a more abstract family of superparticles - just as the muon is another type of electron. This is an expression of a new kind of cosmic symmetry - dubbed supersymmetry, because it is all-encompassing. Not only does it include the force-carrying bosons, but it also includes the particles on which these forces act. There also exists a corresponding force to help nature maintain supersymmetry during the various interactions. It's called supergravity. Supersymmetry theory introduces two new types of fundamental particles - gravitinos and photinos. The gravitino has the remarkable property of mathematically moderating the strength of various kinds of interactions involving the exchange of gravitons. The photino, cousin of the photon, may help account for the "missing mass" in the universe.

Supersymmetry theory is actually a complex of eight different theories, stacked atop one another like the rungs of a ladder. The higher the rung, the larger is its complement of allowed fermion and boson particle states. The "roomiest" theory of all seems to be SO(8) (pronounced ess-oh-eight), which can hold 99 different kinds of bosons and 64 different kinds of fermions. But SO(8) outdoes its subordinate, SO(7), by only one extra dimension and one additional particle state. Since SO(8) is identical to SO(7) in all its essential features, we'll discuss SO(7) instead. However, we know of far more than the 162 types of particles that SO(7) can accommodate, and many of the predicted types have never been observed (like the massless gravitino). SO(7) requires seven internal dimensions in addition to the four we recognize - time and the three "everyday" spatial dimensions. If SO(7) at all mirrors reality, then our universe must have at least 11 dimensions! Unfortunately, it has been demonstrated by W. Nahm at the European Center for Nuclear Research in Geneva, Switzerland, that supersymmetry theories for space-times with more than 11 dimensions are theoretically impossible. SO(7) evidently has the largest number of spatial dimensions possible, but it still doesn't have enough room to accommodate all known types of particles.

It is unclear where these various avenues of research lead. Perhaps nowhere. There is certainly ample historical precedent for ideas that were later abandoned because they turned out to be conceptual dead-ends. Yet what if they turn out to be correct at some level? Did our universe begin its life as some kind of 11-dimensional "object" which then crystallized into our four- dimensional cosmos?

Although these internal dimensions may not have much to do with the real world at the present time, this may not always have been the case. E. Cremmer and J. Scherk of l'École Normale Supérieure in Paris have shown that just as the universe went through phase transitions in its early history when the forces of nature became distinguishable, the universe may also have gone through a phase transition when its dimensionality changed. Presumably matter has something like four external dimensions (the ones we encounter every day) and something like seven internal dimensions. Fortunately for us, these seven extra dimensions don't reach out into the larger 4-D realm where we live. If they did, a simple walk through the park might become a veritable obstacle course, littered with wormholes in space and who knows what else!

Alan Chodos and Steven Detweiler of Yale University have considered the evolution of a universe that starts out being five-dimensional. They discovered that while the universe eventually does evolve to a state where three of the four spatial dimensions expand to become our world at large, the extra fourth spatial dimension shrinks to a size of 10^-31 centimeter by the present time. The fifth dimension of the universe has all but vanished and is 20 powers of 10 - 100 billion billion times - smaller than the size of a proton. Although the universe appears four-dimensional in space-time, this perception is accidental, due to our large size compared to the scale of the other dimensions. Most of us think of a dimension as extending all the way to infinity, but this isn't the full story. For example, if our universe is really destined to re-collapse in the distant future, the three-dimensional space we know today is actually limited itself - it will eventually possess a maximum, finite size. It just so happens that the physical size of human beings forces us to view these three spatial dimensions as infinitely large.

It is not too hard to reconcile ourselves to the notion that the fifth (or sixth, or eleventh) dimension could be smaller than an atomic nucleus - indeed, we can probably be thankful that this is the case.

Oscillations and Simple Harmonic Motion

The Qualitative Physics Of Oscillation

In the slow motion movie clip at right, the mass glides on an air track. The track is perforated with small holes through which air flows from the inside, where the pressure is above atmospheric. So the mass is supported, like a hovercraft, on a cushion of air, and friction is eliminated. Because the speed is small, air resistance is very small. Consequently, the only non-negligible force in the horizontal direction is that exerted by the two springs. Because there are no vertical displacements, we discuss here only the horizontal displacement.

At the equilibrium position (x = 0 in the graph below the clip), the forces exerted by the two springs are equal in magnitude but opposite in direction, so the total force is zero. To the right of equilibrium, the force acts to accelerate the mass to the left, and vice versa. (The graph is rotated 90° from its normal orientation so that we can compare it with the motion.)

Let's begin (as do the graph and the animation) with the mass to the right of equilibrium and at rest. Let's see what happens when I release it:

First, the spring force acts to the left and the mass is accelerated towards x = 0.
When it reaches x = 0, it has a velocity and therefore a momentum to the left. (Near equilibrium, the forces are small, so there is a region near x = 0 over which the velocity changes little: the x(t) graph is almost straight.)
When it arrives at x = 0, because of its momentum to the left, it overshoots, i.e. it continues travelling to the left. While it is to the left of x = 0, however, the spring force acts to the right. This force gradually slows the mass until it stops. The point at which it stops is, of course, its maximum displacement to the left.
Once it is stopped on the left hand side of equilibrium, the spring force accelerates it to the right, so the velocity and momentum to the right increase.
When it reaches equilibrium again, it now has its maximum rightwards momentum.
It overshoots and continues to the right. The spring force now acts to the left, so it decelerates until it stops at its maximum rightwards displacement.
Because no non-negligible nonconservative forces act, mechanical energy is conserved. Consequently, the system returns to its initial condition. The cycle then repeats exactly, so the motion is periodic.

Quantitative Analysis
For linear springs, this leads to Simple Harmonic Motion. The force F exerted by the two springs is F = − kx, where k is the combined spring constant for the two springs (see Young's modulus, Hooke's law and material properties). In this case, k = k1 + k2, where k1 and k2 are the constants of the two springs. The analysis that follows here is fairly brief. However, we do a quantitative analysis on the multimedia chapter Oscillations and also solve this problem as an example on Differential Equations. There is also a page on the Kinematics of Simple Harmonic Motion.
Newton's second law states that the acceleration d²x/dt² of the mass m subject to total force F satisfies F = m·d²x/dt², which gives the differential equation

m·d²x/dt² = −kx, or
d²x/dt² = −ω²x, where ω² = k/m.
Solving this particular equation is described in detail on the Differential Equations page. However, we can verify by substitution that the solution is
x = A sin (ωt + φ),

where A is the amplitude, and the phase constant φ is determined by the initial conditions. We discuss these below.
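To back up that claim by a route other than substitution, here is a minimal Python sketch (with made-up values k = 4 N/m, m = 1 kg, A = 0.1 m, φ = π/2, not the air-track parameters) that steps m·d²x/dt² = −kx forward numerically and compares the result with A sin(ωt + φ):

import math

# Hypothetical values for illustration (not the air-track parameters).
k, m = 4.0, 1.0                 # spring constant (N/m) and mass (kg)
A, phi = 0.1, math.pi / 2       # amplitude (m) and phase constant
omega = math.sqrt(k / m)

# Initial conditions consistent with x = A sin(omega*t + phi)
x = A * math.sin(phi)
v = A * omega * math.cos(phi)

dt, t = 1e-4, 0.0
while t < 5.0:
    a = -k * x / m              # Newton's second law: a = F/m = -kx/m
    v += a * dt                 # semi-implicit Euler step
    x += v * dt
    t += dt

analytic = A * math.sin(omega * t + phi)
print(f"numerical x(5 s) = {x:.5f} m, analytic x(5 s) = {analytic:.5f} m")

The two values agree closely, which is what we expect if the sinusoidal solution really does satisfy the equation of motion.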

Initial Conditions
In the first movie shown at right, the mass is released from rest, so the displacement is maximal (x = A) at t = 0, and the required phase constant is φ = π/2. (Indeed, for this particular case, we could say that the curve is a cos function rather than a sine.)

x1 = A sin (ωt + π/2) = A cos (ωt )



In the second movie shown at right, however, the mass is given an impulsive start, so the initial condition approximates maximum velocity and x = 0 at t = 0. This requires φ = 0, so

x2 = A sin (ωt + 0) = A sin (ωt)

Here we start with an initial velocity, which is

v0 = (dx2/dt) at t = 0 = Aω cos(0) = Aω
Note that the initial condition determines both φ and A.
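For a given initial position x0 and initial velocity v0, A and φ follow from x(0) = A sin φ and v(0) = Aω cos φ. Here is a minimal Python sketch with arbitrary example numbers (not the values used in the clips):

import math

def amplitude_and_phase(x0, v0, omega):
    # x(0) = A sin(phi), v(0) = A*omega*cos(phi)
    A = math.sqrt(x0**2 + (v0 / omega)**2)
    phi = math.atan2(x0 * omega, v0)
    return A, phi

omega = 2.0                                  # rad/s, arbitrary
print(amplitude_and_phase(0.1, 0.0, omega))  # released from rest: phi = pi/2
print(amplitude_and_phase(0.0, 0.2, omega))  # impulsive start at x = 0: phi = 0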
In both these clips, a rotating line (an animated phasor diagram) is used to show that Simple Harmonic Motion is the projection onto one dimension of circular motion. This is explained in detail in the Kinematics of Simple Harmonic Motion in Physclips.

Phasors are commonly used to facilitate calculations in AC circuits.

Frequency f And Angular Frequency ω
We saw above that x = A sin (ωt + φ), where ω² = k/m. The cyclic frequency is f = 1/T, where T is the period. The sine function goes through one complete cycle when its argument increases by 2π, so we require that (ω(t+T) + φ) − (ωt + φ) = 2π, so ωT = 2π, so

ω = 2π/T = 2πf = √(k/m).
This parameter is determined by the system: the particular mass and spring used. For a linear system, the frequency is independent of amplitude (see below, however, for a nonlinear system).

Compare the oscillations shown in the two clips at right. The first uses one air track glider and the second uses two similar gliders, so the mass is doubled. The period is increased by about 40%, i.e. by a factor of √2, so the frequency is decreased by the same factor.
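The roughly 40% change follows directly from ω = √(k/m); a quick Python check with arbitrary values of k and m:

import math

k = 4.0                                     # arbitrary spring constant, N/m
for m in (1.0, 2.0):                        # single glider, then doubled mass
    T = 2 * math.pi * math.sqrt(m / k)      # period T = 2*pi*sqrt(m/k)
    f = 1 / T
    print(f"m = {m} kg: T = {T:.3f} s, f = {f:.3f} Hz")
# Doubling m multiplies T by sqrt(2), about 1.41 (roughly 40% longer),
# and divides the frequency by the same factor.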




Though it is not so easy to see in the video, at right we have used stiffer springs with a higher value of k. Here the period is shorter and therefore the frequency higher than in all the previous examples.