Thursday, September 1, 2011
Solar or photovoltaic cells represent one of the best possible technologies for providing an absolutely clean and virtually inexhaustible source of energy to power our civilization. However, for this dream to be realized, solar cells need to be made from inexpensive elements using low-cost, less energy-intensive processing chemistry, and they need to efficiently and cost-competitively convert sunlight into electricity. A team of researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory has now demonstrated two out of three of these requirements with a promising start on the third.
Peidong Yang, a chemist with Berkeley Lab's Materials Sciences Division, led the development of a solution-based technique for fabricating core/shell nanowire solar cells using the semiconductors cadmium sulfide for the core and copper sulfide for the shell. These inexpensive and easy-to-make nanowire solar cells boasted open-circuit voltage and fill factor values superior to conventional planar solar cells. Together, the open-circuit voltage and fill factor determine the maximum energy that a solar cell can produce. In addition, the new nanowires also demonstrated an energy conversion efficiency of 5.4 percent, which is comparable to planar solar cells.
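To see how open-circuit voltage and fill factor determine a cell's maximum output, here is a minimal numerical sketch. The short-circuit current, voltage, fill factor, and cell area below are illustrative assumptions, not figures from the paper; they are chosen only to show how a 5.4 percent efficiency could arise.

```python
# Maximum power point of a solar cell: P_max = Voc * Isc * FF.
# All input values below are illustrative assumptions, not data from the paper.

def max_power(v_oc, i_sc, fill_factor):
    """Maximum electrical power (W) a cell can deliver."""
    return v_oc * i_sc * fill_factor

def efficiency(p_max, irradiance, area):
    """Fraction of incident solar power converted to electricity."""
    return p_max / (irradiance * area)

v_oc = 0.6       # open-circuit voltage, volts (assumed)
i_sc = 0.012     # short-circuit current, amps (assumed)
ff = 0.75        # fill factor, dimensionless (assumed)

p = max_power(v_oc, i_sc, ff)       # watts at the maximum power point
eta = efficiency(p, 1000.0, 1e-4)   # 1000 W/m^2 sunlight on a 1 cm^2 cell
print(f"P_max = {p * 1000:.2f} mW, efficiency = {eta:.1%}")
```

With these assumed inputs the sketch yields 5.4 mW and 5.4 percent, matching the efficiency scale reported above; raising either the open-circuit voltage or the fill factor raises the output proportionally.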
"This is the first time a solution based cation-exchange chemistry technique has been used for the production of high quality single-crystalline cadmium sulfide/copper sulfide core/shell nanowires," Yang says. "Our achievement, together with the increased light absorption we have previously demonstrated in nanowire arrays through light trapping, indicates that core/shell nanowires are truly promising for future solar cell technology."
Yang, who holds a joint appointment with the University of California (UC) Berkeley, is the corresponding author of a paper reporting this research that appears in the journal Nature Nanotechnology. The paper is titled "Solution-processed core–shell nanowires for efficient photovoltaic cells." Co-authoring this paper with Yang were Jinyao Tang, Ziyang Huo, Sarah Brittman and Hanwei Gao.
Typical solar cells today are made from ultra-pure single-crystal silicon wafers that must be about 100 micrometers thick to absorb enough sunlight, which is a lot of a very expensive material. Furthermore, the high level of crystal purification required makes the fabrication of even the simplest silicon-based planar solar cell a complex, energy-intensive and costly process.
A highly promising alternative would be semiconductor nanowires – one-dimensional strips of materials whose width measures only one-thousandth that of a human hair but whose length may stretch up to the millimeter scale. Solar cells made from nanowires offer a number of advantages over conventional planar solar cells, including better charge separation and collection capabilities, plus they can be made from Earth abundant materials rather than highly processed silicon. To date, however, the lower efficiencies of nanowire-based solar cells have outweighed their benefits.
"Nanowire solar cells in the past have demonstrated fill factors and open-circuit voltages far inferior to those of their planar counterparts," Yang says. "Possible reasons for this poor performance include surface recombination and poor control over the quality of the p–n junctions when high-temperature doping processes are used."
At the heart of all solar cells are two separate layers of material, one with an abundance of electrons that function as a negative pole, and one with an abundance of electron holes (positively-charged energy spaces) that function as a positive pole. When photons from the sun are absorbed, their energy is used to create electron-hole pairs, which are then separated at the p-n junction – the interface between the two layers - and collected as electricity.
About a year ago, working with silicon, Yang and members of his research group developed a relatively inexpensive way to replace the planar p-n junctions of conventional solar cells with a radial p-n junction, in which a layer of n-type silicon formed a shell around a p-type silicon nanowire core. This geometry effectively turned each individual nanowire into a photovoltaic cell and greatly improved the light-trapping capabilities of silicon-based photovoltaic thin films.
Now they have applied this strategy to the fabrication of core/shell nanowires using cadmium sulfide and copper sulfide, but this time using solution chemistry. These core/shell nanowires were prepared using a solution-based cation (positive ion) exchange reaction that was originally developed by chemist Paul Alivisatos and his research group to make quantum dots and nanorods. Alivisatos is now the director of Berkeley Lab, and UC Berkeley's Larry and Diane Bock Professor of Nanotechnology.
"The initial cadmium sulfide nanowires were synthesized by physical vapor transport using a vapor–liquid–solid (VLS) mechanism rather than wet chemistry, which gave us better quality material and greater physical length, but certainly they can also be made using a solution process," Yang says. "The as-grown single-crystalline cadmium sulfide nanowires have diameters of between 100 and 400 nanometers and lengths up to 50 millimeters."
The cadmium sulfide nanowires were then dipped into a solution of copper chloride at a temperature of 50 degrees Celsius and kept there for 5 to 10 seconds. The cation exchange reaction converted the surface layer of the cadmium sulfide into a copper sulfide shell.
"The solution-based cation exchange reaction provides us with an easy, low-cost method to prepare high-quality hetero-epitaxial nanomaterials," Yang says. "Furthermore, it circumvents the difficulties of high-temperature doping and deposition for typical vapor phase production methods, which suggests much lower fabrication costs and better reproducibility. All we really need are beakers and flasks for this solution-based process. There's none of the high fabrication costs associated with gas-phase epitaxial chemical vapor deposition and molecular beam epitaxy, the techniques most used today to fabricate semiconductor nanowires."
Yang and his colleagues believe they can improve the energy conversion efficiency of their nanowire solar cells by increasing the amount of copper sulfide shell material. For their technology to be commercially viable, they need to reach an energy conversion efficiency of at least 10 percent.
Provided by Lawrence Berkeley National Laboratory [August 31, 2011]
University of Florida researchers may help resolve the public debate over the future light source of choice: Edison's incandescent bulb or the more energy-efficient compact fluorescent lamp. It could be neither.
Instead, future lighting needs may be supplied by a new breed of light emitting diode, or LED, that conjures light from the invisible world of quantum dots. According to an article in the current online issue of the journal Nature Photonics, moving a QD LED from the lab to market is a step closer to reality thanks to a new manufacturing process pioneered by two research teams in UF's department of materials science and engineering.
"Our work paves the way to manufacture efficient and stable quantum dot-based LEDs with really low cost, which is very important if we want to see wide-spread commercial use of these LEDs in large-area, full-color flat-panel displays or as solid-state lighting sources to replace the existing incandescent and fluorescent lights," said Jiangeng Xue, the research leader and an associate professor of materials science and engineering. "Manufacturing costs will be significantly reduced for these solution-processed devices, compared to the conventional way of making semiconductor LED devices."
A significant part of the research carried out by Xue's team focused on improving existing organic LEDs. These semiconductors are multilayered structures made up of paper thin organic materials, such as polymer plastics, used to light up display systems in computer monitors, television screens, as well as smaller devices such as MP3 players, mobile phones, watches, and other handheld electronic devices. OLEDs are also becoming more popular with manufacturers because they use less power and generate crisper, brighter images than those produced by conventional LCDs (liquid crystal displays). Ultra-thin OLED panels are also used as replacements for traditional light bulbs and may be the next big thing in 3-D imaging.
Complementing Xue's team is another headed by Paul Holloway, distinguished professor of materials science and engineering at UF, which delved into quantum dots, or QDs. These nano-particles are tiny crystals just a few nanometers (billionths of a meter) wide, comprised of a combination of sulfur, zinc, selenium and cadmium atoms. When excited by electricity, QDs emit an array of colored light. The individual colors vary depending on the size of the dots. Tuning, or "adjusting," the colors is achieved by controlling the size of the QDs during the synthetic process.
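The size-dependent color tuning described above can be sketched with the Brus effective-mass model, in which quantum confinement widens the band gap as the dot shrinks. The block below uses textbook bulk parameters for CdSe (a common QD material, chosen here as an assumption since the article names a Zn/Cd/S/Se mix) and is only a rough approximation of real dots.

```python
import math

# Brus effective-mass sketch of quantum-dot emission vs. size.
# CdSe bulk parameters are textbook values (an assumption); real dots deviate.
H = 6.626e-34         # Planck constant, J*s
E_CHARGE = 1.602e-19  # elementary charge, C
M0 = 9.109e-31        # electron rest mass, kg
EPS0 = 8.854e-12      # vacuum permittivity, F/m
C = 2.998e8           # speed of light, m/s

E_GAP = 1.74 * E_CHARGE           # CdSe bulk band gap, J
M_E, M_H = 0.13 * M0, 0.45 * M0   # effective electron/hole masses
EPS_R = 10.6                      # relative permittivity of CdSe

def emission_wavelength_nm(radius_m):
    """Approximate emission wavelength of a spherical dot of given radius."""
    confinement = (H**2 / (8 * radius_m**2)) * (1 / M_E + 1 / M_H)
    coulomb = 1.8 * E_CHARGE**2 / (4 * math.pi * EPS0 * EPS_R * radius_m)
    energy = E_GAP + confinement - coulomb
    return H * C / energy * 1e9

for r_nm in (1.5, 2.0, 3.0, 4.0):
    print(f"radius {r_nm} nm -> ~{emission_wavelength_nm(r_nm * 1e-9):.0f} nm")
```

The trend, not the exact numbers, is the point: smaller dots emit bluer light, larger dots redder light, which is exactly the "tuning by size" the researchers exploit.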
By integrating the work of both teams, researchers created a high-performance hybrid LED, comprised of both organic and QD-based layers. Until recently, however, engineers at UF and elsewhere have been vexed by a manufacturing problem that hindered commercial development. An industrial process known as vacuum deposition is the common way to put the necessary organic molecules in place to carry electricity into the QDs. However, a different manufacturing process called spin-coating is used to create a very thin layer of QDs. Having to use two separate processes slows down production and drives up manufacturing costs.
According to the Nature Photonics article, UF researchers overcame this obstacle with a patented device structure that allows for depositing all the particles and molecules needed onto the LED entirely with spin-coating. Such a device structure also yields significantly improved device efficiency and lifetime compared to previously reported QD-based LED devices.
Spin-coating may not be the final manufacturing solution, however.
"In terms of actual product manufacturing, there are many other high through-put, continuous "roll-to-roll" printing or coating processes that we could use to fabricate large area displays or lighting devices," Xue said. "That will remain as a future research and development topic for the university and a start-up company, NanoPhotonica, that has licensed the technology and is in the midst of a technology development program to capitalize on the manufacturing breakthrough."
Provided by University of Florida [August 31, 2011]
Wednesday, April 27, 2011
Most powerful millimeter-scale energy harvester generates electricity from vibrations
A new energy harvester developed by University of Michigan researchers can harness energy from vibrations and convert it to electricity with five to ten times greater efficiency and power than other devices in its class.
"In a tiny amount of space, we've been able to make a device that generates more power for a given input than anything else out there on the market," said Khalil Najafi, one of the system's developers and chair of Electrical and Computer Engineering.
This new vibration energy harvester is specifically designed to turn the cyclic motions of factory machines into energy to power wireless sensor networks. These sensor networks monitor machines' performance and let operators know about any malfunctions.
The sensors that do this today get their power from a plug or a battery. They're considered "wireless" because they can transmit information without wires. Being tethered to a power source drastically increases their installation and maintenance costs, said Erkan Aktakka, one of the system's developers and a doctoral student in Electrical and Computer Engineering.
Long-lasting power is the greatest hurdle to large-scale use of pervasive information-gathering sensor networks, the researchers say.
"If one were to look at the ongoing life-cycle expenses of operating a wireless sensor, up to 80 percent of the total cost consists solely of installing and maintaining power wires and continuously monitoring, testing and replacing finite-life batteries," Aktakka said. "Scavenging the energy already present in the environment is an effective solution."
The researchers have built a complete system that integrates a high-quality energy-harvesting piezoelectric material with the circuitry that makes the power accessible. (Piezoelectric materials allow a charge to build up in them in response to mechanical strain, which in this case would be induced by the machines' vibrations.)
"There are lots of energy sources surrounding us. Lightning has a lot of electricity and power, but it's not useful," Najafi said. "To be able to use the energy you harvest you have to store it in a capacitor or battery. We've developed an integrated system with an ultracapacitor that does not need to start out charged."
The active part of the harvester that enables the energy conversion occupies just 27 cubic millimeters. The packaged system, which includes the power management circuitry, is about the size of a penny. The system has a large bandwidth of 14 Hertz and operates at a vibration frequency of 155 Hertz, similar to the vibration you'd feel if you put your hand on top of a running microwave oven.
"Most of the previous vibration harvesters operated either at very high frequencies or with very narrow bandwidths, and this limited their practical applications outside of a laboratory environment," Aktakka said.
The new harvester can generate more than 200 microwatts of power when it is exposed to 1.5g vibration amplitude. (1g is the acceleration objects experience due to Earth's gravity.) The harvested energy is processed by integrated circuitry to charge an ultracapacitor to 1.85 volts.
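The reported figures invite a back-of-envelope check: at 200 microwatts, how long would it take to charge a small ultracapacitor to 1.85 volts? The capacitance and conversion efficiency below are assumptions for illustration; the article states only the power output and the target voltage.

```python
# Back-of-envelope: time to charge an ultracapacitor with harvested power.
# Capacitance and conversion efficiency are assumed; the article reports
# only the 200 uW output and the 1.85 V target.
P_HARVESTED = 200e-6  # W, from the article (at 1.5g vibration)
V_TARGET = 1.85       # V, from the article
CAPACITANCE = 0.01    # F, assumed 10 mF ultracapacitor
EFFICIENCY = 0.7      # assumed power-conversion efficiency

energy_needed = 0.5 * CAPACITANCE * V_TARGET**2          # E = 1/2 * C * V^2
charge_time = energy_needed / (P_HARVESTED * EFFICIENCY)
print(f"stored energy: {energy_needed * 1000:.1f} mJ")
print(f"charge time from empty: ~{charge_time:.0f} s")
```

Under these assumptions the capacitor charges in about two minutes, plausible for a sensor node that wakes, measures, and transmits only intermittently.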
In theory, these devices could be left in place for 10 or 20 years without regular maintenance. "They have a limitless shelf time, since they do not require a pre-charged battery or an external power source," Aktakka said.
A novel silicon micromachining technique allows the engineers to fabricate the harvesters in bulk with a high-quality piezoelectric material, unlike other competing devices.
The market for power sources for wireless sensor networks in industrial settings is expected to reach $450 million by 2015, Aktakka said.
These new devices could have applications in medicine and the auto industry too. They could possibly be used to power medical implants in people or heat sensors on vehicle motors, Najafi said.
The researchers will present this work next at the 16th International Conference on Solid-State Sensors, Actuators, and Microsystems (TRANSDUCERS 2011) in Beijing in June.
Provided by University of Michigan
Thursday, March 24, 2011
Syracuse University chemist develops technique to use light to predict molecular crystal structures
March 23rd, 2011
A Syracuse University chemist has developed a way to use very low frequency light waves to study the weak forces (London dispersion forces) that hold molecules together in a crystal. This fundamental research could be applied to solve critical problems in drug research, manufacturing and quality control.
The research by Timothy Korter, associate professor of chemistry in SU's College of Arts and Sciences, was the cover article of the March 14 issue of Physical Chemistry Chemical Physics. The journal, published by the Royal Society of Chemistry, is one of the most prestigious in the field. A National Science Foundation Early Career Development (CAREER) Award funds Korter's research.
"When developing a drug, it is important that we uncover all of the possible ways the molecules can pack together to form a crystal," Korter says. "Changes in the crystal structure can change the way the drug is absorbed and accessed by the body."
One industry example is that of a drug distributed in the form of a gel capsule that crystallized into a solid when left on the shelf for an extended period of time, Korter explains. The medication inside the capsule changed to a form that could not dissolve in the human body, rendering it useless. The drug was removed from shelves. This example shows that it is not always possible for drug companies to identify all the variations of a drug's crystal structure through traditional experimentation, which is time consuming and expensive.
"The question is," Korter says, "can we leverage a better understanding of London and other weak intermolecular forces to predict these changes in crystal structure?"
Korter's lab is one of only a handful of university-based research labs in the world exploring the potential of THz radiation for chemical and pharmaceutical applications. THz light waves exist in the region between infrared radiation and microwaves and offer the unique advantages of being non-harmful to people and able to safely pass through many kinds of materials. THz can also be used to identify the chemical signatures of a wide range of substances. Korter has used THz to identify the chemical signatures of molecules ranging from improvised explosives and drug components to the building blocks of DNA.
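The terahertz band's position between infrared and microwaves, and why it is harmless to people, can be checked with two one-line conversions. The 0.1–10 THz span used below is the commonly quoted range, an assumption rather than a figure from this article.

```python
# Wavelength and photon energy across the terahertz band.
# The 0.1-10 THz span is the commonly quoted range (an assumption here).
C = 2.998e8           # speed of light, m/s
H = 6.626e-34         # Planck constant, J*s
E_CHARGE = 1.602e-19  # joules per electron-volt

for f_thz in (0.1, 1.0, 10.0):
    f = f_thz * 1e12                      # Hz
    wavelength_mm = C / f * 1000          # millimeters
    photon_mev = H * f / E_CHARGE * 1000  # milli-electron-volts
    print(f"{f_thz:5.1f} THz: wavelength {wavelength_mm:.2f} mm, "
          f"photon energy {photon_mev:.2f} meV")
```

A 1 THz photon carries only about 4 meV, roughly a thousand times less than the several electron-volts needed to ionize molecules, which is why THz imaging is considered safe for people.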
Korter's new research combines THz experiments with new computational models that accurately account for the effects of the London dispersion forces to predict crystal structures of various substances. London forces are one of several types of intermolecular forces that cause molecules to stick together and form solids. Environmental changes (temperature, humidity, light) impact the forces in ways that can cause the crystal structure to change. Korter's research team compares the computer models with the THz experiments and uses the results to refine and improve the theoretical models.
"We have demonstrated how to use THz to directly visualize these chemical interactions," Korter says. "The ultimate goal is to use these THz signatures to develop theoretical models that take into account the role of these weak forces to predict the crystal structures of pharmaceuticals before they are identified through experimentation."
Source: Syracuse University
Saturday, March 5, 2011
Invisibility cloaks may be just around the corner
In 1897, H.G. Wells created a fictional scientist who became invisible by changing his refractive index to that of air, so that his body could not absorb or reflect light. More recently, Harry Potter disappeared from sight after wrapping himself in a cloak spun from the pelts of magical herbivores.
Countless other fictional characters in books and films throughout history have discovered or devised ways to become invisible, a theme that long has been a staple of science fiction and a source of endless fascination for humans. Who among us has never imagined the possibilities? But, of course, it's not for real.
Or is it?
While no one yet has the power to put on a garment and disappear, Elena Semouchkina, an associate professor of electrical and computer engineering at Michigan Technological University, has found ways to use magnetic resonance to capture rays of visible light and route them around objects, rendering those objects invisible to the human eye. Her work is based on transformation optics, an approach developed and applied to invisibility problems by the scientists John B. Pendry and Ulf Leonhardt in 2006.
"Imagine that you look at the object, which is placed in front of a light source," she explains.
"The object would be invisible to your eye if the light rays are sent around the object to avoid scattering, and are accelerated along these curved paths to reach your eye indistinguishable from the direct straight beams exiting the source when the object is absent."
At its simplest, the beams of light flow around the object and then meet again on the other side so that someone looking directly at the object would not be able to see it--but only what's on the other side.
"You would see the light source directly through the object," said Semouchkina. "This effect could be achieved if we surround the object by a shell with a specific distribution of such material parameters as permittivity and permeability."
She and her collaborators at the Pennsylvania State University, where she is also an adjunct professor, designed a nonmetallic "invisibility cloak" that uses concentric arrays of identical glass resonators made of chalcogenide glass, a type of dielectric material--that is, one that does not conduct electricity.
In computer simulations, the cloak made objects hit by infrared waves--approximately one micron, or one-millionth of a meter long--disappear from view.
The potential practical applications of the work could be dramatic, for example, in the military, such as "making objects invisible to radar," she said, as well as in intelligence operations "to conceal people or objects."
Furthermore, "shielding objects from electromagnetic irradiation is also very important," she said, adding, "for sure, the gaming industry could use it in new types of toys."
The multi-resonator structures comprising Semouchkina's invisibility cloak belong to the class of "metamaterials," artificial materials with properties that do not exist in nature, since they can refract light in unusual ways. In particular, the "spokes" of tiny glass resonators accelerate light waves around the object, making it invisible.
Until recently, there were no materials available with the relative permeability values between zero and one, which are necessary for the invisibility cloak to bend and accelerate light beams, she said. However, metamaterials, which were predicted more than 40 years ago by the Russian scientist Victor Veselago, and first implemented in 2000 by Pendry from Imperial College, London, in collaboration with David R. Smith from Duke University, now make it possible, she said.
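The "between zero and one" requirement follows directly from the transformation optics prescription for an ideal cylindrical cloak (the Pendry/Schurig/Smith design). Here is a sketch of those standard radial profiles; the inner and outer radii are arbitrary choices for illustration, not parameters of Semouchkina's device.

```python
# Material parameters for an ideal cylindrical invisibility cloak
# (transformation-optics prescription). Radii A and B are arbitrary choices.
A, B = 1.0, 2.0  # inner and outer cloak radii (arbitrary units)

def cloak_parameters(r):
    """Relative permittivity/permeability tensor components at radius r,
    for A < r <= B (inside the cloaking shell)."""
    eps_r = (r - A) / r                      # radial: stays between 0 and 1
    eps_theta = r / (r - A)                  # azimuthal: diverges at r = A
    eps_z = (B / (B - A))**2 * (r - A) / r   # axial component
    return eps_r, eps_theta, eps_z

for r in (1.01, 1.25, 1.5, 2.0):
    er, et, ez = cloak_parameters(r)
    print(f"r = {r:.2f}: eps_r = {er:.3f}, eps_theta = {et:.2f}, eps_z = {ez:.3f}")
```

Note that the radial component runs between zero and one everywhere in the shell, which is exactly the regime no natural material provides and the reason arrays of resonators are needed.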
Metamaterials use lattices of resonators, instead of atoms or molecules of natural materials, and provide for a broad range of relative permittivity and permeability including zero and negative values in the vicinity of the resonance frequency, she said. Metamaterials were listed as one of the top three physics discoveries of the decade by the American Physical Society.
"Metamaterials were initially made of metallic split ring resonators and wire arrays that limited both their isotropy (uniformity in all directions) and frequency range," Semouchkina said. "Depending on the size of split ring resonators, they could operate basically at microwaves and millimeter (mm) waves."
In 2004, her research group proposed replacing metal resonators with dielectric resonators. "Although it seemed strange to control magnetic properties of a metamaterial by using dielectrics, we have shown that arrays of dielectric resonators can provide for negative refraction and other unique properties of metamaterials," she said. "Low loss dielectric resonators promise to extend applications of metamaterials to the optical range, and we have demonstrated this opportunity by designing an infrared cloak."
Semouchkina and colleagues recently reported on their research in the journal Applied Physics Letters, published by the American Institute of Physics. Her co-authors were Douglas Werner and Carlo Pantano of Penn State and George Semouchkin, who teaches at Michigan Tech and has an adjunct position with Penn State.
The National Science Foundation (NSF) is funding her research on dielectric metamaterials and the team's applications with a $318,520 award, but she plans to apply for an additional grant to conduct specific studies into invisibility cloak structures.
Semouchkina, who received her master's degree in electrical engineering and her doctorate in physics and mathematics from Tomsk State University in her native Russia, has lived in the United States for 13 years, and has been a U.S. citizen since 2005. She also earned her second doctorate in materials in 2001 from Penn State.
She and her team now are testing an all-dielectric invisibility cloak rescaled to work at microwave frequencies, performing experiments in Michigan Tech's anechoic chamber, a cave-like compartment in an electrical energy resources center lab, lined with highly absorbent charcoal-gray foam cones.
There, "horn" antennas transmit and receive microwaves with wavelengths up to several centimeters, that is, more than 10,000 times longer than in the infrared range. They are cloaking metal cylinders two to three inches in diameter and three to four inches high with a shell comprised of mm-sized ceramic resonators, she said.
"We want to move experiments to higher frequencies and smaller wavelengths," she said, adding: "The most exciting applications will be at the frequencies of visible light."
Provided by National Science Foundation
Tuesday, March 1, 2011
WHAT IS THE STATUS OF MALAYSIA'S ENERGY SECURITY?
The energy industry faces challenges and needs solutions for protecting critical assets, including oil and gas infrastructure, transmission grids, power plants, storage, pipelines, and all other strategic industry assets. Of special concern are the new threat of cyber-terrorism and the protection of control systems.
Recent terrorist activities like the China hacker attack on global oil and gas companies - as reported by security company McAfee, have raised several critical questions:
- How can Malaysia defend itself against future attacks on critical infrastructure such as energy systems?
- Are energy supplies vulnerable to attack, and if so, how and where?
- How can energy generating and storage facilities be made safer?
- How can we protect transportation systems and transmission lines?
- How are government and industry leaders working together to develop contingency plans to protect the public?
- What policies would enhance Malaysia's energy security?
- What are the roles of industry and government?
Energy security is a complex, multi-faceted issue. In its most fundamental sense, energy security is assured when a nation can deliver energy economically, reliably, in an environmentally sound and safe manner, and in quantities sufficient to support its economic and defense needs. To do this requires policies that support expansion of all elements of the energy supply and delivery infrastructure, with sufficient storage and generating reserves, diversity, and redundancy to meet the demands of economic growth.
The threats facing the nation's critical energy infrastructure continue to evolve and present new challenges. The intricate nature of the nation's electrical grid is becoming readily apparent with rolling blackouts and the potential for further disruptions. The interdependencies of the oil, natural gas, and electric infrastructures are increasingly complex and not easily understood. The impact of a major terrorist attack directed against this fragile and interdependent infrastructure could have drastic consequences.
Conflict over resources stretches far back in human history, and energy infrastructures have long been subject to planned attacks. For instance, the New World Liberation Front bombed assets of the Pacific Gas & Electric Company over 10 times in 1975 alone. Members of the Ku Klux Klan and San Joaquin Militia were convicted of conspiring or trying to attack energy infrastructure. Organized paramilitaries have had significant impacts in some countries. For example, the Farabundo-Marti National Liberation Front interrupted service in up to 90% of El Salvador at a time and created manuals for attacking power systems.
So what is the current state of the industry, and is a serious effort being made to provide insight into all aspects of infrastructure and asset protection and recovery?
Adapted from EnergyBusinessReports
Tuesday, February 22, 2011
New plastics can conduct electricity
February 22, 2011
Plastics usually conduct electricity so poorly that they are used to insulate electric cables. But by placing a thin film of metal onto a plastic sheet and mixing it into the polymer surface with an ion beam, Australian researchers have shown that plastics can be made into cheap, strong, flexible and conductive films.
The research has been published in the journal ChemPhysChem by a team led by Professor Paul Meredith and Associate Professor Ben Powell, both at the University of Queensland, and Associate Professor Adam Micolich of the UNSW School of Physics. This latest discovery reports experiments by former UQ Ph.D. student, Dr Andrew Stephenson.
Ion beam techniques are widely used in the microelectronics industry to tailor the conductivity of semiconductors such as silicon, but attempts to adapt this process to plastic films have been made since the 1980s with only limited success – until now.
"What the team has been able to do here is use an ion beam to tune the properties of a plastic film so that it conducts electricity like the metals used in the electrical wires themselves, and even to act as a superconductor and pass electric current without resistance if cooled to low enough temperature," says Professor Meredith.
To demonstrate a potential application of this new material, the team produced electrical resistance thermometers that meet industrial standards. Tested against an industry-standard platinum resistance thermometer, the new device showed comparable or even superior accuracy.
"This material is so interesting because we can take all the desirable aspects of polymers - such as mechanical flexibility, robustness and low cost - and into the mix add good electrical conductivity, something not normally associated with plastics," says Professor Micolich. "It opens new avenues to making plastic electronics."
Andrew Stephenson says the most exciting part about the discovery is how precisely the film’s ability to conduct or resist the flow of electrical current can be tuned. It opens up a very broad potential for useful applications.
"In fact, we can vary the electrical resistivity over 10 orders of magnitude – put simply, that means we have ten billion options to adjust the recipe when we're making the plastic film. In theory, we can make plastics that conduct no electricity at all or as well as metals do – and everything in between,” Dr Stephenson says.
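To get a feel for what ten orders of magnitude means in practice, here is a sketch of the resistance of a hypothetical thin film as its resistivity sweeps that range. The film geometry is an assumption for illustration, not a value from the paper.

```python
# Resistance of a thin film across ten orders of magnitude of resistivity:
# R = rho * L / (W * t). Film geometry below is an assumed illustration.
LENGTH = 0.01       # m, distance between contacts (1 cm)
WIDTH = 0.01        # m
THICKNESS = 100e-9  # m, a 100 nm film

for exponent in range(-6, 5, 2):  # rho from 1e-6 to 1e4 ohm*m
    rho = 10.0 ** exponent
    resistance = rho * LENGTH / (WIDTH * THICKNESS)
    print(f"rho = 1e{exponent:+d} ohm*m -> R = {resistance:.3g} ohm")
```

The same square of film goes from an ordinary conductor (tens of ohms) to a thorough insulator (over a hundred gigaohms) purely through the resistivity term, which is the knob the ion beam dose turns.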
These new materials can be easily produced with equipment commonly used in the microelectronics industry and are vastly more tolerant of exposure to oxygen compared to standard semiconducting polymers.
Combined, these advantages may give ion beam processed polymer films a bright future in the on-going development of soft materials for plastic electronics applications – a fusion between current and next generation technology, the researchers say.
Provided by University of New South Wales
Sunday, February 20, 2011
What is a monetary unit, in reality and how does it relate to energy?
Here is the kind of analysis (greatly simplified) that might help us understand this better and lead us to an answer. Consider the production of an electronic gadget, the economist's proverbial widget. Let's just see where the cost elements come from. This hypothetical is highly simplified, but it isn't too far off the mark. It's the concept that counts.
Cost of production (consumer electronic widget):
Labor $5,000.00
Materials $ 500.00
Overhead (allocated) $ 100.00
Energy $ 50.00
Transportation $ 10.00
Total Costs $5,660.00
Energy as percent of total: 0.88%
If energy is such a small percentage of total costs, why worry about a mere 200% increase in the cost of oil over the last decade? Hell, energy is still cheap, right? Some economists say so.
But there is a problem. Let's start with Materials costs. We might think that we are paying for just physical material, right? Matter. But the reality is quite a bit more complicated.
Mining/Smelting/Forming Operations (proportioned):
Labor $ 200.00
Equipment depreciation $ 1.00
Overhead $ 2.00
Energy (to run equipment) $ 50.00
Total cost of mining $ 253.00
Profit $ 30.00
Price $ 283.00
Energy as percent of cost: 20%
And suppose we add up the average costs of parts manufacturing.
Materials $ 283.00
Labor $ 100.00
Overhead $ 10.00
Energy $ 55.00
Machinery depreciation $ 5.00
Total costs $ 453.00
Profit $ 47.00
Price $ 500.00
Energy as percent of cost: 12%
Total energy costs rolled up $155.00
This is still only about 2.7% of the total costs. Right? Well, what about labor? This one requires a much greater depth of analysis than I can fit in here, but just think about some basics. Consider the food eaten (energy) by the average worker. Consider the transportation costs to get the average worker to work (energy). Consider the cost of keeping the house warm in winter and cool in summer (energy). Now, it goes even deeper. For example, consider the work that was done in making the house (amortized over the life of the house, but nevertheless an energy input). Consider the work done to make the car. Consider the farm work needed to grow the food. And then consider that every one of the workers at this lower level has exactly the same energy needs as the workers in each of the activities above.
In other words, if you look carefully enough at all of the factors that make it possible for a worker (farm, blue-collar, white-collar, or no-collar) to work, you will soon see that the product above sits atop a massive energy pyramid. We could perform the same analysis for the equipment used in manufacturing, mining, and transportation of the goods. We get the same picture. Fundamentally, all of the work that goes into producing that one product is based on energy in one form or another. In the end, one can argue that nearly 100% of the cost of making the widget is energy.
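The roll-up argument can be sketched in a few lines of code. This is only an illustration of the pyramid idea: the recursion depth and the assumption that every layer of suppliers has the same direct-energy fraction are mine, and the top-layer fraction is taken from the post's hypothetical figures ($155 of direct energy in a $5,660 widget).

```python
# Toy roll-up of the energy content hidden in a product's cost.
# At each supplier layer, a fraction of the cost is direct energy;
# the rest is inputs (labor, materials, equipment) assumed to
# decompose the same way one level further down the pyramid.

def embodied_energy(cost, direct_energy_fraction, depth):
    """Roll energy costs up through `depth` layers of suppliers."""
    if depth == 0:
        return 0.0
    direct = cost * direct_energy_fraction
    inputs = cost - direct
    return direct + embodied_energy(inputs, direct_energy_fraction, depth - 1)

total_cost = 5660.00       # widget total from the tables above
direct_fraction = 0.0274   # ~ $155 / $5,660 at the top layer

for depth in (1, 3, 10, 50):
    share = embodied_energy(total_cost, direct_fraction, depth) / total_cost
    print(f"depth {depth:2d}: energy share of cost = {share:.1%}")
```

Even with energy a mere 2.7% of costs at each layer, the share approaches 100% as you follow the pyramid down, which is the point of the paragraph above.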
Now, what is the effect of doubling the basic cost of energy? In the end everything is affected. It takes time for the cost increases to ripple through the economy. They are felt differently by different industries at different times. It isn't a single smooth curve. But it is inexorable. Over time, the costs will percolate upward, driving dollar prices up from the bottom of the pyramid to the top.
What causes the cost of energy to go up? And why does this impact the purchasing power of a monetary unit, say the dollar? The first question is ultimately very simple to answer. You have to use energy to get energy. The same analysis as above, applied to, say, an off-shore oil drilling rig, or to the cost of exploration, or to the cost of mining coal, will produce the same picture. The more machinery and labor it takes to get the raw fuel, the more it takes to refine or process the fuel, and the more equipment and anti-pollution measures it takes to clean up the emissions and the environment (due to the toxic stuff released when the fuel is burned), the more energy it takes to get the stuff in the first place. As the sources of oil are depleted, it takes more effort to get the same volume of fuel, so the energy it takes to get energy goes up as well.
In the end we have less net energy available for consumption as we labor on to make our widgets. The very same argument, by the way, holds for service industry work. And it is net energy that counts.
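The "energy to get energy" idea is often quantified as an energy return on investment, a term that does not appear in the post but serves as a named stand-in for its argument. A minimal sketch, with purely illustrative return ratios, shows how fast net energy shrinks as extraction gets harder:

```python
# Net energy delivered to society for a given gross extraction,
# as a function of the energy return ratio (energy returned per
# unit of energy invested in getting it). Ratios are illustrative.

def net_energy(gross, return_ratio):
    """Energy left over after paying the energy cost of extraction."""
    invested = gross / return_ratio
    return gross - invested

gross = 100.0  # arbitrary units of extracted energy
for ratio in (100, 20, 10, 5, 2, 1):
    net = net_energy(gross, ratio)
    print(f"return ratio {ratio:3d}: net = {net:5.1f} ({net / gross:.0%} of gross)")
```

At a ratio of 1, every unit extracted costs a unit to get, and the net energy available for making widgets is zero, no matter how large the gross figure looks.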
In an era in which finding and extracting easy-to-reach fuels was the norm, the net energy actually increased from year to year. As it did, it could support a growing amount of work. The economy could expand and more people could enjoy more stuff, like widgets. Energy was cheap. For a while, it even grew cheaper in terms of the amount of energy it took to return a given unit of new energy. In monetary terms, and this helps to answer the second question above, we watched as dollars could buy more goods and services over the long run. The period after WWII saw the most incredible expansion of the extraction of easy-to-get oil and natural gas (and coal too). Right up until sometime in the nineties we enjoyed the creation of unheard-of wealth (well, if you call SUVs wealth — I call them pseudo-wealth). And then things started to change. Overall energy production started to decelerate. That is, while still growing, its marginal rate of growth declined. This was an ominous sign. It portended something really different from what we had gotten used to.
In fact, I would argue that the debt crisis we are experiencing is really due to a mismatch between expected growth in wealth production and actual growth due to energy limits. By attempting to pump more oil from very expensive (in energy terms) wells and expecting there will be even more in the future, we have borrowed, literally, against that future just at a time when everything is changing.
Eventually, if not already, the peak of energy production will arrive (see a summary of oil production here). That is, the gain in net energy will go to zero and, sometime thereafter, decline. We will be living in a world in which the value of our monetary units will go down. Inflation will increase at increasing rates until widgets' dollar price will be unreachable. This is inescapable save for some miraculous technology that can create energy out of... I'll save that question for another posting. Meanwhile we have bought a lot of stuff (expended energy in the past) with the expectation that there would be more energy in the future, not less. The energy deficit that we are realizing and the monetary deficit that we now face are linked.
Here is a not-so-simple-to-implement solution to our lack of understanding of economics. Let a dollar equal a fixed number of energy units, say British Thermal Units (BTUs). Instead of a gold standard — remember, you can't eat gold — we adopt an energy standard. More technically correct, we adopt a free energy standard. Free energy is what physicists call the energy available to do useful work. Not all energy qualifies. Think of the heat radiating from your home; it isn't able to do any work, but it is a lot of energy. A free energy standard says that there can be no more monetary units in circulation than there are units of stored, readily available free energy. This standard would already take into account the energy needed to obtain the stored energy. One of the beauties of this proposal is that the measurement of energy is fairly unambiguous. There is a standard unit of measure that is well defined.

I remember that I started to think about this after reading something in Paul Samuelson's classic Introductory Economics (back in the 70's!) in which he noted that money is a lousy measuring tool, much like trying to measure a physical distance using a rubber yardstick. I guess this is one reason some folks prefer the gold standard. What Samuelson meant was that while money might be lousy in a physical sense, it was good enough for government work, literally. But what Samuelson and most other economists didn't know or understand is that there was a force pulling at both ends of that rubber yardstick, building a measurement error into every act of measurement. Now we are going to see what happens when that force is removed — the yardstick will return to its earlier length.
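The mechanics of the proposed standard are simple enough to state in code. This is a sketch of the idea only: the peg of 10,000 BTU per dollar and the reserve figures are invented for illustration, not part of the proposal.

```python
# Sketch of the proposed "free energy standard": the number of
# monetary units in circulation is capped by the stock of stored,
# readily available free energy. All numbers here are hypothetical.

BTU_PER_DOLLAR = 10_000  # the fixed peg (an assumed value)

def max_money_supply(stored_free_energy_btu):
    """Most dollars allowed in circulation under the peg."""
    return stored_free_energy_btu / BTU_PER_DOLLAR

# If reserves of stored free energy shrink, the money supply must
# shrink with them -- no monetary expansion without energy backing.
reserves_btu = [5e12, 4e12, 3e12]  # declining year over year
for year, btu in enumerate(reserves_btu, start=1):
    print(f"year {year}: money supply cap = ${max_money_supply(btu):,.0f}")
```

Under such a peg, printing dollars without a matching increase in stored free energy is simply not an option, which is the discipline the yardstick metaphor is after.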
A free energy standard would have unbelievable consequences for the way an economy works and how we understand that working. It would literally turn it into a system akin to the natural ecosystem, where energy is the obvious currency. Spending would take on a whole new meaning. Borrowing would too. Most of all, the price of everything we buy and sell would reflect the true value of things. Moreover, we could know the future value of owning things by virtue of knowing how much energy they could consume in the future. It would be an easier decision to make regarding the purchase of that widget if we knew that its operation consumed so many BTUs per time unit. We see a basic start on this trend in automobile mileage figures and refrigerator efficiency ratings. In that same vein we would have a good idea of how much it would cost to replace the item. In an age of diminishing energy we would be able to put a truer time value (discount rate) on things, knowing that they will cost more in monetary terms.
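The "truer time value" point can be made concrete with a small calculation. Every number here is an assumption for illustration: a widget drawing about 3.4 million BTU per year (roughly 1,000 kWh), an energy price of $0.00001 per BTU, and energy prices rising 7% per year in an age of diminishing energy.

```python
# Lifetime cost of running a widget whose energy appetite is known
# in BTUs, under flat versus rising dollar prices for energy.
# All input values are illustrative assumptions.

def lifetime_operating_cost(btu_per_year, years, price_per_btu, energy_inflation):
    """Total dollars to run the widget, with the dollar price of
    energy rising by `energy_inflation` per year."""
    total = 0.0
    price = price_per_btu
    for _ in range(years):
        total += btu_per_year * price
        price *= 1 + energy_inflation
    return total

flat = lifetime_operating_cost(3.4e6, 10, 1e-5, 0.00)
rising = lifetime_operating_cost(3.4e6, 10, 1e-5, 0.07)
print(f"flat energy prices over 10 years:  ${flat:,.0f}")
print(f"7%/yr energy prices over 10 years: ${rising:,.0f}")
```

Knowing the BTU appetite up front, a buyer can see how steeply the operating cost diverges once energy stops being cheap, which is exactly the discounting decision the paragraph describes.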
Right now everything is distorted in the economy. Economists can't really tell you what to expect because we are now operating in a different energy regime than when they developed their so-called economic laws. Things just don't work as they're supposed to under those laws. There is no small amount of head scratching going on in Washington and academia right now.
It is possible that the current crisis is just a temporary phenomenon. This kind of situation has happened before and has had similar effects in terms of the economy not working as advertised (remember stagflation during Jimmy Carter's administration). Historically we've survived and things seemed to return to normal. Maybe that will happen again. But the only way it can happen is if someone, some genius, somewhere invents the most stupendous energy production source ever imagined. Because that is what it is going to take to get us out of hot (forgive the pun) water now and return us to what we have thought of as normalcy in the future. Don't hold your breath, but if you are a believer, maybe pray.
Reports produced in 2010 state that Malaysia is going to propose feed-in tariffs for solar, biomass, biogas and hydro power. These will be debated in Parliament during the second quarter of 2011. There is growing interest in renewable technology in Malaysia, and the target is an 11% renewable energy share by 2020. A tariff structure covering each of these technologies has been proposed.
Saturday, February 19, 2011
The United States needs to “take control of its energy future” and prevent future resource crunches of ‘energy critical elements’. So says a new report from the American Physical Society and the Materials Research Society, which echoes many of the same concerns and solutions as the US Department of Energy’s December 2010 report on the same topic (see Nature News blog post). Action seems to be happening off the back of these reports, the report’s authors told the American Association for the Advancement of Science (AAAS) meeting in DC today: US senator Mark Udall (Colorado) has introduced a bill on the same subject – the Critical Minerals and Materials Promotion Act of 2011.
The reports and the bill focus on elements that are used in everything from efficient lighting to electric cars and wind turbines. Many of these elements are rarer than gold, and some are almost exclusively mined in China. In the face of skyrocketing demand, researchers, businessmen and politicians are seeking to find cheaper, more stable supplies, or invent alternative materials that use less of the critical elements. If they fail, future shortages of critical elements could hamstring the production of game-changing clean-energy technologies.
The new report differs from the DOE’s effort in that it takes a broader view of critical elements. “We are concerned to a relatively high degree about a good chunk of the periodic table,” said report co-author Tom Graedel of Yale University. “Maybe about a third.” By contrast the DOE report focused on six elements they identified as particularly critical.
The authors call for more information to be gathered (the US Geological Survey doesn’t even have statistics about the mining of individual critical elements, they note), and for a federal push on research into substitutes and recycling. They conclude that building up stockpiles of critical elements is probably not a good idea, since it wouldn’t spur innovative research.
Friday, February 18, 2011
February 17th, 2011
More than 50 years after the invention of the laser, scientists at Yale University have built the world's first anti-laser, in which incoming beams of light interfere with one another in such a way as to perfectly cancel each other out. The discovery could pave the way for a number of novel technologies with applications in everything from optical computing to radiology.
In the anti-laser, incoming light waves are trapped in a cavity where they bounce back and forth until they are eventually absorbed. Their energy is dissipated as heat.
Credit: Yidong Chong/Yale University
Conventional lasers, which were first invented in 1960, use a so-called "gain medium," usually a semiconductor like gallium arsenide, to produce a focused beam of coherent light: light waves with the same frequency and amplitude that are in step with one another.
Last summer, Yale physicist A. Douglas Stone and his team published a study explaining the theory behind an anti-laser, demonstrating that such a device could be built using silicon, the most common semiconductor material. But it wasn't until now, after joining forces with the experimental group of his colleague Hui Cao, that the team actually built a functioning anti-laser, which they call a coherent perfect absorber (CPA).
The team, whose results appear in the Feb. 18 issue of the journal Science, focused two laser beams with a specific frequency into a cavity containing a silicon wafer that acted as a "loss medium." The wafer aligned the light waves in such a way that they became perfectly trapped, bouncing back and forth indefinitely until they were eventually absorbed and transformed into heat.
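A toy two-port picture gives a feel for why two beams can be perfectly absorbed. Treat the slab as having an amplitude reflection r and transmission t; each outgoing beam is the transmitted part of one input plus the reflected part of the other. This is a schematic illustration of coherent interference, not the Yale team's actual device model, and the values of r and t below are assumptions chosen so that r = -t.

```python
# Toy two-port model of coherent perfect absorption. A symmetric,
# lossy slab has amplitude reflection r and transmission t. Each
# output port is the sum of the transmitted part of one input beam
# and the reflected part of the other. r and t are illustrative.

def output_intensity(r, t, a_left, a_right):
    """Fraction of the input power that escapes the slab (both sides)."""
    out_left = r * a_left + t * a_right
    out_right = t * a_left + r * a_right
    power_in = abs(a_left) ** 2 + abs(a_right) ** 2
    power_out = abs(out_left) ** 2 + abs(out_right) ** 2
    return power_out / power_in

# A lossy slab tuned so that r = -t: equal in-phase inputs interfere
# destructively at both outputs and all the light is absorbed.
r, t = 0.5, -0.5
print("in-phase beams, absorbed fraction:",
      1 - output_intensity(r, t, 1.0, 1.0))
# Flip the relative phase of the inputs and the light escapes instead.
print("out-of-phase beams, absorbed fraction:",
      1 - output_intensity(r, t, 1.0, -1.0))
```

The same slab absorbs everything or almost nothing depending only on the relative phase of the two beams, which is what makes the device a phase-controlled absorber rather than a simple black coating.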
Stone believes that CPAs could one day be used as optical switches, detectors and other components in the next generation of computers, called optical computers, which will be powered by light in addition to electrons. Another application might be in radiology, where Stone said the principle of the CPA could be employed to target electromagnetic radiation to a small region within normally opaque human tissue, either for therapeutic or imaging purposes.
Theoretically, the CPA should be able to absorb 99.999 percent of the incoming light. Due to experimental limitations, the team's current CPA absorbs 99.4 percent. "But the CPA we built is just a proof of concept," Stone said. "I'm confident we will start to approach the theoretical limit as we build more sophisticated CPAs." Similarly, the team's first CPA is about one centimeter across at the moment, but Stone said that computer simulations have shown how to build one as small as six microns (about one-twentieth the width of an average human hair).
The team that built the CPA, led by Cao and another Yale physicist, Wenjie Wan, demonstrated the effect for near-infrared radiation, which is slightly "redder" than the eye can see and which is the frequency of light that the device naturally absorbs when ordinary silicon is used. But the team expects that, with some tinkering of the cavity and loss medium in future versions, the CPA will be able to absorb visible light as well as the specific infrared frequencies used in fiber optic communications.
It was while explaining the complex physics behind lasers to a visiting professor that Stone first came up with the idea of an anti-laser. He had suggested that his colleague think about a laser working in reverse in order to understand how a conventional laser works, and he then began contemplating whether it was possible to actually build a laser that would work backwards, absorbing light at specific frequencies rather than emitting it.
"It went from being a useful thought experiment to having me wondering whether you could really do that," Stone said. "After some research, we found that several physicists had hinted at the concept in books and scientific papers, but no one had ever developed the idea."
Source: Yale University