Tuesday, February 22, 2011

conducting plastic

New plastics can conduct electricity

February 22, 2011

Plastics usually conduct electricity so poorly that they are used to insulate electric cables. But by depositing a thin film of metal onto a plastic sheet and mixing it into the polymer surface with an ion beam, Australian researchers have shown that cheap, strong, flexible and conductive plastic films can be made.

The research has been published in the journal ChemPhysChem by a team led by Professor Paul Meredith and Associate Professor Ben Powell, both at the University of Queensland, and Associate Professor Adam Micolich of the UNSW School of Physics. This latest discovery reports experiments by former UQ Ph.D. student, Dr Andrew Stephenson.

Ion beam techniques are widely used in the microelectronics industry to tailor the conductivity of semiconductors such as silicon, but attempts to adapt this process to plastic films have been made since the 1980s with only limited success – until now.

"What the team has been able to do here is use an ion beam to tune the properties of a plastic film so that it conducts electricity like the metals used in the electrical wires themselves, and even to act as a superconductor and pass electric current without resistance if cooled to low enough temperature," says Professor Meredith.

To demonstrate a potential application of this new material, the team produced electrical resistance thermometers that meet industrial standards. Tested against an industry-standard platinum resistance thermometer, the polymer devices showed comparable or even superior accuracy.
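
The article gives no parameters for the polymer thermometer itself, but the platinum benchmark it was tested against can be sketched with the usual linearized resistance-thermometer model; the values below are the standard Pt100 conventions, not figures from the study:

```python
# Linearized resistance-thermometer model: R(T) = R0 * (1 + alpha * T).
# R0 and alpha are the standard Pt100 values, not data from the article.
R0 = 100.0        # ohms at 0 degrees C (Pt100 convention)
ALPHA = 3.85e-3   # per degree C, standard platinum temperature coefficient

def resistance(t_celsius: float) -> float:
    """Resistance of the sensor at a given temperature."""
    return R0 * (1 + ALPHA * t_celsius)

print(resistance(25.0))  # 109.625 ohms at room temperature
```

Reading a temperature back from a measured resistance is just the inverse of the same linear relation, which is why such sensors make convenient industrial standards.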

"This material is so interesting because we can take all the desirable aspects of polymers - such as mechanical flexibility, robustness and low cost - and into the mix add good electrical conductivity, something not normally associated with plastics," says Professor Micolich. "It opens new avenues to making plastic electronics."

Andrew Stephenson says the most exciting part about the discovery is how precisely the film’s ability to conduct or resist the flow of electrical current can be tuned. It opens up a very broad potential for useful applications.

"In fact, we can vary the electrical resistivity over 10 orders of magnitude – put simply, that means we have ten billion options to adjust the recipe when we're making the plastic film. In theory, we can make plastics that conduct no electricity at all or as well as metals do – and everything in between,” Dr Stephenson says.

These new materials can be easily produced with equipment commonly used in the microelectronics industry and are vastly more tolerant of exposure to oxygen compared to standard semiconducting polymers.

Combined, these advantages may give ion beam processed polymer films a bright future in the on-going development of soft materials for plastic electronics applications – a fusion between current and next generation technology, the researchers say.

Provided by University of New South Wales

Sunday, February 20, 2011

Energy and Economy

What is a monetary unit, in reality and how does it relate to energy?

Here is the kind of analysis (greatly simplified) that might help us understand this better and lead us to an answer. Consider the production of an electronic gadget — a widget, in economists' parlance. Let's just see where the cost elements come from. This hypothetical is highly simplified, but it isn't too far off the mark. It's the concept that counts.

Cost of production (consumer electronic widget):

Materials                   $  500.00
Labor                       $5,000.00
Overhead (allocated)        $  100.00
Energy                      $   50.00
Transportation              $   10.00
Total costs                 $5,660.00

Energy as percent of total: 0.88%

If energy is such a small percentage of total costs, why worry about a mere 200% increase in the cost of oil over the last decade? Hell, energy is still cheap, right? Some economists say so.

But there is a problem. Let's start with Materials costs. We might think that we are paying for just physical material, right? Matter. But the reality is quite a bit more complicated.

Mining/Smelting/Forming Operations (proportioned):

Labor                       $  200.00
Equipment depreciation     $    1.00
Overhead                    $    2.00
Energy (to run equipment)   $   50.00
Total cost of mining        $  253.00
Profit                      $   30.00
Price                       $  283.00

Energy as percent of cost: 20%

And suppose we add up the average costs of parts manufacturing.

Materials                   $  283.00
Labor                       $  100.00
Overhead                    $   10.00
Energy                      $   55.00
Machinery depreciation      $    5.00
Total costs                 $  453.00
Profit                      $   47.00
Price                       $  500.00

Energy as percent of cost: 12%

Total energy costs rolled up $155.00
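
The tiered arithmetic above can be rolled up in a few lines of code; all figures are the hypothetical ones from the tables:

```python
# Hypothetical cost figures from the tables above (dollars).
widget = {"materials": 500.0, "labor": 5000.0, "overhead": 100.0,
          "energy": 50.0, "transport": 10.0}
mining = {"labor": 200.0, "depreciation": 1.0, "overhead": 2.0, "energy": 50.0}
parts = {"materials": 283.0, "labor": 100.0, "overhead": 10.0,
         "energy": 55.0, "depreciation": 5.0}

def energy_share(costs: dict) -> float:
    """Energy cost as a fraction of total cost at one tier."""
    return costs["energy"] / sum(costs.values())

total_widget_cost = sum(widget.values())  # 5660.0
rolled_up_energy = widget["energy"] + mining["energy"] + parts["energy"]  # 155.0

print(f"widget tier: {energy_share(widget):.2%}")   # 0.88%
print(f"mining tier: {energy_share(mining):.2%}")   # 19.76%
print(f"parts tier:  {energy_share(parts):.2%}")    # 12.14%
print(f"rolled up:   {rolled_up_energy / total_widget_cost:.2%}")  # 2.74%
```

Each level of the supply chain hides its own energy line item, which is the point the rolled-up figure makes.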

This is still only about 2.7% of the total costs ($155 of $5,660). Right? Well, what about labor? This one requires a much greater depth of analysis than I can fit in here, but just think about some basics. Consider the food eaten (energy) by the average worker. Consider the transportation costs to get the average worker to work (energy). Consider the cost of keeping the house warm in winter, cool in summer (energy). Now, it goes even deeper. For example, consider the work that was done in making the house (amortized over the life of the house, but nevertheless an energy input). Consider the work done to make the car. Consider the farm work needed to grow the food. And then consider that every one of the workers at this lower level has exactly the same energy needs as the workers in each of the activities above.

In other words, if you look carefully enough at all of the factors that go into making it possible for a worker (farm, blue collar, white collar, or no-collar) to work you will soon see that the above product is sitting atop a massive energy pyramid. We could perform the same analysis for equipment used in manufacturing, mining, and transportation of the goods. We get the same picture. Fundamentally, all of the work that goes into producing that one product is based on energy in one form or another. In the end, one can argue that nearly 100% of the cost of making the widget is energy.

Now, what is the effect of doubling the basic cost of energy? In the end everything is affected. It takes time for the cost increases to ripple through the economy. They are felt differently by different industries at different times. It isn't a single smooth curve. But it is inexorable. Over time, the costs will percolate upward driving everything from bottom to top up in dollar measures.

What causes the cost of energy to go up? And why does this impact the purchasing power of a monetary unit, say the dollar? The first one is ultimately very simple to answer. You have to use energy to get energy. The same analysis as above applied to, say, an off-shore oil drilling rig, or to the cost of exploration, or the cost of mining coal, will produce the same picture. The more machinery and labor that it takes to get the raw fuel, the more it takes to refine or process the fuel, the more equipment and anti-pollution measures you take to clean up the emissions and the environment (due to the release of toxic stuff in the fuel when burned), the more energy it takes to get the stuff in the first place. As the sources of oil are depleted and it takes more effort to get the same volume of fuels the energy it takes to get energy goes up as well.

In the end we have less net energy available for consumption as we labor on to make our widgets. The very same argument, by the way, holds for service industry work. And it is net energy that counts.
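
The "energy to get energy" idea is often expressed as the energy return on energy invested (EROEI); a toy calculation with made-up figures shows how net energy shrinks as extraction gets harder:

```python
# Net energy and EROEI (energy returned on energy invested).
# All figures are illustrative, in arbitrary energy units.
def eroei(gross_output: float, energy_invested: float) -> float:
    """Ratio of energy obtained to energy spent obtaining it."""
    return gross_output / energy_invested

def net_energy(gross_output: float, energy_invested: float) -> float:
    """Energy left over for the rest of the economy."""
    return gross_output - energy_invested

# Easy-to-reach oil versus an expensive deep-water well:
print(eroei(100, 2), net_energy(100, 2))    # 50.0 98, the cheap-energy era
print(eroei(100, 20), net_energy(100, 20))  # 5.0 80, harder extraction
```

Note that a tenfold increase in extraction effort cuts net energy by far less than tenfold at first, which is why the squeeze is felt gradually rather than all at once.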

In an era in which the finding and extracting of easy to reach fuels was the norm, the net energy actually increased from year to year. As it did it could support a growing amount of work. The economy could expand and more people could enjoy more stuff, like widgets. Energy was cheap. For a while, it even grew cheaper in terms of the amount of energy it took to return a given unit of new energy. In monetary terms, and this helps to answer the second question above, we watched as dollars could buy more goods and services over the long-run. The period after WWII saw the most incredible expansion of the extraction of easy to get oil and natural gas (and coal too). Right up until sometime in the nineties we enjoyed the creation of unheard of wealth (well if you call SUV's wealth — I call them pseudo-wealth). And then things started to change. Overall energy production started to decelerate. That is, while still growing, its marginal rate of growth declined. This was an ominous sign. It portended something really different from what we had gotten used to.

In fact, I would argue that the debt crisis we are experiencing is really due to a mismatch between expected growth in wealth production and actual growth constrained by energy limits. By attempting to pump more oil from very expensive (in energy terms) wells, and expecting there will be even more in the future, we have borrowed, literally, against that future just at a time when everything is changing.

Eventually, if not already, the peak of energy production will arrive (see a summary of oil production here). That is, the gain in net energy will go to zero and, sometime thereafter, decline. We will be living in a world in which the value of our monetary units goes down. Inflation will increase at increasing rates until the dollar price of widgets is out of reach. This is inescapable save for some miraculous technology that can create energy out of... I'll save that question for another posting. Meanwhile we have bought a lot of stuff (expended energy in the past) with the expectation that there would be more energy in the future, not less. The energy deficit that we are realizing and the monetary deficit that we now face are linked.

Here is a not-so-simple-to-implement solution to our lack of understanding of economics. Let a dollar equal a fixed number of energy units, say British Thermal Units (BTUs). Instead of a gold standard (remember, you can't eat gold) we adopt an energy standard. More technically correct, we adopt a free energy standard. Free energy is what physicists call the energy available to do useful work. Not all energy qualifies. Think of the heat radiating from your home; it is a lot of energy, but it can't do any work. A free energy standard says that there can be no more monetary units in circulation than there are stored and readily available units of free energy. This standard would already take into account the energy needed to obtain the stored energy.

One of the beauties of this proposal is that the measurement of energy is fairly unambiguous: there is a standard unit of measure that is well defined. I remember that I started to think about this after reading something in Paul Samuelson's classic introductory economics text (back in the 70's!) in which he noted that money is a lousy measuring tool, much like trying to measure a physical distance using a rubber yardstick. I guess this is one reason some folks prefer the gold standard. What Samuelson meant was that while money might be lousy in a physical sense, it was good enough for government work, literally. But what Samuelson and most other economists didn't know or understand is that there was a force pulling at both ends of that rubber yardstick, building a measurement error into every act of measurement. Now we are going to see what happens when that force is removed: the yardstick will return to its earlier length.
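
A toy sketch of the proposed standard follows; the peg of 10,000 BTU to the dollar is invented purely for illustration, since the post names no figure:

```python
# Free-energy standard, toy version. The peg below is a hypothetical
# rate; the post does not propose a specific BTU-per-dollar figure.
BTU_PER_DOLLAR = 10_000

def max_money_supply(stored_free_energy_btu: float) -> float:
    """Most dollars allowed in circulation for a given free-energy stock."""
    return stored_free_energy_btu / BTU_PER_DOLLAR

# With 5 quadrillion BTU of stored, readily available free energy:
print(max_money_supply(5e15))  # 500 billion dollars
```

Under such a rule the money supply could only grow when the stock of usable energy grows, and would have to contract as it is depleted.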

A free energy standard would have profound consequences for the way an economy works and how we understand that working. It would literally turn the economy into a system akin to a natural ecosystem, where energy is the obvious currency. Spending would take on a whole new meaning. Borrowing would too. Most of all, the price of everything we buy and sell would reflect the true value of things. Moreover, we could know the future cost of owning things by virtue of knowing how much energy they would consume. The decision to purchase that widget would be easier if we knew that its operation consumed so many BTUs per unit of time. We see a basic start on this trend in automobile mileage figures and refrigerator efficiency ratings. In the same vein, we would have a good idea of how much it would cost to replace the item. In an age of diminishing energy we would be able to put a truer time value (discount rate) on things, knowing that they will cost more in monetary terms.
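
The appliance comparison hinted at above can be made concrete; both the usage figures and the BTU-to-dollar peg here are hypothetical:

```python
# Comparing two appliances by lifetime energy use, in the spirit of the
# efficiency ratings mentioned above. All numbers are hypothetical.
def lifetime_energy_btu(btu_per_year: float, years: float) -> float:
    """Total energy an appliance consumes over its service life."""
    return btu_per_year * years

fridge_a = lifetime_energy_btu(1.5e6, 15)  # 22.5 million BTU
fridge_b = lifetime_energy_btu(1.2e6, 15)  # 18.0 million BTU

# Under an energy standard the gap translates directly into dollars,
# here at an invented peg of 10,000 BTU per dollar:
print((fridge_a - fridge_b) / 10_000)  # 450.0 dollars saved over 15 years
```

With prices denominated in energy, the cheaper-to-run appliance is cheaper in the only sense that ultimately matters.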

Right now everything is distorted in the economy. Economists can't really tell you what to expect because we are now operating in a different energy regime than when they developed their so-called economic laws. Things just don't work as they're supposed to under those laws. There is no small amount of head scratching going on in Washington and academia right now.

It is possible that the current crisis is just a temporary phenomenon. This kind of situation has happened before and has had similar effects in terms of the economy not working as advertised (remember stagflation during Jimmy Carter's administration). Historically we've survived and things seemed to return to normal. Maybe that will happen again. But the only way it can happen is if someone, some genius, somewhere invents the most stupendous energy production source ever imagined. Because that is what it is going to take to get us out of hot (forgive the pun) water now and return us to what we have thought of as normalcy in the future. Don't hold your breath, but if you are a believer, maybe pray.

Source: Biophysical Economics, Political Economy


Feed-in Tariffs to be proposed in Malaysia
Reports produced in 2010 state that Malaysia will propose feed-in tariffs for solar, biomass, biogas and hydro power, to be debated in Parliament during the second quarter of 2011. Interest in renewable technology is growing in Malaysia, and the target is to reach 11 percent renewable energy by 2020. The proposed tariff structure is shown in the table below:

Saturday, February 19, 2011


An elementary energy problem - February 18, 2011

The United States needs to “take control of its energy future” and prevent future resource crunches of ‘energy critical elements’. So says a new report from the American Physical Society and the Materials Research Society, which echoes many of the same concerns and solutions as the US Department of Energy’s December 2010 report on the same topic (see Nature News blog post). Action seems to be happening off the back of these reports, the reports’ authors told the American Association for the Advancement of Science (AAAS) meeting in DC today: US Senator Mark Udall (Colorado) has introduced a bill on the same subject – the Critical Minerals and Materials Promotion Act of 2011.
The reports and the bill focus on elements that are used in everything from efficient lighting to electric cars and wind turbines. Many of these elements are rarer than gold, and some are almost exclusively mined in China. In the face of skyrocketing demand, researchers, businessmen and politicians are seeking to find cheaper, more stable supplies, or invent alternative materials that use less of the critical elements. If they fail, future shortages of critical elements could hamstring the production of game-changing clean-energy technologies.
The new report differs from the DOE’s effort in that it takes a broader view of critical elements. “We are concerned to a relatively high degree about a good chunk of the periodic table,” said report co-author Tom Graedel of Yale University. “Maybe about a third.” By contrast the DOE report focused on six elements they identified as particularly critical.
The authors call for more information to be gathered (the US Geological Survey doesn’t even have statistics about the mining of individual critical elements, they note), and for a federal push on research into substitutes and recycling. They conclude that building up stockpiles of critical elements is probably not a good idea, since it wouldn’t spur innovative research.
Source: nature.com

Friday, February 18, 2011


Scientists build world's first anti-laser

February 17th, 2011

More than 50 years after the invention of the laser, scientists at Yale University have built the world's first anti-laser, in which incoming beams of light interfere with one another in such a way as to perfectly cancel each other out. The discovery could pave the way for a number of novel technologies with applications in everything from optical computing to radiology.

In the anti-laser, incoming light waves are trapped in a cavity where they bounce back and forth until they are eventually absorbed. Their energy is dissipated as heat.
Credit: Yidong Chong/Yale University
Conventional lasers, first invented in 1960, use a so-called "gain medium," usually a semiconductor like gallium arsenide, to produce a focused beam of coherent light: light waves with the same frequency and amplitude that are in step with one another.

Last summer, Yale physicist A. Douglas Stone and his team published a study explaining the theory behind an anti-laser, demonstrating that such a device could be built using silicon, the most common semiconductor material. But it wasn't until now, after joining forces with the experimental group of his colleague Hui Cao, that the team actually built a functioning anti-laser, which they call a coherent perfect absorber (CPA).

The team, whose results appear in the Feb. 18 issue of the journal Science, focused two laser beams with a specific frequency into a cavity containing a silicon wafer that acted as a "loss medium." The wafer aligned the light waves in such a way that they became perfectly trapped, bouncing back and forth indefinitely until they were eventually absorbed and transformed into heat.
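
The cancellation at the heart of a coherent perfect absorber can be caricatured in a few lines: two waves of equal amplitude and opposite phase sum to zero. This is only a schematic of the interference idea, not a simulation of the Yale device:

```python
import math

# Two monochromatic waves of equal amplitude; the second is shifted by pi.
def field(x: float, t: float, phase: float = 0.0) -> float:
    return math.cos(x - t + phase)

# Where the beams meet exactly out of phase, the total field vanishes:
total = field(0.0, 0.0) + field(0.0, 0.0, phase=math.pi)
print(abs(total))  # 0.0, complete destructive interference
```

In the real device the cavity and the silicon loss medium arrange this cancellation for the outgoing waves, so the trapped energy has nowhere to go but into heat.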

Stone believes that CPAs could one day be used as optical switches, detectors and other components in the next generation of computers, called optical computers, which will be powered by light in addition to electrons. Another application might be in radiology, where Stone said the principle of the CPA could be employed to target electromagnetic radiation to a small region within normally opaque human tissue, either for therapeutic or imaging purposes.

Theoretically, the CPA should be able to absorb 99.999 percent of the incoming light. Due to experimental limitations, the team's current CPA absorbs 99.4 percent. "But the CPA we built is just a proof of concept," Stone said. "I'm confident we will start to approach the theoretical limit as we build more sophisticated CPAs." Similarly, the team's first CPA is about one centimeter across at the moment, but Stone said that computer simulations have shown how to build one as small as six microns (about one-twentieth the width of an average human hair).

The team that built the CPA, led by Cao and another Yale physicist, Wenjie Wan, demonstrated the effect for near-infrared radiation, which is slightly "redder" than the eye can see and which is the frequency of light that the device naturally absorbs when ordinary silicon is used. But the team expects that, with some tinkering of the cavity and loss medium in future versions, the CPA will be able to absorb visible light as well as the specific infrared frequencies used in fiber optic communications.

It was while explaining the complex physics behind lasers to a visiting professor that Stone first came up with the idea of an anti-laser. When Stone suggested his colleague think about a laser working in reverse in order to help him understand how a conventional laser works, Stone began contemplating whether it was possible to actually build a laser that would work backwards, absorbing light at specific frequencies rather than emitting it.

"It went from being a useful thought experiment to having me wondering whether you could really do that," Stone said. "After some research, we found that several physicists had hinted at the concept in books and scientific papers, but no one had ever developed the idea."

Source: Yale University
Credit: Eurekalert

environmentally friendly battery

Technology breakthrough fuels laptops and phones, recharges scientist's 60-year career
February 17, 2011
How does a scientist fuel his enthusiasm for chemistry after 60 years? By discovering a new energy source, of course.

This week, SiGNa Chemistry Inc. unveiled its new hydrogen cartridges, which provide energy to fuel cells designed to recharge cell phones, laptops and GPS units. The green power source is geared toward outdoor enthusiasts as well as residents of the Third World, where electricity in homes is considered a luxury.
"SiGNa has created an inherently-safe solution to produce electric power, resulting in an eco-friendly and cost-effective portable solution," said Michael Lefenfeld, SiGNa's CEO.
The spark for this groundbreaking technology came from James Dye's Michigan State University laboratory. Dye, University Distinguished Professor of Chemistry Emeritus, did work with alkali metals that led to a green process to harness the power of sodium silicide, the source for SiGNa's new product.
"In our lab, we were able to produce alkali metal silicides, which basically are made from sodium and silicon, which in turn, are produced from salt and sand," Dye said. "By adding water to sodium silicide, we're able to produce hydrogen, which creates energy for fuel cells. The byproduct, sodium silicate, is also green. It's the same stuff found in toothpaste."
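
The hydrogen yield implied by Dye's description can be sketched from stoichiometry, assuming the commonly cited equation 2 NaSi + 5 H2O -> Na2Si2O5 + 5 H2; the article itself does not give the reaction:

```python
# Hydrogen yield by weight for sodium silicide, assuming the reaction
# 2 NaSi + 5 H2O -> Na2Si2O5 + 5 H2 (an assumption; the article does
# not state the equation). Molar masses in g/mol.
M_NA, M_SI, M_H = 22.99, 28.09, 1.008

mass_nasi = 2 * (M_NA + M_SI)  # 102.16 g of NaSi consumed
mass_h2 = 5 * (2 * M_H)        # 10.08 g of H2 released

print(f"{mass_h2 / mass_nasi:.1%}")  # about 9.9% hydrogen by weight of NaSi
```

Yields near ten percent by weight are what make solid hydrogen carriers attractive against compressed-gas storage.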
SiGNa was able to build on Dye's research and develop a power platform that produces low-pressure hydrogen gas on demand, converts it to electricity via a low-cost fuel cell and emits simple water vapor.
Dye, the co-founder of SiGNa and director of its scientific council, said that making the jump to research the company's products was a small one.
"I've been working with alkali metals for 50 years," he said. "My research was closely related to what SiGNa was looking for. So when they came to me with their idea, it was a relatively easy adaptation to make."
Dye came to MSU in 1953 — two years before MSU was a university. Based on the products that can be linked to Dye's research just in the last year, it's clear that he is reaping the rewards of his six decades of scientific sowing.
Using a similar process, Dye was able to assist in the creation of a fuel source to power electric bicycles. The fuel cells, developed by SiGNa's partners, range in output from 1 watt to 3 kilowatts and are capable of pushing a bicycle at up to 25 mph for approximately 100 miles.
While the mainstream attention of his work is rewarding, it's the untamed excitement of daily discovery and being able to share it with his students that fuel Dye's desire to maintain a full-time research schedule.
"Instilling that excitement about chemistry in my undergraduate students and giving them a jump on their graduate research is my reward," Dye said. "Everyone who has come through the lab and gone on to graduate school has had glowing reviews on how this experience helped their career."

Provided by Michigan State University

Wednesday, February 16, 2011


New material provides 25 percent greater thermoelectric conversion efficiency

February 15, 2011

Automobiles, military vehicles, even large-scale power generating facilities may someday operate far more efficiently thanks to a new alloy developed at the U.S. Department of Energy's Ames Laboratory. A team of researchers at the Lab, jointly funded by the DOE Office of Basic Energy Sciences' Division of Materials Sciences and Engineering and the Defense Advanced Research Projects Agency, achieved a 25 percent improvement in the ability of a key material to convert heat into electrical energy.

"What happened here has not happened anywhere else," said Evgenii Levin, associate scientist at Ames Laboratory and co-principal investigator on the effort, speaking of the significant boost in efficiency documented by the research. Along with Levin, the Ames Lab-based team included: Bruce Cook, scientist and co-principal investigator; Joel Harringa, assistant scientist II; Sergey Bud'ko, scientist; and Klaus Schmidt-Rohr, faculty scientist. Also taking part in the research was Rama Venkatasubramanian, who is director of the Center for Solid State Energetics at RTI International, located in North Carolina.
So-called thermoelectric materials that convert heat into electricity have been known since the early 1800s. One well-established group of thermoelectric materials is composed of tellurium, antimony, germanium and silver, and thus is known by the acronym "TAGS." Thermoelectricity is based on the movement of charge carriers from the heated side of a material to its cooler side, just as electrons travel along a wire.
The process, known as the Seebeck effect, was discovered in 1821 by Thomas Johann Seebeck, a physicist who lived in what is now Estonia. A related phenomenon observed in all thermoelectric materials is known as the Peltier effect, named after French physicist Jean-Charles Peltier, who discovered it in 1834. The Peltier effect can be utilized for solid-state heating or cooling with no moving parts.
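
The Seebeck effect described above can be illustrated with the standard first-order relation V = S * dT; the coefficient below is a typical order of magnitude for good thermoelectric materials, not a value reported for the doped TAGS-85 samples in this work:

```python
# First-order Seebeck relation: open-circuit voltage V = S * dT.
# S here is a typical magnitude for good thermoelectrics, not a
# figure from the Ames Laboratory study.
S = 200e-6   # volts per kelvin (200 microvolts/K)
dT = 150.0   # kelvin, hot-side minus cold-side temperature

V = S * dT
print(f"{V * 1000:.0f} mV")  # 30 mV open-circuit voltage
```

Millivolt-scale outputs per junction are why practical generators stack many thermoelectric elements in series.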
In the nearly two centuries since the discovery of the Seebeck and Peltier effects, practical applications have been limited due to the low efficiency with which the materials performed either conversion. Significant work to improve that efficiency took place during the 1950s, when thermoelectric conversion was viewed as an ideal power source for deep-space probes, explained team member Cook. "Thermoelectric conversion was successfully used to power the Voyager, Pioneer, Galileo, Cassini, and Viking spacecraft," he said.
Despite its use by NASA, the low efficiency of thermoelectric conversion still kept it from being harnessed for more down-to-earth applications – even as research around the world continued in earnest. "Occasionally, you would hear about a large increase in efficiency," Levin explained. But the claims did not hold up to closer scrutiny.

All that changed in 2010, when the Ames Laboratory researchers found that adding just one percent of the rare-earth elements cerium or ytterbium to a TAGS material was sufficient to boost its performance.
The results of the group's work appeared in the article, "Analysis of Ce- and Yb-Doped TAGS-85 Materials with Enhanced Thermoelectric Figure of Merit," published online in November 2010 in the journal Advanced Functional Materials.
The team has yet to understand exactly why such a small compositional change in the material is able to profoundly affect its properties. However, they theorize that doping the TAGS material with either of the two rare-earth elements could affect several possible mechanisms that influence thermoelectric properties.
Team member Schmidt-Rohr studied the materials using Ames Laboratory's solid-state nuclear magnetic resonance spectroscopy instruments. This enabled the researchers to verify that the one percent doping of cerium or ytterbium affected the structure of the thermoelectric material. To understand the effect of the rare earths' magnetism, team member Bud'ko studied the magnetic properties of the materials. "Rare-earth elements modified the lattice," said Levin, referring to the crystal structure of the thermoelectric materials.
The group plans to test the material in order to better understand why the pronounced change took place and, hopefully, to boost its performance further.
The durable and relatively easy-to-produce material has innumerable applications, including recycling waste heat from industrial refineries or using auto exhaust heat to help recharge the battery in an electric car. "It's a very amazing area," Levin said, particularly since many years of prior research into TAGS materials enable researchers to understand their nature. A better understanding of these thermoelectric materials, and further improvement of them, could quickly lead to applications at a larger scale than today's.
Additionally, the Ames Laboratory results – dependent as they were on doping TAGS with small amounts of cerium or ytterbium – provide yet more evidence of rare-earth elements' strategic importance. Cerium and ytterbium are members of the group of 15 lanthanides, deemed essential to just about every new technology from consumer electronics and cell phones to hybrid car batteries and generator motors in wind turbines. The Ames Laboratory has been a leader in rare-earth research going back to the closing days of World War II. Fears of shortages of rare-earth elements have made these little-known materials a much-talked-about subject in the news lately.

More information: E.M. Levin, B.A. Cook, J.L. Harringa, S. L. Bud'ko, R. Venkatasubramanian, K. Schmidt-Rohr, "Analysis of Ce- and Yb-Doped TAGS-85 Materials with Enhanced Thermoelectric Figure of Merit," Advanced Functional Materials, 2010, in press. DOI:10.1002/adfm.201001307

Provided by Ames Laboratory