
Green Homing...


Divine light: The Dean of Gloucester Cathedral, Stephen Lake, blesses the cathedral’s solar panels after the solar-energy firm MyPower installed them in November 2016. The array of PV panels generates just over 25% of the building’s electricity. (Courtesy: MyPower)

Topics: Alternate Energy, Applied Physics, Battery, Chemistry, Economics, Solar Power

With energy bills on the rise, plenty of people are interested in ditching the fossil fuels currently used to heat most UK homes. The question is how to make it happen, as Margaret Harris explains.

Deep beneath the flagstones of the medieval Bath Abbey church, a modern marvel with an ancient twist is silently making its presence felt. Completed in March 2021, the abbey’s heating system combines underfloor pipes with heat exchangers located seven meters below the surface. There, a drain built nearly 2000 years ago carries 1.1 million liters of 40 °C water every day from a natural hot spring into a complex of ancient Roman baths.

By tapping into this flow of warm water, the system provides enough energy to heat not only the abbey but also an adjacent row of Georgian cottages used for offices. No wonder the abbey’s rector praised it as “a sustainable solution for heating our beautiful historic church.”
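
As a rough sanity check on those figures, the quoted flow and temperature can be turned into an average thermal power. A minimal sketch, assuming (my assumption, not the article's) that the heat exchangers extract a 10 °C drop from the 40 °C spring water:

```python
# Back-of-the-envelope estimate of the thermal power available from the
# Roman drain at Bath Abbey. Only the flow (1.1 million litres/day) and
# the spring temperature (40 degC) come from the article; the 10 degC
# extraction drop is an illustrative assumption.

FLOW_KG_PER_DAY = 1.1e6        # ~1 litre of water is ~1 kg
SPECIFIC_HEAT = 4186           # J/(kg*K) for liquid water
DELTA_T = 10                   # K, assumed drop across the heat exchangers
SECONDS_PER_DAY = 86_400

energy_per_day = FLOW_KG_PER_DAY * SPECIFIC_HEAT * DELTA_T   # joules/day
avg_power_kw = energy_per_day / SECONDS_PER_DAY / 1000

print(f"Average thermal power: ~{avg_power_kw:.0f} kW")
```

Even with that modest temperature drop, roughly half a megawatt of continuous heat is available, which makes heating the abbey and the adjacent cottages entirely plausible.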

But that wasn’t all. Once efforts to decarbonize the abbey’s heating were underway, officials in the £19.4m Bath Abbey Footprint project turned their attention to the building’s electricity. Like most churches, the abbey runs from east to west, giving its roof an extensive south-facing aspect. At the UK’s northerly latitudes, such roofs are bathed in sunlight for much of the day, making them ideal for solar photovoltaic (PV) panels. Gloucester Cathedral – an hour’s drive north of Bath – has already taken advantage of this favorable orientation, becoming – in 2016 – the UK’s first major ancient cathedral to have solar panels installed on its roof.

To find out if a similar set-up might be suitable at Bath Abbey, the Footprint project worked with Ph.D. students in the University of Bath-led Centre for Doctoral Training (CDT) in New and Sustainable Photovoltaics. In a feasibility study published in Energy Science & Engineering (2022, 10, 892), the students calculated that a well-designed array of PV panels could supply 35.7% of the abbey’s electricity, plus 4.6% that could be sold back to the grid on days when a surplus was generated. The array would pay for itself within about 13 years and generate a total profit of £139,000 ± £12,000 over its 25-year lifetime.
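
The headline figures (a 13-year payback and £139,000 profit over a 25-year lifetime) imply a roughly constant annual saving that can be backed out with simple arithmetic. The sketch below ignores discounting, panel degradation, and electricity-price changes, so it is only an illustration of how the numbers fit together, not the study's actual financial model:

```python
# Back out the implied annual saving and installation cost from the
# study's headline numbers, assuming a constant annual net saving S:
#   capex = S * payback_years
#   lifetime_profit = S * lifetime_years - capex

LIFETIME_YEARS = 25
PAYBACK_YEARS = 13
LIFETIME_PROFIT = 139_000   # pounds, central estimate from the study

annual_saving = LIFETIME_PROFIT / (LIFETIME_YEARS - PAYBACK_YEARS)
capex = annual_saving * PAYBACK_YEARS

print(f"Implied annual saving: ~£{annual_saving:,.0f}")
print(f"Implied installation cost: ~£{capex:,.0f}")
```

Under these simplifying assumptions, the array would need to save the abbey on the order of £12,000 a year to match the study's figures.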

Home, green home: scientific solutions for cutting carbon and (maybe) saving money, Margaret Harris, Physics World

Read more…

Caveat Super...


A diamond anvil is used to put superconducting materials under high pressure. Credit: J. Adam Fenster/University of Rochester

Topics: Applied Physics, Condensed Matter Physics, Materials Science, Superconductors

Will a possible breakthrough for room-temperature superconducting materials hold up to scrutiny?

This week researchers claimed to have discovered a superconducting material that can shuttle electricity with no loss of energy under near-real-world conditions. But drama and controversy behind the scenes have many worried that the breakthrough may not hold up to scientific scrutiny.

“If you were to find a room-temperature, room-pressure superconductor, you’d have a completely new host of technologies that would occur—that we haven’t even begun to dream about,” says Eva Zurek, a computational chemist at the University at Buffalo, who was not involved in the new study. “This could be a real game changer if it turns out to be correct.”

Scientists have been studying superconductors for more than a century. By carrying electricity without shedding energy in the form of heat, these materials could make it possible to create incredibly efficient power lines and electronics that never overheat. Superconductors also repel magnetic fields. This property lets researchers levitate magnets over a superconducting material as a fun experiment—and it could also lead to more efficient high-speed maglev trains. Additionally, these materials could produce super strong magnets for use in wind turbines, portable magnetic resonance imaging machines, or even nuclear fusion power plants.

The only superconducting materials previously discovered require extreme conditions to function, which makes them impractical for many real-world applications. The first known superconductors had to be cooled with liquid helium to temperatures only a few degrees above absolute zero. In the 1980s, researchers found superconductivity in a category of materials called cuprates, which work at higher temperatures yet still require cooling with liquid nitrogen. Since 2015 scientists have measured room-temperature superconductive behavior in hydrogen-rich materials called hydrides, but they have to be pressed in a sophisticated viselike instrument called a diamond anvil cell until they reach a pressure of about a quarter to half of that found near the center of Earth.

The new material, called nitrogen-doped lutetium hydride, is a blend of hydrogen, the rare-earth metal lutetium, and nitrogen. Although this material also relies on a diamond anvil cell, the study found that it begins exhibiting superconductive behavior at a pressure of about 10,000 atmospheres—roughly 100 times lower than the pressures that other hydrides require. The new material is “much closer to ambient pressure than previous materials,” says David Ceperley, a condensed matter physicist at the University of Illinois at Urbana-Champaign, who was not involved in the new study. He also notes that the material remains stable when stored at a room pressure of one atmosphere. “Previous stuff was only stable at a million atmospheres, so you couldn’t really take it out of the diamond anvil” cell, he says. “The fact that it’s stable at one atmosphere of pressure also means that it’d be easier to manufacture.”
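
The pressure figures quoted here are easy to cross-check with a quick unit conversion, taking ~360 GPa as a textbook approximation for the pressure at Earth's center:

```python
# Cross-check the article's pressure comparisons.
ATM_IN_PA = 101_325   # one standard atmosphere in pascals

p_new_gpa = 10_000 * ATM_IN_PA / 1e9    # nitrogen-doped lutetium hydride
p_old_gpa = p_new_gpa * 100             # "roughly 100 times" higher, earlier hydrides
EARTH_CENTER_GPA = 360                  # approximate pressure at Earth's center

print(f"New material: ~{p_new_gpa:.1f} GPa")
print(f"Earlier hydrides: ~{p_old_gpa:.0f} GPa "
      f"({p_old_gpa / EARTH_CENTER_GPA:.0%} of Earth's central pressure)")
```

That puts the earlier hydrides at roughly 100 GPa, a bit over a quarter of Earth's central pressure, consistent with the "quarter to half" figure quoted above.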

Controversy Surrounds Blockbuster Superconductivity Claim, Sophie Bushwick, Scientific American

Read more…

When Water Outpaces Silicon…


On target: Water is fanned out through a specially developed nozzle, and then a laser pulse is passed through it to create a switch. (Courtesy: Adrian Buchmann)

Topics: Applied Physics, Lasers, Materials Science, Photonics, Semiconductor Technology

A laser-controlled water-based switch that operates twice as fast as existing semiconductor switches has been developed by a trio of physicists in Germany. Adrian Buchmann, Claudius Hoberg, and Fabio Novelli at Ruhr University Bochum used an ultrashort laser pulse to create a temporary metal-like state in a jet of liquid water. This altered the transmission of terahertz pulses over timescales of just tens of femtoseconds.

With the latest semiconductor-based switches approaching fundamental upper limits on how fast they can operate, researchers are searching for faster ways of switching signals. One unexpected place to look for inspiration is the curious behavior of water under extreme conditions – like those deep within ice-giant planets or created by powerful lasers.

Molecular dynamics simulations suggest water enters a metallic state at pressures of 300 GPa and temperatures of 7000 K. While such conditions do not occur on Earth, it is possible that this state contributes to the magnetic fields of Uranus and Neptune. To study this effect closer to home, recent experiments have used powerful, ultrashort laser pulses to trigger photo-ionization in water-based solutions – creating fleeting, metal-like states.
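
A switching event lasting tens of femtoseconds corresponds to an enormous signal bandwidth. A minimal estimate, assuming a representative 50 fs switching time (the article says only "tens of femtoseconds"):

```python
# Rough bandwidth implied by a femtosecond-scale switch.
SWITCH_TIME_S = 50e-15   # assumed: "tens of femtoseconds"

bandwidth_thz = 1 / SWITCH_TIME_S / 1e12
print(f"A {SWITCH_TIME_S * 1e15:.0f} fs switch corresponds to ~{bandwidth_thz:.0f} THz")
```

For comparison, the fastest conventional transistors switch at hundreds of gigahertz, roughly two orders of magnitude slower.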

Water-based switch outpaces semiconductor devices; the research is described in APL Photonics.

Read more…

Chip Act and Wave Surfing...


Massive subsidies to help the US semiconductor industry regain its edge are unlikely to succeed unless progress is made in winning the global race of idea flow and monetization.

Topics: Applied Physics, Chemistry, Computer Science, Electrical Engineering, Semiconductor Technology

Intelligent use of subsidies for winning the global idea race is a must for gaining and regaining semiconductor edge.

The US semiconductor industry started with an invention at Bell Labs. It subsequently attained supremacy in semiconductor production through its success in making computers better and cheaper. Notably, the rise of the PC wave made Intel and Silicon Valley seem like unsinkable technology superpowers. But during the first two decades of the 21st century, America lost that edge. The USA now relies on Asia for imports of the most advanced chips. Its iconic Intel is now a couple of technology generations behind Asia’s TSMC and Samsung.

Furthermore, China’s aggressive move has added momentum to America’s despair, triggering a chip war. But why has America lost the edge? Why does it rely on TSMC and Samsung to supply the most advanced chips to power iPhones, data centers, and weapons? Is it due to Asian governments’ subsidies? Or is it due to America’s failure to understand dynamics, make prudent decisions, and manage technology and innovation?

Invention and rise and fall of US semiconductor supremacy

In 1947, Bell Labs in the USA invented a semiconductor device: the transistor. Although American companies developed prototypes of transistor radios and other consumer electronics products, they did not immediately pursue them. But American firms were very fast in using the transistor to reinvent computers by replacing the vacuum tube technology core. Due to the weight advantage, the US Air Force and NASA found transistors suitable for onboard computers. Besides, the invention of the integrated circuit by Fairchild and Texas Instruments accelerated the weight and size reduction of digital logic circuits. Consequently, the use of semiconductors in building onboard computers kept growing exponentially. Hence, by the end of the 1960s, the US had become a powerhouse in logic-circuit semiconductors. But America remained second to Japan in global production, as Japanese companies were winning the race in consumer electronics by using transistors.

US Semiconductor–from invention, supremacy to despair, Rokon Zaman, The-Waves.org

Read more…

CEM and SEI...


Panel A shows how the native SEI on Li metal is passivating to nitrogen, which means that no reactivity with Li metal is possible. Panel B shows that a proton donor like ethanol will disrupt the SEI passivation and enable Li metal to react with nitrogen species. Panel C describes three potential mechanisms through which the proton donor can disrupt the SEI passivation. Credit: Steinberg et al.

Topics: Applied Physics, Battery, Chemistry, Climate Change, Environment

Ammonia (NH3), the chemical compound made of nitrogen and hydrogen, currently has many valuable uses, for instance, serving as a crop fertilizer, purifying agent, and refrigerant gas. In recent years, scientists have been exploring its potential as an energy carrier to reduce global carbon emissions and help tackle global warming.

Ammonia is produced via the Haber-Bosch process, a carbon-producing industrial chemical reaction that converts nitrogen and hydrogen into NH3. As this process is known to contribute heavily to global carbon emissions, electrifying ammonia synthesis would benefit our planet.
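
The stoichiometry behind Haber-Bosch (N2 + 3H2 → 2NH3) shows why ammonia's carbon footprint is dominated by hydrogen production, which today comes mostly from fossil feedstocks. A quick mass balance:

```python
# Mass of hydrogen consumed per kilogram of ammonia, from the
# reaction stoichiometry N2 + 3 H2 -> 2 NH3.
M_H2 = 2.016    # g/mol
M_NH3 = 17.031  # g/mol

h2_per_kg_nh3 = (3 * M_H2) / (2 * M_NH3)   # kg H2 per kg NH3
print(f"{h2_per_kg_nh3:.3f} kg of H2 per kg of ammonia")
```

Every tonne of ammonia locks in roughly 178 kg of hydrogen; decarbonizing that hydrogen supply, or replacing the whole process with electrochemical synthesis as discussed below, is where the emissions savings lie.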

One of the most promising strategies for electrically synthesizing ammonia at ambient conditions is using lithium metal. However, some aspects of these processes, including the properties and role of lithium's passivation layer, known as the solid electrolyte interphase (SEI), remain poorly understood.

Researchers at the Massachusetts Institute of Technology (MIT), the University of California, Los Angeles (UCLA), and the California Institute of Technology have recently conducted a study closely examining the reactivity of lithium and its SEI, as this could enhance lithium-based pathways to electrically synthesize ammonia. Their observations, published in Nature Energy, were collected using a state-of-the-art imaging method known as cryogenic transmission electron microscopy.

Using cryogenic electron microscopy to study the lithium SEI during electrocatalysis, Ingrid Fadelli, Phys.org

Read more…

Caveat Emptor...


National Ignition Facility operators inspect a final optics assembly during a routine maintenance period in August. Photo credit: Lawrence Livermore National Laboratory

Topics: Alternate Energy, Applied Physics, Climate Change, Energy, Global Warming, Lasers, Nuclear Fusion

After the heady, breathtaking coverage of pop science journalism, I dove into the grim world inhabited by the Bulletin of the Atomic Scientists for their take on the first-ever fusion ignition. I can say that I wasn’t surprised. With all this publicity, it will probably garner a Nobel Prize nomination (my guess). Cool Trekkie trivia: the National Ignition Facility was the backdrop for the Enterprise's warp core in Star Trek Into Darkness.

*****

This week’s headlines have been full of reports about a “major breakthrough” in nuclear fusion technology that, many of those reports misleadingly suggested, augurs a future of abundant clean energy produced by fusion nuclear power plants. To be sure, many of those reports lightly hedged their enthusiasm by noting that (as The Guardian put it) “major hurdles” to a fusion-powered world remain.

Indeed, they do.

The fusion achievement that the US Energy Department announced this week is scientifically significant, but the significance does not relate primarily to electricity generation. Researchers at Lawrence Livermore National Laboratory’s National Ignition Facility, or NIF, focused the facility’s 192 lasers on a target containing a small capsule of deuterium–tritium fuel, compressing it and inducing what is known as ignition. In a written press release, the Energy Department described the achievement this way: “On December 5, a team at LLNL’s National Ignition Facility (NIF) conducted the first controlled fusion experiment in history to reach this [fusion ignition] milestone, also known as scientific energy breakeven, meaning it produced more energy from fusion than the laser energy used to drive it. This historic, first-of-its-kind achievement will provide the unprecedented capability to support [the National Nuclear Security Administration’s] Stockpile Stewardship Program and will provide invaluable insights into the prospects of clean fusion energy, which would be a game-changer for efforts to achieve President Biden’s goal of a net-zero carbon economy.”

Because of how the Energy Department presented the breakthrough in a news conference headlined by Energy Secretary Jennifer Granholm, news coverage has largely glossed over its implications for monitoring the country’s nuclear weapons stockpile. Instead, even many serious news outlets focused on the possibility of carbon-free, fusion-powered electricity generation—even though the NIF achievement has, at best, a distant and tangential connection to power production.

To get a balanced view of what the NIF breakthrough does and does not mean, I (John Mecklin) spoke this week with Bob Rosner, a physicist at the University of Chicago and a former director of the Argonne National Laboratory who has been a longtime member of the Bulletin’s Science and Security Board. The interview has been lightly edited and condensed for readability.

See their chat at the link below.

The Energy Department’s fusion breakthrough: It’s not really about generating electricity, John Mecklin, The Bulletin of the Atomic Scientists, Editor-in-Chief

Read more…

OPVs...


V. ALTOUNIAN/SCIENCE

Topics: Alternate Energy, Applied Physics, Chemistry, Materials Science, Solar Power

As ultrathin organic solar cells hit new efficiency records, researchers see green energy potential in surprising places.

In November 2021, while the municipal utility in Marburg, Germany, was performing scheduled maintenance on a hot water storage facility, engineers glued 18 solar panels to the outside of the main 10-meter-high cylindrical tank. It’s not the typical home for solar panels, most of which are flat, rigid silicon and glass rectangles arrayed on rooftops or in solar parks. The Marburg facility’s panels, by contrast, are ultrathin organic films made by Heliatek, a German solar company. In the past few years, Heliatek has mounted its flexible panels on the sides of office towers, the curved roofs of bus stops, and even the cylindrical shaft of an 80-meter-tall windmill. The goal: expanding solar power’s reach beyond flat land. “There is a huge market where classical photovoltaics do not work,” says Jan Birnstock, Heliatek’s chief technical officer.

Organic photovoltaics (OPVs) such as Heliatek’s are more than 10 times lighter than silicon panels and in some cases cost just half as much to produce. Some are even transparent, which has architects envisioning solar panels, not just on rooftops, but incorporated into building facades, windows, and even indoor spaces. “We want to change every building into an electricity-generating building,” Birnstock says.

Heliatek’s panels are among the few OPVs in practical use, and they convert about 9% of the energy in sunlight to electricity. But in recent years, researchers around the globe have come up with new materials and designs that, in small, lab-made prototypes, have reached efficiencies of nearly 20%, approaching silicon and alternative inorganic thin-film solar cells, such as those made from a mix of copper, indium, gallium, and selenium (CIGS). Unlike silicon crystals and CIGS, where researchers are mostly limited to the few chemical options nature gives them, OPVs allow them to tweak bonds, rearrange atoms, and mix in elements from across the periodic table. Those changes represent knobs chemists can adjust to improve their materials’ ability to absorb sunlight, conduct charges, and resist degradation. OPVs still fall short of those measures. But, “There is an enormous white space for exploration,” says Stephen Forrest, an OPV chemist at the University of Michigan, Ann Arbor.

Solar Energy Gets Flexible, Robert F. Service, Science Magazine

Read more…

Mirror, Mirror...


Various views of a 3D-printed object are captured by a single camera using a dome-shaped array of mirrors. Left: The raw image. Right: closeups of some of the individual views. (Image: Sanha Cheong, SLAC National Accelerator Laboratory)

Topics: Applied Physics, Atomic-Scale Microscopy, Materials Science, Optics

(Nanowerk News) When it goes online, the MAGIS-100 experiment at the Fermi National Accelerator Laboratory and its successors will explore the nature of gravitational waves and search for certain kinds of wavelike dark matter. But first, researchers need to figure out something pretty basic: how to get good photographs of the clouds of atoms at the heart of their experiment.

Researchers at the Department of Energy's SLAC National Accelerator Laboratory realized that task would be perhaps the ultimate exercise in ultra-low light photography.

But a SLAC team that included Stanford graduate students Sanha Cheong and Murtaza Safdari, SLAC Professor Ariel Schwartzman, and SLAC scientists Michael Kagan, Sean Gasiorowski, Maxime Vandegar, and Joseph Frisch found a simple way to do it: mirrors. By arranging mirrors in a dome-like configuration around an object, they can reflect more light towards the camera and image multiple sides of an object simultaneously.

And, the team reports in the Journal of Instrumentation ("Novel light field imaging device with an enhanced light collection for cold atom clouds"), there's an additional benefit. Because the camera now gathers views of an object taken from many different angles, the system is an example of “light-field imaging”, which captures not just the intensity of light but also which direction light rays travel. As a result, the mirror system can help researchers build a three-dimensional model of an object, such as an atom cloud.

How do you take a better image of atom clouds? Mirrors - lots of mirrors, SLAC National Accelerator Laboratory

Read more…

ARDP...


The design concept of BWXT Advanced Nuclear Reactor. BWX Technologies

Topics: Applied Physics, Alternate Energy, Climate Change, Nuclear Power

According to the US Energy Information Administration, the US uses a mixture of 60.8% fossil fuel sources to generate 2,504 billion kilowatt-hours of electricity. Nuclear's share is a paltry 18.9%. The totality of renewable sources (wind, hydropower, solar, biomass, and geothermal) is a little higher: 20.1%. This is the crux of the "Green New Deal."
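
Those percentages can be converted into absolute generation figures with simple arithmetic (1 billion kWh = 1 TWh):

```python
# Convert the quoted US generation shares into absolute figures.
TOTAL_TWH = 2_504   # billion kWh = TWh

mix = {"fossil fuels": 60.8, "nuclear": 18.9, "renewables": 20.1}  # percent

for source, pct in mix.items():
    print(f"{source:>12}: {TOTAL_TWH * pct / 100:,.0f} TWh")
print(f"Shares sum to {sum(mix.values()):.1f}%")
```

The quoted shares sum to 99.8%, with the remainder down to rounding and minor sources.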

Though I long for the cleaner, neater version of nuclear power in fusion, it's kind of hard to mimic the pressures and magnetic fields necessary to spark essentially a mini sun on the planet. I think the resistance to nuclear fission is cultural: from the atomic bomb, Oppenheimer quoting the Bhagavad-Gita at the first successful test, a classic "what have we done" trope. Popular fiction emphasizes doomsday scenarios and radioactive zombies. Honorable mention: Space: 1999, which like zombies I doubt could ever happen, but it kept my attention in my youth. There are also genuine concerns about Chernobyl (still in Ukraine), Three Mile Island, and Fukushima Daiichi that come to the public's mind.

The reason the percentages on fossil fuels are so high is that they release extreme amounts of energy to superheat water for turbines to turn magnets superfast in copper coils. That is how most of the electricity we consume is made.

France currently generates 70% of its electricity from nuclear power plants, with plans to reduce this to 50% as it mixes in renewables. This is proportional to the percentage the US already has in renewables. My only caveat is the need for an obsolescence plan for solar panels (they have to be implanted with impurities to make them conductive, and after twenty years, they could end up in a landfill near humans). Battery-powered vehicles are fine, but lithium has to be mined, mining it requires a lot of water, the indigenous peoples near the mines typically don't share in the profits, and their land and resources are spoiled.

If we truly are going to transition from fossil fuels to "cleaner energy," I think we should realize that power plant designs have improved greatly since the aforementioned disasters.

As an engineer, I always tried to follow this edict from my father: "Experience isn't the best teacher: other people's experiences are the best teacher." In short, learn from others' mistakes, and try to not repeat them. It works in other nontechnical areas of life as well.

I (fingers crossed) assume nuclear power plant design engineers follow something similar to improve on future designs for safety, and as we've been exposed to with the war in Ukraine, global energy security.

I'm proposing an "everything on the table strategy," not Pollyanna. By the way, our "carbon footprint" appears to be a boondoggle by the industries that caused our current malaise.

The U.S. Department of Energy's Advanced Reactor Demonstration Program, commonly referred to as ARDP, is designed to help our domestic nuclear industry demonstrate its advanced reactor designs on accelerated timelines. This will ultimately help us build a competitive portfolio of new U.S. reactors that offer significant improvements over today’s technology.

The advanced reactors selected for risk-reduction awards are an excellent representation of the diverse designs currently under development in the United States. They range from advanced light-water-cooled small modular reactors to new designs that use molten salts and high-temperature gases to flexibly operate at even higher temperatures and lower pressures.

All of them have the potential to compete globally once deployed. They will offer consumers more access to a reliable, clean power source that can be depended on in the near future to flexibly generate electricity, drive industrial processes, and even provide potable drinking water to communities in water-scarce locations.

5 Advanced Reactor Designs to Watch in 2030, Alice Caponiti, Deputy Assistant Secretary for Reactor Fleet and Advanced Reactor Deployment, Office of Nuclear Energy

Read more…

DUNE Detector...


The ore pass at the Sanford Underground Research Facility in South Dakota. (Courtesy of Sanford Underground Research Facility, CC BY-NC-ND 4.0.)

Topics: Applied Physics, Modern Physics, Particle Physics, Theoretical Physics

The Deep Underground Neutrino Experiment (DUNE) will be the world’s largest cryogenic particle detector. Its aim is to study the most elusive of particles: neutrinos. Teams from around the world are developing and constructing detector components that they will ship to the Sanford Underground Research Facility, commonly called Sanford Lab, in the Black Hills of South Dakota. There the detector components will be lowered more than a kilometer underground through a narrow shaft to the caverns, where they will be assembled and operated while being sheltered from the cosmic rays that constantly rain down on Earth’s surface.

For at least two decades, the detector will be exposed to the highest-intensity neutrino beam on the planet. The beam will be generated 1300 km away by a megawatt-class proton accelerator and beamline under development at Fermilab in Batavia, Illinois. A smaller detector just downstream of the beamline will measure the neutrinos at the start of their journey, thereby enabling the experiment’s precision and scientific reach.

Building a ship in a bottle for neutrino science, Anne Heavey, FERMILAB, Physics Today

Read more…

Perovskite and Maxima...


The effective mass of the electrons can be derived from the curvature around the maxima of the ARPES measurement data (image, detail). (Courtesy: HZB)

Topics: Alternate Energy, Applied Physics, Battery, Chemistry, Civilization, Climate Change

A longstanding explanation for why perovskite materials make such good solar cells has been cast into doubt thanks to new measurements. Previously, physicists ascribed the favorable optoelectronic properties of lead halide perovskites to the behavior of quasiparticles called polarons within the material’s crystal lattice. Now, however, detailed experiments at Germany’s BESSY II synchrotron revealed that no large polarons are present. The work sheds fresh light on how perovskites can be optimized for real-world applications, including light-emitting diodes, semiconductor lasers, and radiation detectors as well as solar cells.

Lead halide perovskites belong to a family of crystalline materials with an ABX3 structure, where A is cesium, methylammonium (MA), or formamidinium (FA); B is lead or tin; and X is chlorine, bromine, or iodine. They are promising candidates for thin-film solar cells and other optoelectronic devices because their tuneable bandgaps enable them to absorb light over a broad range of wavelengths in the solar spectrum. Charge carriers (electrons and holes) also diffuse through them over long distances. These excellent properties give perovskite solar cells a power conversion efficiency of more than 18%, placing them on a par with established solar-cell materials such as silicon, gallium arsenide, and cadmium telluride.
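
The link between a tuneable bandgap and the range of absorbed wavelengths is the relation E = hc/λ, often remembered as λ(nm) ≈ 1240/E(eV). A sketch with illustrative bandgap values (the compositions and numbers below are examples for the calculation, not measurements from the study):

```python
# Absorption edge implied by a given bandgap, via lambda = hc / E.
HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV*nm

# Illustrative bandgaps only; real values depend on the halide composition
bandgaps_ev = {"iodide-rich": 1.5, "mixed halide": 1.8, "bromide-rich": 2.3}

edges_nm = {name: HC_EV_NM / eg for name, eg in bandgaps_ev.items()}
for name, lam in edges_nm.items():
    print(f"{name}: absorption edge ~{lam:.0f} nm")
```

Swapping halides shifts the absorption edge across the visible and near-infrared, which is why composition tuning lets perovskites cover so much of the solar spectrum.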

Researchers are still unsure, however, exactly why charge carriers travel so well in perovskites, especially since perovskites contain far more defects than established solar-cell materials. One hypothesis is that polarons – composite particles made up of an electron surrounded by a cloud of ionic phonons, or lattice vibrations – act as screens, preventing charge carriers from interacting with the defects.

Charge-transport mystery deepens in promising solar-cell materials, Isabelle Dumé, Physics World

Read more…

Getting Back Mojo...


Artist's representation of the circular phonons. (Courtesy: Nadja Haji and Peter Baum, University Konstanz)

Topics: Applied Physics, Lasers, Magnetism, Materials Science, Phonons

When a magnetic material is bombarded with short pulses of laser light, it loses its magnetism within femtoseconds (10⁻¹⁵ seconds). The spin, or angular momentum, of the electrons in the material thus disappears almost instantly. Yet all that angular momentum cannot simply be lost. It must be conserved – somewhere.

Thanks to new ultrafast electron diffraction experiments, researchers at the University of Konstanz in Germany have now found that this “lost” angular momentum is in fact transferred from the electrons to vibrations of the material’s crystal lattice within a few hundred femtoseconds. The finding could have important implications for magnetic data storage and for developments in spintronics, a technology that exploits electron spins to process information without using much power.

In a ferromagnetic material, magnetism occurs because the magnetic moments of the material’s constituent atoms align parallel to each other. The atoms and their electrons then act as elementary electromagnets, and the magnetic fields are produced mainly by the spin of the electrons.

Because an ultrashort laser pulse can rapidly destroy this alignment, some scientists have proposed using such pulses as an off switch for magnetization, thereby enabling ultra-rapid data processing at frequencies approaching those of light. Understanding this ultrafast demagnetization process is thus crucial for developing such applications as well as for better understanding the foundations of magnetism.

Researchers find ‘lost’ angular momentum, Isabelle Dumé, Physics World

Read more…

Strain and Flow...


Topography of the two-dimensional crystal on top of the microscopically small wire indicated by dashed lines. Excitons freely move along the wire-induced dent, but cannot escape it in the perpendicular direction. (Courtesy: Florian Dirnberger)

Topics: Applied Physics, Condensed Matter Physics, Electrical Engineering

Using a technique known as strain engineering, researchers in the US and Germany have constructed an “excitonic wire” – a one-dimensional channel through which electron-hole pairs (excitons) can flow in a two-dimensional semiconductor like water through a pipe. The work could aid the development of a new generation of transistor-like devices.

In the study, a team led by Vinod Menon at the City College of New York (CCNY) Center for Discovery and Innovation and Alexey Chernikov at the Dresden University of Technology and the University of Regensburg in Germany deposited atomically thin 2D crystals of tungsten diselenide (fully encapsulated in another 2D material, hexagonal boron nitride) atop a 100 nm-thin nanowire. The presence of the nanowire created a small, elongated dent in the tungsten diselenide by slightly pulling apart the atoms in the 2D material and so inducing strain in it. According to the study’s lead authors, Florian Dirnberger and Jonas Ziegler, this dent behaves for excitons much like a pipe does for water. Once trapped inside, they explain, the excitons are bound to move along the pipe.

Strain guides the flow of excitons in 2D materials, Isabelle Dumé, Physics World

Read more…

Martian Windmills...


Artist's rendition of a future colony on Mars., e71lena via Shutterstock

Topics: Applied Physics, Energy, Mars, Space Exploration

(Inside Science) -- Mars is known for its dust storms, which can cause problems for lander equipment and block out the sun that fuels solar panels. These punishing storms, which can last for weeks, have already caused damage to equipment and even killed NASA’s Opportunity rover. But they could also be dangerous to astronauts on the ground, who would rely on solar power for oxygen, heat, and water cleansing during future missions.

Vera Schorbach, a professor of wind energy at the Hamburg University of Applied Sciences in Germany, was curious to see whether wind turbines could harness the power of these storms, filling in for solar panels on the Red Planet during times of need.

"I asked myself, 'Why don't they have a wind turbine if they have dust storms,'" said Schorbach, the lead author of a study about the potential for wind power on Mars published recently in the journal Acta Astronautica.

Could martian dust storms help astronauts keep the lights on? Joshua Rapp Learn, Astronomy/Inside Science

Read more…

Time...

9817128673?profile=RESIZE_710x

GIF source: article link below

Topics: Applied Physics, Education, Research, Thermodynamics

Also note the HyperPhysics link on the Second Law of Thermodynamics, particularly "Time's Arrow."

"The two most powerful warriors are patience and time," Leo Tolstoy, War, and Peace

The short answer

We can measure time intervals — the duration between two events — most accurately with atomic clocks. These clocks produce electromagnetic radiation, such as microwaves, with a precise frequency that causes atoms in the clock to jump from one energy level to another. Cesium atoms make such quantum jumps by absorbing microwaves with a frequency of 9,192,631,770 cycles per second, which then defines the international scientific unit for time, the second.
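
The arithmetic behind that definition is worth seeing once. Here is a short sketch of the cesium second as pure counting: the numbers are the SI-defining frequency quoted above, and the rest is illustrative arithmetic, not a description of how a real atomic clock's electronics work.

```python
# Sketch: the SI second as a count of cesium hyperfine cycles.
# Real atomic clocks lock a microwave oscillator to this transition
# and count its cycles electronically; this is just the arithmetic.

CS_FREQUENCY_HZ = 9_192_631_770  # cycles per second; defines the SI second

# Duration of a single cesium cycle, in seconds
period_s = 1 / CS_FREQUENCY_HZ

# A time interval is measured by counting cycles, e.g. one millisecond:
cycles_in_1ms = CS_FREQUENCY_HZ * 1e-3

print(f"one cesium cycle lasts {period_s:.4e} s")
print(f"1 ms corresponds to {cycles_in_1ms:,.0f} cycles")
```

Counting roughly nine billion of these cycles is what "one second has elapsed" means in the SI system.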

The answer to how we measure time may seem obvious. We do so with clocks. However, when we say we’re measuring time, we are speaking loosely. Time has no physical properties to measure. What we are really measuring is time intervals, the duration separating two events.

Throughout history, people have recorded the passage of time in many ways, such as using sunrise and sunset and the phases of the moon. Clocks evolved from sundials and water wheels to more accurate pendulums and quartz crystals. Nowadays when we need to know the current time, we look at our wristwatch or the digital clock on our computer or phone. 

The digital clocks on our computers and phones get their time from atomic clocks, including the ones developed and operated by the National Institute of Standards and Technology (NIST).

How Do We Measure Time? NIST

Read more…

HETs...

9802247065?profile=RESIZE_584x

FIG. 1. Temporal evolution of chamber pressure assuming nominal operation for 30 s followed by a 40 s interval with flow rate reduced 100×. The colors correspond to 1 kW, 10 kW, 100 kW, and 1 MW power levels. The process is then repeated.

Topics: Applied Physics, Computer Modeling, NASA, Space Exploration, Spaceflight

Abstract

Hall effect thrusters operating at power levels in excess of several hundred kilowatts have been identified as enabling technologies for applications such as lunar tugs, large satellite orbital transfer vehicles, and solar system exploration. These large thrusters introduce significant testing challenges because the propellant flow rate exceeds the pumping speed available in most laboratories. Even with proposed upgrades in mind, the likelihood that multiple vacuum facilities will exist in the near future to allow long-duration testing of high-power Hall thrusters operating at power levels in excess of 100 kW remains extremely low. In this article, we numerically explore the feasibility of testing Hall thrusters in a quasi-steady mode defined by pulsing the mass flow rate between a nominal and a low value. Our simulations indicate that the sub-second durations available before the chamber reaches critical pressure are sufficiently long to achieve the steady-state current and flow field distributions, allowing us to characterize thruster performance and the near plume region.
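
The pulsed-flow idea in the abstract can be illustrated with a zero-dimensional chamber balance. This is NOT the paper's model: the chamber volume, pumping speed, and throughput values below are invented for illustration, and the pulse timing follows the 30 s / 40 s cycle of the figure caption rather than the sub-second pulses the authors simulate.

```python
# Minimal 0-D sketch of chamber pressure under a pulsed mass flow rate:
#   dP/dt = (Q_in - S * P) / V
# with the inflow pulsed between a nominal value and 1/100 of it,
# echoing the quasi-steady testing scheme. All values are assumptions.

V = 100.0      # chamber volume, m^3 (assumed)
S = 200.0      # effective pumping speed, m^3/s (assumed)
Q_NOM = 2.0    # nominal throughput, Pa*m^3/s (assumed)

dt = 0.01      # time step, s
pressure = 0.0  # Pa
history = []

for step in range(int(70 / dt)):  # one 30 s nominal + 40 s throttled cycle
    t = step * dt
    q_in = Q_NOM if t < 30.0 else Q_NOM / 100.0  # pulse the flow rate 100x down
    pressure += dt * (q_in - S * pressure) / V   # explicit Euler update
    history.append(pressure)

# Pressure climbs toward the steady limit Q_NOM/S during the pulse,
# then decays once the flow is throttled.
print(f"peak pressure ~ {max(history):.4f} Pa (steady limit {Q_NOM / S:.4f} Pa)")
```

The point of the sketch is the trade the paper exploits: the chamber fills toward its steady pressure on a timescale set by V/S, so a thruster can be characterized during the window before the critical pressure is reached.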

I. INTRODUCTION

Hall effect thrusters (HETs) are spacecraft electric propulsion (EP) devices routinely used for orbit raising, repositioning, and solar system exploration applications. To date, the highest power Hall thruster flown is the 4.5 kW BPT-4000 launched in 2010 aboard the Advanced EHF satellite1 (which the HET helped to deliver to the correct orbit after a failure of the primary chemical booster), although a 13 kW system is being readied for near-term flight operation as part of the Lunar Gateway,2 and thrusters at 50–100 kW power levels have been demonstrated in the laboratory.3,4 Solar cell advancements and a renewed interest in nuclear power have led the aerospace community to consider the use of Hall thrusters operating at even higher power levels. Multi-hundred kW EP systems would offer an economical solution for LEO to GEO orbit raising or for the deployment of an Earth-to-Moon delivery tug, and power levels in excess of 600 kW could be utilized for crewed transport to Mars.5–9 While such power levels could be delivered using existing devices, a single large thruster requires less system mass and has a smaller footprint than a cluster of smaller devices.10

Quasi-steady testing approach for high-power Hall thrusters, Lubos Brieda, Yevgeny Raitses, Edgar Choueiri, Roger Myers, Michael Keidar, Journal of Applied Physics

Read more…

Wearable Pressure Sensor...

9777787875?profile=RESIZE_710x

Hybrid device: A diagram of the layers in the new soft pressure sensor. (Courtesy: the University of Texas at Austin)

Topics: Applied Physics, Biotechnology, Nanotechnology

Wearable pressure sensors are commonly used in medicine to track vital signs, and in robotics to help mechanical fingers handle delicate objects. Conventional soft capacitive pressure sensors only work at pressures below 3 kPa, however, meaning that something as simple as tight-fitting clothing can hinder their performance. A team of researchers at the University of Texas at Austin has now made a hybrid sensor that remains highly sensitive over a much wider range of pressures. The new device could find use in robotics and biomedicine.

The most common types of pressure sensors rely on piezoresistive, piezoelectric, capacitive, and/or optical mechanisms to operate. When such devices are compressed, their electrical resistance, voltage, capacitance, or light transmittance (respectively) changes in a well-characterized way that can be translated into a pressure reading.

The high sensitivity and long-term stability of capacitive pressure sensors make them one of the most popular types, and they are often incorporated into soft, flexible sensors that can be wrapped around curved surfaces. Such sensors are popular in fields such as prosthetics, robotics, and biometrics, where they are used to calibrate the strength of a robot’s grip, monitor pulse rates and blood pressure, and measure footstep pressure. However, these different applications involve a relatively wide range of pressures: below 1 kPa for robotic electronic skin (e-skin) and pulse monitoring; between 1 and 10 kPa for manipulating objects; and more than 10 kPa for blood pressure and footstep pressure.
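
Why compression changes a capacitive reading can be seen from the textbook parallel-plate model. The sketch below is a generic illustration, not the Texas group's hybrid device: the plate area, gap, permittivity, and elastic modulus are assumed values.

```python
# Sketch of how a capacitive sensor reads pressure: a parallel-plate
# capacitor whose elastic dielectric gap is compressed by the load.
# All numerical values here are illustrative assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def gap_under_pressure(gap0_m, pressure_pa, modulus_pa=100e3):
    """Linear-elastic compression d = d0 * (1 - P/E), capped at 10% strain."""
    strain = min(pressure_pa / modulus_pa, 0.1)
    return gap0_m * (1 - strain)

c0 = capacitance(1e-6, 50e-6)                            # unloaded sensor
c1 = capacitance(1e-6, gap_under_pressure(50e-6, 3e3))   # under 3 kPa

print(f"capacitance rises by {100 * (c1 / c0 - 1):.1f}% at 3 kPa")
```

The cap on strain in the sketch also hints at the saturation problem the article describes: once the soft dielectric is fully compressed, further pressure barely changes the capacitance, which is why conventional soft sensors lose sensitivity above a few kilopascals.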

Wearable pressure sensors extend their range, Isabelle Dumé, Physics World

Read more…

E=mc^2...

9603781490?profile=RESIZE_710x

Image source: link below

Topics: Applied Physics, Einstein, General Relativity, Special Relativity

According to Einstein’s theory of special relativity, first published in 1905, light can be converted into matter when two light particles collide with intense force. But, try as they might, scientists have never been able to do this. No one could create the conditions needed to transform light into matter — until now.

Physicists claim to have generated matter from pure light for the first time — a spectacular display of Einstein’s most famous equation.

This is a significant breakthrough, overcoming a theoretical barrier that seemed impossible only a few decades ago.

What does E=mc^2 mean? The world’s most famous equation is both straightforward and beyond comprehension at the same time: “Energy equals mass times the speed of light squared.”

At its most fundamental level, it means energy and mass are various forms of the same thing. Energy may transform into mass and vice versa under the right circumstances. 

However, imagine a light beam transforming into, say, a paper clip, and it seems like pure magic. That’s where the “speed of light squared” factors in. It determines how much energy a paper clip or any piece of matter contains. The speed of light is the factor needed to make mass and energy equal. If every atom in a paper clip could be converted to pure energy, it would release the energy of 18 kilotons of TNT. That’s around the yield of the Hiroshima bomb of 1945.
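
The paper-clip figure is easy to check with E = mc^2. Assuming a roughly one-gram clip (the article doesn't give its mass), the arithmetic lands in the same ballpark as the quoted 18 kilotons:

```python
# Checking the paper-clip claim with E = m * c^2.
# The 1 g clip mass is an assumption for illustration.

C = 299_792_458.0    # speed of light, m/s
TNT_TON_J = 4.184e9  # energy of one ton of TNT, joules

mass_kg = 1e-3                 # ~1 g paper clip (assumed)
energy_j = mass_kg * C**2      # E = m c^2
kilotons = energy_j / (TNT_TON_J * 1_000)

# And the mass implied by the article's 18 kt figure:
implied_mass_g = 18_000 * TNT_TON_J / C**2 * 1_000

print(f"1 g of matter ~ {kilotons:.1f} kt of TNT")
print(f"18 kt corresponds to a clip of ~ {implied_mass_g:.2f} g")
```

A gram of matter works out to about 21 kilotons, so the article's 18 kt corresponds to a clip of a bit under a gram, which is consistent with a standard paper clip.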

(Still can’t picture it? Me neither.) 

You can go the other way, too: if you crash two highly energized light particles, or photons, into each other, you can create matter. It sounds simple enough, but no one had been able to make it happen.

Since they couldn’t accelerate light particles, the team opted for ions and used the Relativistic Heavy Ion Collider (RHIC) to accelerate them to extreme speeds. In two accelerator rings at RHIC, the team accelerated gold ions to 99.995% of the speed of light. With 79 protons, a gold ion has a strong positive charge. When a charged heavy ion is accelerated to incredible speeds, a strong magnetic field swirls around it.

That magnetic field produces “virtual photons.” So, in a roundabout way, they accelerated light particles by piggybacking them on an ion.

When the team sped the ions in the accelerator rings with significant energy, the ions nearly collided, allowing the photon clouds surrounding them to interact and form an electron-positron pair — essentially, matter. They published their work in the journal Physical Review Letters.

Scientists observed what Einstein predicted a century ago, Teresa Carey, Free Think

Read more…

Exciton Surfing...

9582548256?profile=RESIZE_710x

Surfing excitons: Cambridge’s Alexander Sneyd with the transient-absorption microscopy set-up. (Courtesy: Alexander Sneyd)

Topics: Alternate Energy, Applied Physics, Materials Science, Nanotechnology, Solar Power

Organic solar cells (OSCs) are fascinating devices where layers of organic molecules or polymers carry out light absorption and subsequent transport of energy – the tasks that make a solar cell work. Until now, the efficiency of OSCs has been thought to be constrained by the speed at which energy carriers called excitons move between localized sites in the organic material layer of the device. Now, an international team of scientists led by Akshay Rao at the UK’s University of Cambridge has shown that this is not the case. What is more, they have discovered a new quantum mechanical transport mechanism called transient delocalization, which allows OSCs to reach much higher efficiencies.

When light is absorbed by a solar cell, it creates electron-hole pairs called excitons and the motion of these excitons plays a crucial role in the operation of the device. An example of an organic material layer where light absorption and transport of excitons takes place is in a film of well-ordered poly(3-hexylthiophene) nanofibers. To study exciton transport, the team shone laser pulses at such a nanofiber film and observed its response.

Exciton wave functions were thought to be localized due to strong couplings with lattice vibrations (phonons) and electron-hole interactions. This means the excitons would move slowly from one localized site to the next. However, the team observed that the excitons were diffusing at speeds 1000 times greater than what had been shown for similar samples in previous research. These speeds correspond to a ground-breaking diffusion length of about 300 nm for such crystalline films. This means energy can be transported much faster and more efficiently than previously thought.
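
A back-of-envelope relation connects the quoted 300 nm figure to a diffusion constant via L_D = sqrt(D * tau). The diffusion length is from the article; the exciton lifetime below is an assumed illustrative value, not a number from the study.

```python
# Implied diffusion constant from a diffusion length, L_D = sqrt(D * tau).
# L_D comes from the article; tau is an assumed illustrative lifetime.

L_D = 300e-9   # exciton diffusion length, m (from the article)
tau = 300e-12  # assumed exciton lifetime, 300 ps (illustration only)

D = L_D**2 / tau  # implied diffusion constant, m^2/s
print(f"implied D ~ {D * 1e4:.1f} cm^2/s")
```

With these assumed numbers the implied diffusion constant is of order a few cm^2/s, orders of magnitude above what slow site-to-site hopping would give, which is the sense in which transient delocalization lets excitons "surf."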

Exciton ‘surfing’ could boost the efficiency of organic solar cells, Rikke Plougmann, Physics World

Read more…

Cold Atmospheric Plasmas...

9537512892?profile=RESIZE_584x

FIG. 1. Schematic of the motivation and the method for this paper.

Topics: Applied Physics, Chemistry, Physics, Plasma, Research

ABSTRACT

Cold atmospheric plasmas have great application potential due to their production of diverse types of reactive species, so understanding the production mechanism and then improving the production efficiency of the key reactive species are very important. However, plasma chemistry typically comprises a complex network of chemical species and reactions, which greatly hinders the identification of the main production/reduction reactions of the reactive species. Previous studies have identified the main reactions of some plasmas via human experience, but since plasma chemistry is sensitive to discharge conditions, which are much different for different plasmas, widespread application of the experience-dependent method is difficult. In this paper, a method based on graph theory, namely, vital nodes identification, is used for the simplification of plasma chemistry in two ways: (1) holistically identifying the main reactions for all the key reactive species and (2) extracting the main reactions relevant to one key reactive species of interest. This simplification is applied to He + air plasma as a representative, chemically complex plasma, which contains 59 species and 866 chemical reactions, as reported previously. Simplified global models are then developed with the key reactive species and main reactions, and the simulation results are compared with those of the full global model, in which all species and reactions are incorporated. It was found that this simplification reduces the number of reactions by a factor of 8–20 while providing simulation results of the simplified global models, i.e., densities of the key reactive species, which are within a factor of two of the full global model. This finding suggests that the vital nodes identification method can capture the main chemical profile from a chemically complex plasma while greatly reducing the computational load for simulation.
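
The flavor of "vital nodes identification" can be conveyed with a toy reaction network. This is NOT the paper's algorithm: below, species are simply ranked by the total rate of the reactions they participate in (a weighted-degree centrality), which is one elementary graph-theoretic proxy for importance. The species and rate values are invented.

```python
# Toy "vital nodes" ranking on an invented reaction network.
# Each reaction: (reactants, products, rate). Rates are made-up numbers.

reactions = [
    ({"e", "O2"}, {"O2-"}, 5.0),               # attachment
    ({"e", "O2"}, {"e", "O", "O"}, 3.0),       # dissociation
    ({"O", "O2"}, {"O3"}, 2.0),                # ozone formation
    ({"He*", "N2"}, {"He", "N2+", "e"}, 1.5),  # Penning ionization
    ({"O3", "NO"}, {"NO2", "O2"}, 0.2),
]

# Weighted degree: sum the rates of every reaction a species touches.
score = {}
for reactants, products, rate in reactions:
    for sp in reactants | products:
        score[sp] = score.get(sp, 0.0) + rate

ranked = sorted(score, key=score.get, reverse=True)
print("most vital nodes:", ranked[:3])
```

In a real plasma model, species ranked near the bottom of such a list (and the reactions that only involve them) are candidates for pruning, which is how a 59-species, 866-reaction chemistry can shrink by an order of magnitude while keeping the key densities within a factor of two.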

Simplification of plasma chemistry by means of vital nodes identification

Bowen Sun, Dingxin Liu, Yifan Liu, Santu Luo, Mingyan Zhang, Jishen Zhang, Aijun Yang, Xiaohua Wang, and Mingzhe Rong, Journal of Applied Physics

Read more…