applied physics (65)

Wages of the Thermal Budget...


 

Topics: Applied Physics, Astrobiology, Astrophysics, Civilization, Climate Change, Existentialism, Exoplanets, SETI, Thermodynamics

 

Well, this firmly puts a kink in the "Fermi Paradox."

 

The Industrial Revolution ran in Britain from roughly 1760 to 1840, when there was a colloquial saying that "the sun did not set on the British Empire." The former colony, America, cranked up its own industrial revolution around 1790. Mary Shelley birthed the science fiction genre with the dystopian Frankenstein in 1818, around the time of a climate-induced change in European weather and a noticeable drop in temperature. It was also a warning about the overconfidence of science, the morality that should be considered when designing new technologies, their impact on the environment, and humans who, sadly, don't think of themselves as part of the environment. The divide in sci-fi is between the dystopian and the Pollyannish: Star Trek mythology struck that delicate balance between its fictional Eugenics Wars, World War III, the "Atomic Horror," and a 21st-century dark age on one side, and the discovery of superluminal space travel and First Contact with benevolent, pointy-eared aliens on the other, leading to Utopia once xenophobia was behind us. We somehow abandoned countries and currency, and thus previous hierarchical power and inequality modalities. Roddenberry's dream was a secular version of Asgard, Heaven, Olympus, and Svarga: a notion of continuance for a species aware of its finite existence, buttressed by science and space lasers.

 

If aliens had a similar industrial revolution, they perhaps created currencies that allowed for trade and commerce, hierarchies to decide who would hoard resources, and which parts of their societies were functionally peasantry. They would separate by tribes, complexions, and perhaps stripes if they're aquatic, and fight territorial wars over resources. Those wars would throw a lot of carbon dioxide into their oxygenated atmospheres. Selfishness, hoarding disorder, and avarice would convince the aliens that the changing weather patterns were "a hoax," and they would pay the equivalent of lawyers to obfuscate the reality of their situation until it was too late to reverse the effects on their worlds. If they were colonizing the stars, it wouldn't be for the altruistic notion of expanding their knowledge by "seeking out life, and new civilizations": it would be because they had exceeded the thermal budgets of their previous planets. Changing their galactic zip codes would only change the locations of their eventual outcomes.

 

Thermodynamics wins, and Lord Kelvin may have answered Enrico Fermi's question. Far be it from me to adjudicate whether or not anyone has had a "close encounter of the third kind," but I don't see starships coming out of this scenario. Cogito ergo sum homo stultus.

 

It may take less than 1,000 years for an advanced alien civilization to destroy its own planet with climate change, even if it relies solely on renewable energy, a new model suggests.

 

When astrophysicists simulated the rise and fall of alien civilizations, they found that, if a civilization were to experience exponential technological growth and energy consumption, it would have less than 1,000 years before the alien planet got too hot to be habitable. This would be true even if the civilization used renewable energy sources, due to inevitable leakage in the form of heat, as predicted by the laws of thermodynamics. The new research was posted to the preprint database arXiv and is in the process of being peer-reviewed.

 

While the astrophysicists wanted to understand the implications for life beyond our planet, their study was initially inspired by human energy use, which has grown exponentially since the 1800s. In 2023, humans used about 180,000 terawatt hours (TWh), roughly the amount of solar energy that hits Earth in a single hour. Much of this energy is produced by gas and coal, which is heating up the planet at an unsustainable rate. But even if all that energy were created by renewable sources like wind and solar power, humanity would keep growing, and thus keep needing more energy.

 

"This brought up the question, 'Is this something that is sustainable over a long period of time?'" Manasvi Lingam, an astrophysicist at Florida Tech and a co-author of the study, told Live Science in an interview.

 

Lingam and his co-author Amedeo Balbi, an associate professor of astronomy and astrophysics at Tor Vergata University of Rome, were interested in applying the second law of thermodynamics to this problem. This law says that there is no perfect energy system, where all energy created is efficiently used; some energy must always escape the system. This escaped energy will cause a planet to heat up over time.

 

"You can think of it like a leaky bathtub," Lingam said. If a bathtub that is holding only a little water has a leak, only a small amount can get out, he explained. But as the bathtub is filled more and more — as energy levels increase exponentially to meet demand — a small leak can suddenly turn into a flooded house.

 

Alien civilizations are probably killing themselves from climate change, bleak study suggests, Sierra Bouchér, Live Science

 

Read more…

Driven to Caveat Emptor...


Meinzahn/Getty Images

Topics: Applied Physics, Atmospheric Science, Chemistry, Climate Change, Global Warming

Note: It's disheartening that geoengineering, made popular by science fiction novels and plots in Star Trek, is being considered because we're too selfish to change our behavior.

More and more climate scientists are supporting experiments to cool Earth by altering the stratosphere or the ocean.

As recently as 10 years ago most scientists I interviewed and heard speak at conferences did not support geoengineering to counteract climate change. Whether the idea was to release large amounts of sulfur dioxide into the stratosphere to “block” the sun’s heating or to spread iron across the ocean to supercharge algae that breathe in carbon dioxide, researchers resisted on principle: don’t mess with natural systems because unintended consequences could ruin Earth. They also worried that trying the techniques even at a small scale could be a slippery slope to wider deployment and that countries would use the promise of geoengineering as an excuse to keep burning carbon-emitting fossil fuels.

But today, climate scientists more openly support experimenting with these and other proposed strategies, partly because entrepreneurs and organizations are going ahead with the methods anyway—often based on little data or field trials. Scientists want to run controlled experiments to see if the methods are productive, to test consequences, and perhaps to show objectively that the approaches can cause serious problems.

“We do need to try the techniques to figure them out,” says Rob Jackson, a professor at Stanford University, chair of the international research partnership Global Carbon Project, and author of a book on climate solutions called Into the Clear Blue Sky (Scribner, 2024). “But doing research does make them more likely to happen. That is the knotty part of all this.”

As Earth’s Climate Unravels, More Scientists Are Ready to Test Geoengineering, Mark Fischetti, Scientific American

Read more…

Lasers and Plasma...


A researcher holds the scaffolding with tiny copper foils attached. These copper pieces will be struck with lasers, heating them to thousands of degrees Fahrenheit.

Credit: Hiroshi Sawada

Topics: Applied Physics, Lasers, Materials Science, Plasma, Radiation, Thermodynamics

For the first time, researchers monitor the heat progression in laser-created plasma that occurs in only a few trillionths of a second.

A team of researchers supported by the U.S. National Science Foundation has developed a new method of tracking the ultra-fast heat progression in warm, dense matter plasmas — the type of matter created when metals are struck with high-powered lasers. Published in Nature Communications, the results of this study will help researchers better understand not only how plasma forms when metal is heated by high-powered lasers but also what's happening within the cores of giant planets and even aid in the development of fast ignition laser fusion with energy-generating potential here on Earth.

The research team aimed a high-powered laser at very thin strips of copper, which heated to 200,000 degrees Fahrenheit and momentarily shifted to a warm, dense matter plasma state before exploding. At the same time, the researchers used ultrashort-duration X-ray pulses from an X-ray free-electron laser to capture images of the copper's transformation down to a few picoseconds or trillionths of a second. By doing so, the researchers were able to observe the ultra-fast and microscopic transformation of matter.

"These findings shed new light on fundamental properties of plasmas in the warm dense matter state," says Vyacheslav Lukin, NSF program director for Plasma Physics. "The new methods to probe the plasma developed by this international team of researchers may also inform future experiments at extremely high-powered lasers, such as the NSF ZEUS Laser Facility."

Researchers track plasma creation using a novel ultra-fast laser method, National Science Foundation

Read more…

Twist in Storage...


Power with a twist: Twisted ropes made from single-walled carbon nanotubes could store enough energy to power sensors within the human body while avoiding the chemical hazards associated with batteries. (Courtesy: Shigenori UTSUMI)

Topics: Applied Physics, Battery, Carbon Nanotubes, Chemistry, Materials Science, Nanoengineering

Mechanical watches and clockwork toys might seem like relics of a bygone age, but scientists in the US and Japan are bringing this old-fashioned form of energy storage into the modern era. By making single-walled carbon nanotubes (SWCNTs) into ropes and twisting them like the string on an overworked yo-yo, Katsumi Kaneko, Sanjeev Kumar Ujjain, and colleagues showed that they can store twice as much energy per unit mass as the best commercial lithium-ion batteries. The nanotube ropes are also stable at a wide range of temperatures, and the team says they could be safer than batteries for powering devices such as medical sensors.

SWCNTs are made from sheets of pure carbon just one atom thick that have been rolled into a straw-like tube. They are impressively tough – five times stiffer and 100 times stronger than steel – and earlier theoretical studies by team member David Tománek and others suggested that twisting them could be a viable means of storing large amounts of energy in a compact, lightweight system.

Twisted carbon nanotubes store more energy than lithium-ion batteries, Margaret Harris, Physics World.

Read more…

FHM...


Antiferromagnetically ordered particles are represented by red and blue spheres in this artist’s impression. The particles are in an array of optical traps. Credit: Chen Lei

Topics: Applied Physics, Computer Science, Quantum Computer, Quantum Mechanics

Experiments on the Fermi–Hubbard model can now be made much larger, more uniform, and more quantitative.

A universal quantum computer—capable of crunching the numbers of any complex problem posed to it—is still a work in progress. But for specific problems in quantum physics, there’s a more direct approach to quantum simulation: Design a system that captures the physics you want to study, and then watch what it does. One of the systems most widely studied that way is the Fermi–Hubbard model (FHM), in which spin-up and spin-down fermions can hop among discrete sites in a lattice. Originally conceived as a stripped-down description of electrons in a solid, the FHM has attracted attention for its possible connection to the mysterious physics of high-temperature superconductivity.

Stripped down though it may be, the FHM defies solution, either analytical or numerical, except in the simplest cases, so researchers have taken to studying it experimentally. In 2017, Harvard University’s Markus Greiner and colleagues made a splash when they observed antiferromagnetic order—a checkerboard pattern of up and down spins—in their FHM experiment consisting of fermionic atoms in a 2D lattice of 80 optical traps. (See Physics Today, August 2017, page 17.) The high-temperature-superconductor phase diagram has an antiferromagnetic phase near the superconducting one, so the achievement promised more exciting results to come. But the small size of the experiment limited the observations the researchers could make.

A 10 000-fold leap for a quintessential quantum simulator, Johanna L. Miller, Physics Today.

Read more…


AP Photo/Andres Kudacki

Topics: Applied Physics, Diversity in Science, Physics, Physiology

"B-boys and B-girls wield physics to pull off gravity-defying dance moves."

Okay, "gravity-defying" is a bit of hyperbole. Break dancing, as the article alludes, started in New York, and the movements can be found in martial arts like Brazilian Capoeira. It's more centrifugal force and torque, but I get that "gravity-defying" will get more clicks. I wish it didn't and the science behind it got more attention.

Two athletes square off for an intense dance battle. The DJ starts spinning tunes, and the athletes begin twisting, spinning and seemingly defying gravity, respectfully watching each other and taking turns showing off their skill.

The athletes converse through their movements, speaking through a dance that celebrates both athleticism and creativity. While the athletes probably aren’t consciously thinking about the physics behind their movements, these complex and mesmerizing dances demonstrate a variety of different scientific principles.

Breaking, also known as breakdancing, originated in the late 1970s in the New York City borough of the Bronx. Debuting as an Olympic sport in the 2024 Summer Olympics, breaking will showcase its dynamic moves on a global stage. This urban dance style combines hip-hop culture, acrobatic moves and expressive footwork.

Physics In Action: Paris 2024 Olympics To Debut High-Level Breakdancing, Amy Pope, Clemson University

Read more…

Climate CERN...


Worrying trend Reliable climate models are needed so that societies can adapt to the impact of climate change. (Courtesy: Shutterstock/Migel)

Topics: Applied Physics, Atmospheric Science, CERN, Civilization, Climate Change

It was a scorcher last year. Land and sea temperatures were up to 0.2 °C (0.36 °F) higher every single month in the second half of 2023, with these warm anomalies continuing into 2024. We know the world is warming, but the sudden heat spike had not been predicted. As NASA climate scientist Gavin Schmidt wrote in Nature recently: “It’s humbling and a bit worrying to admit that no year has confounded climate scientists’ predictive capabilities more than 2023 has.”

As Schmidt went on to explain, a spell of record-breaking warmth had been deemed “unlikely” despite 2023 being an El Niño year, where the relatively cool waters in the central and eastern equatorial Pacific Ocean are replaced with warmer waters. Trouble is, the complex interactions between atmospheric deep convection and equatorial modes of ocean variability, which lie behind El Niño, are poorly resolved in conventional climate models.

Our inability to simulate El Niño properly with current climate models (J. Climate 10.1175/JCLI-D-21-0648.1) is symptomatic of a much bigger problem. In 2011 I argued that contemporary climate models were not good enough to simulate the changing nature of weather extremes such as droughts, heat waves and floods (see “A CERN for climate change” March 2011 p13). With grid-point spacings typically around 100 km, these models provide a blurred, distorted vision of the future climate. For variables like rainfall, the systematic errors associated with such low spatial resolution are larger than the climate-change signals that the models attempt to predict.

Reliable climate models are vitally required so that societies can adapt to climate change, assess the urgency of reaching net-zero or implement geoengineering solutions if things get really bad. Yet how is it possible to adapt if we don’t know whether droughts, heat waves, storms or floods cause the greater threat? How do we assess the urgency of net-zero if models cannot simulate “tipping” points? How is it possible to agree on potential geoengineering solutions if it is not possible to reliably assess whether spraying aerosols in the stratosphere will weaken the monsoons or reduce the moisture supply to the tropical rainforests? Climate modelers have to take the issue of model inadequacy much more seriously if they wish to provide society with reliable actionable information about climate change.

I concluded in 2011 that we needed to develop global climate models with spatial resolution of around 1 km (with compatible temporal resolution) and the only way to achieve this is to pool human and computer resources to create one or more internationally federated institutes. In other words, we need a “CERN for climate change” – an effort inspired by the particle-physics facility near Geneva, which has become an emblem for international collaboration and progress.
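
For a rough sense of why 1 km resolution demands a CERN-scale pooling of resources, here is a simple scaling estimate. It is my own illustration with assumed factors, not Palmer's figures: refining the horizontal grid from 100 km to 1 km multiplies the number of grid columns by about ten thousand, and the shorter time step needed for numerical stability adds roughly another factor of a hundred.

```python
# Rough cost scaling for refining a global model from ~100 km to ~1 km grid spacing.
# A back-of-the-envelope illustration; real costs also depend on vertical levels,
# physics parameterizations, and data handling.
EARTH_SURFACE_KM2 = 5.1e8

def columns(spacing_km):
    """Approximate number of horizontal grid columns covering the globe."""
    return EARTH_SURFACE_KM2 / spacing_km**2

coarse = columns(100.0)              # ~5e4 columns at 100 km spacing
fine = columns(1.0)                  # ~5e8 columns at 1 km spacing

horizontal_factor = fine / coarse    # ~10,000x more columns
timestep_factor = 100.0              # assumed: ~100x shorter time step at 1 km (CFL-type limit)
total_factor = horizontal_factor * timestep_factor

print(f"{coarse:.1e} -> {fine:.1e} columns, roughly {total_factor:.0e}x more computation")
```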

Why we still need a CERN for climate change, Tim Palmer, Physics World

Read more…

Esse Quam Videri...


Credit: Menno Schaefer/Adobe

Starlings flock in a so-called murmuration, a collective behavior of interest in biological physics — one of many subfields that did not always “belong” in physics.

Topics: Applied Physics, Cosmology, Einstein, History, Physics, Research, Science

"To be rather than to seem." Translated from the Latin Esse Quam Videri, which also happens to be the state motto of North Carolina. It is from the treatise on Friendship by the Roman statesman Cicero, a reminder of the beauty and power of being true to oneself. Source: National Library of Medicine: Neurosurgery

If you’ve been in physics long enough, you’ve probably left a colloquium or seminar and thought to yourself, “That talk was interesting, but it wasn’t physics.”

If so, you’re one of many physicists who muse about the boundaries of their field, perhaps with colleagues over lunch. Usually, it’s all in good fun.

But what if the issue comes up when a physics faculty makes decisions about hiring or promoting individuals to build, expand, or even dismantle a research effort? The boundaries of a discipline bear directly on the opportunities departments can offer students. They also influence those students’ evolving identities as physicists, and how they think about their own professional futures and the future of physics.

So, these debates — over physics and “not physics” — are important. But they are also not new. For more than a century, physicists have been drawing and redrawing the borders around the field, embracing and rejecting subfields along the way.

A key moment for “not physics” occurred in 1899 at the second-ever meeting of the American Physical Society. In his keynote address, the APS president Henry Rowland exhorted his colleagues to “cultivate the idea of the dignity” of physics.

“Much of the intellect of the country is still wasted in the pursuit of so-called practical science which ministers to our physical needs,” he scolded, “[and] not to investigations in the pure ethereal physics which our Society is formed to cultivate.”

Rowland’s elitism was not unique — a fact that first-rate physicists working at industrial laboratories discovered at APS meetings, when no one showed interest in the results of their research on optics, acoustics, and polymer science. It should come as no surprise that, between 1915 and 1930, physicists were among the leading organizers of the Optical Society of America (now Optica), the Acoustical Society of America, and the Society of Rheology.

That acousticians were given a cold shoulder at early APS meetings is particularly odd. At the time, acoustics research was not uncommon in American physics departments. Harvard University, for example, employed five professors who worked extensively in acoustics between 1919 and 1950. World War II motivated the U.S. Navy to sponsor a great deal of acoustics research, and many physics departments responded quickly. In 1948, the University of Texas hired three acousticians as assistant professors of physics. Brown University hired six physicists between 1942 and 1952, creating an acoustics powerhouse that ultimately trained 62 physics doctoral students.

The acoustics landscape at Harvard changed abruptly in 1946, when all teaching and research in the subject moved from the physics department to the newly created department of engineering sciences and applied physics. In the years after, almost all Ph.D. acoustics programs in the country migrated from physics departments to “not physics” departments.

The reason for this was explained by Cornell University professor Robert Fehr at a 1964 conference on acoustics education. Fehr pointed out that engineers like himself exploited the fundamental knowledge of acoustics learned from physicists to alter the environment for specific applications. Consequently, it made sense that research and teaching in acoustics passed from physics to engineering.

It took less than two decades for acoustics to go from being physics to “not physics.” But other fields have gone the opposite direction — a prime example being cosmology.

Albert Einstein applied his theory of general relativity to the cosmos in 1917. However, his work generated little interest because there was no empirical data to which it applied. Edwin Hubble’s work on extragalactic nebulae appeared in 1929, but for decades, there was little else to constrain mathematical speculations about the physical nature of the universe. The theoretical physicists Freeman Dyson and Steven Weinberg have both used the phrase “not respectable” to describe how cosmology was seen by physicists around 1960. The subject was simply “not physics.”

This began to change in 1965 with the discovery of thermal microwave radiation throughout the cosmos — empirical evidence of the nearly 20-year-old Big Bang model. Physicists began to engage with cosmology, and the percentage of U.S. physics departments with at least one professor who published in the field rose from 4% in 1964 to 15% in 1980. In the 1980s, physicists led the satellite mission to study the cosmic microwave radiation, and particle physicists — realizing that the hot early universe was an ideal laboratory to test their theories — became part-time cosmologists. Today, it’s hard to find a medium-to-large sized physics department that does not list cosmology as a research specialty.

Opinion: That's Not Physics, Andrew Zangwill, APS

Read more…

When Falsification Has Lease...

Topics: Applied Physics, Civics, Materials Science, Solid-State Physics, Superconductors

I'm a person who will get Nature on my home email, my previous graduate school email (that's active because it's also on my phone), and my work email. Because it said "physics," I was primed to read it.

What I read made me clasp my hands over my mouth and periodically stare at the ceiling tiles. My forehead bumped the desk softly, symbolically, in disbelief.

Ranga Dias, the physicist at the center of the room-temperature superconductivity scandal, committed data fabrication, falsification and plagiarism, according to an investigation commissioned by his university. Nature’s news team discovered the bombshell investigation report in court documents.

The 10-month investigation, which concluded on 8 February, was carried out by an independent group of scientists recruited by the University of Rochester in New York. They examined 16 allegations against Dias and concluded that it was more likely than not that in each case, the physicist had committed scientific misconduct. The university is now attempting to fire Dias, who is a tenure-track faculty member at Rochester, before his contract expires at the end of the 2024–25 academic year.

Exclusive: official investigation reveals how superconductivity physicist faked blockbuster results

The confidential 124-page report from the University of Rochester, disclosed in a lawsuit, details the extent of Ranga Dias’s scientific misconduct. By Dan Garisto, Nature.

In a nutshell, this is the Scientific Method and how it relates to this investigation:

1. Ask a Question. It can be as simple as "Why is that the way it is?" The question suggests observation, as in, the researcher has read something, or seen something in the lab, that piqued their curiosity. It is also known as the problem the researcher hopes to solve. The problem must be clear, concise, and testable, i.e., a designed experiment is possible, or a survey to gather data can be crafted.

2. Research (n): "the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions" (Oxford languages). Here, you are "looking for the gaps" in knowledge. People are human, and due to the times and the technology available, something else about a subject may reveal itself through careful examination. The topic area is researched through credible sources, bibliographies, similar published research, textbooks from subject matter experts. Google Scholar counts; grainy YouTube videos don't.

3. The Hypothesis. This encapsulates your research in the form of an idea that can be tested by observation or experiment. The null hypothesis is a statement or claim that the researcher is trying to disprove; the alternate hypothesis is a statement or claim the researcher is trying to prove and which, with sufficient evidence, disproves the null hypothesis.

4. Design an Experiment. Design of experiments (DOE) follows a set pattern, usually drawn from statistics or, nowadays, from software packages, to evaluate input variables and judge their relationship to output variables. If it sounds like y = f(x), it is.

5. Data Analysis. "The process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data." Source: Responsible Conduct of Research, Northern Illinois University. This succinct definition is the source of my faceplanting regarding this Nature article.

6. Conclusion. R-squared, also called the coefficient of determination, relates to the data gathered. Back to the y = f(x) analogy: r-squared is the fit of the data between the independent variables (x) and the output variables (y). An r-squared of 0.90 (90%) or higher is considered a "good fit" of the data, and the experimenter can make predictions from their results (a short worked example follows after this list). Did the experimenter disprove the null hypothesis or prove the alternate hypothesis? Were both disproved? (That's called "starting over.")

7. Communication. You craft your results in a journal publication, hopefully one with a high impact factor. If your research helps others in their research ("looking for gaps"), you start seeing yourself appearing in "related research" and "citation" emails from Google Scholar. Your mailbox will fill up, as, I hope, will your self-esteem.
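
As promised above, here is a minimal Python sketch of steps 4 through 6: it fits hypothetical y = f(x) data with a straight line and computes r-squared. The data points are made up purely for illustration.

```python
# Minimal illustration of the y = f(x) fit and r-squared (coefficient of determination).
# The data points are hypothetical, for illustration only.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])   # made-up measurements

slope, intercept = np.polyfit(x, y, 1)          # least-squares fit of y = a*x + b
y_pred = slope * x + intercept

ss_res = np.sum((y - y_pred) ** 2)              # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)            # total sum of squares
r_squared = 1.0 - ss_res / ss_tot

print(f"fit: y = {slope:.2f}x + {intercept:.2f}, r-squared = {r_squared:.3f}")
# An r-squared of 0.90 or higher is the rough "good fit" threshold mentioned in step 6.
```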

Back to the faceplant:

The 124-page investigation report is a stunning account of Dias’s deceit across the two Nature papers, as well as two other now-retracted papers — one in Chemical Communications and one in Physical Review Letters (PRL). In the two Nature papers, Dias claimed to have discovered room-temperature superconductivity — zero electrical resistance at ambient temperatures — first in a compound made of carbon, sulfur and hydrogen (CSH) and then in a compound eventually found to be made of lutetium and hydrogen (LuH).

Capping years of allegations and analyses, the report methodically documents how Dias deliberately misled his co-authors, journal editors and the scientific community. A university spokesperson described the investigation as “a fair and thorough process,” which reached the correct conclusion.

When asked to surrender raw data, Dias gave "massaged" data.

"In several instances, the investigation found, Dias intentionally misled his team members and collaborators about the origins of data. Through interviews, the investigators worked out that Dias had told his partners at UNLV that measurements were taken at Rochester, but had told researchers at Rochester that they were taken at UNLV."

Dias also lied to journals. In the case of the retracted PRL paper — which was about the electrical properties of manganese disulfide (MnS2) — the journal conducted its own investigation and concluded that there was apparent fabrication and “a deliberate attempt to obstruct the investigation” by providing reviewers with manipulated data rather than raw data. The investigators commissioned by Rochester confirmed the journal’s findings that Dias had taken electrical resistance data on germanium tetraselenide from his own PhD thesis and passed these data off as coming from MnS2 — a completely different material with different properties (see ‘Odd similarity’). When questioned about this by the investigators, Dias sent them the same manipulated data that was sent to PRL.


Winners and losers

Winners - Scientific Integrity.

The investigators, and Nature, were trying to preserve the reputation of physics and the rigor of peer review. Any result from any experiment has to be replicable under similar conditions in other laboratories. Usually, when retractions are ordered, it is because that didn't happen. If I drop tablets of Alka-Seltzer in water in Brazil and do the same in Canada, I should still get "plop-plop-fizz-fizz." But the "odd similarity" graphs aren't that. The only differences between the two are 0.5 gigapascals (1 GPa = 10⁹ pascals; 1 pascal = 1 newton per square meter = 1 N/m²), the materials under test, and the colors of the graphs. Face. Plant.

Losers - The Public Trust.

"The establishment of our new government seemed to be the last **great experiment** for promoting human happiness." George Washington, January 9, 1790

As you can probably tell, I admire Carl Sagan and how he tried to popularize science communication. But Dr. Sagan, Bill Nye the Science Guy, and the canceled reality series MythBusters (which I actually LIKED) have not bridged the gap between society's obsession with spectacle and the discipline itself. Though the previously mentioned gentlemen and television show promoted "science as cool," science is still a discipline; it takes work and rigor to master subjects that are not part of casual conversations, nor can you simply "Google" them. There are late nights solving problems, and early mornings running experiments while everyone else outside of your library or lab window seems to be enjoying college life and what it can offer.

Dr. Dias is as susceptible to Maslow's Hierarchy of Needs (physical, safety, love and belonging, esteem, and self-actualization) as any of us. Some humans express this need by posting "selfies" or by having social media posts "go viral," no matter how outrageous the content or the collateral damage to the non-cyber real world. Others like to see their names in print in journals, filling their inboxes with "related research" or "citation" emails with their names attached. There is even currency now in having your research MENTIONED on social media.

*****

Mr. Halsey was the librarian at Fairview Elementary School in Winston-Salem, North Carolina. Everyone in my fifth grade class had to do a book report, but before we could do that, we had to pass Mr. Halsey's exam - with an 85% or better - on the Dewey Decimal System, and SHOW him in a practicum that we could find a book he assigned using Dewey. If you didn't pass, you didn't do the book report, and you failed English. I thankfully made a 92% and satisfied Mr. Halsey that I wouldn't get lost in the periodicals.

We now have search engines that we can use via the supercomputers in our hip pockets. A lot of effort went into knowing the math, physics, and chemistry applied to the manufacture of semiconductors for those supercomputers; instead of facilitating access to knowledge, we might have inadvertently manufactured a generation suffering from Dunning-Kruger. Networking those supercomputers over a worldwide web, coupled with artificial intelligence, gives malevolent actors inordinate power over a captive audience of 8 billion souls.

Couple this with the falsification of data having a lease in the realm of science, and it only contributes to the mistrust of institutions like the academy and like our democracy, which has been referred to since Washington as "the great experiment." If the null and the alternate hypotheses are both discarded, what, pray tell, is on the other side of what we've always known?

 

Read more…

Infinite Magazines...


Topics: Applied Physics, Atmospheric Science, Existentialism, Futurism, Lasers, Robotics, Science Fiction

"Laser" is an acronym for Light Amplification by the Stimulated Emission of Radiation. As the article alludes to, the concept existed before the actual device. We have Charles Hard Townes to thank for his work on the Maser (Microwave Amplification by the Stimulated Emission of Radiation) and the Laser. He won the Nobel Prize for his work in 1964. In a spirit of cooperation remarkable for the Cold War era, he was awarded the Nobel with two Soviet physicists, Aleksandr M. Prokhorov and Nikolay Gennadiyevich Basov. He lived from 1915 - 2015. The Doomsday Clock was only a teenager, born two years after the end of the Second World War. As it was in 2023, it is still 90 seconds to midnight. I'm not sure going "Buck Rogers" on the battlefield will dial it back from the stroke of twelve. Infrared lasers are likely going to be deployed in any future battle space, but infrared is invisible to the human eye, a weapon for which you only need a power supply and not an armory; it might appeal not only to knock drones out of the sky, but to assassins, contracted by governments who can afford such a powerful device, that will not leave a ballistic fingerprint, or depending on the laser's power: DNA evidence.

Nations around the world are rapidly developing high-energy laser weapons for military missions on land and sea, and in the air and space. Visions of swarms of small, inexpensive drones filling the skies or skimming across the waves are motivating militaries to develop and deploy laser weapons as an alternative to costly and potentially overwhelmed missile-based defenses.

Laser weapons have been a staple of science fiction since long before lasers were even invented. More recently, they have also featured prominently in some conspiracy theories. Both types of fiction highlight the need to understand how laser weapons actually work and what they are used for.

A laser uses electricity to generate photons, or light particles. The photons pass through a gain medium, a material that creates a cascade of additional photons, which rapidly increases the number of photons. All these photons are then focused into a narrow beam by a beam director.

In the decades since the first laser was unveiled in 1960, engineers have developed a variety of lasers that generate photons at different wavelengths in the electromagnetic spectrum, from infrared to ultraviolet. The high-energy laser systems that are finding military applications are based on solid-state lasers that use special crystals to convert the input electrical energy into photons. A key aspect of high-power solid-state lasers is that the photons are created in the infrared portion of the electromagnetic spectrum and so cannot be seen by the human eye.

Based in part on the progress made in high-power industrial lasers, militaries are finding an increasing number of uses for high-energy lasers. One key advantage for high-energy laser weapons is that they provide an “infinite magazine.” Unlike traditional weapons such as guns and cannons that have a finite amount of ammunition, a high-energy laser can keep firing as long as it has electrical power.
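
Some quick arithmetic, my own illustration with assumed numbers rather than anything from the article, shows why the "infinite magazine" matters: a 50-kilowatt-class laser dwelling on a target for a few seconds consumes only a fraction of a kilowatt-hour of electricity per engagement.

```python
# Back-of-the-envelope cost of one laser "shot"; dwell time and electricity price are assumed.
LASER_POWER_W = 50_000            # 50 kW class system
DWELL_TIME_S = 5.0                # assumed time on target per engagement
PRICE_USD_PER_KWH = 0.15          # assumed cost of electricity

energy_j = LASER_POWER_W * DWELL_TIME_S          # 250 kJ delivered (ignoring wall-plug losses)
energy_kwh = energy_j / 3.6e6                    # joules -> kilowatt-hours
cost_usd = energy_kwh * PRICE_USD_PER_KWH

print(f"{energy_j / 1e3:.0f} kJ per engagement, about {energy_kwh:.3f} kWh, ~${cost_usd:.2f}")
# Wall-plug efficiency would raise the electrical draw a few-fold, but the point stands:
# each "round" costs pennies, versus the costly, potentially overwhelmed missile-based
# defenses the article mentions.
```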

The U.S. Army is deploying a truck-based high-energy laser to shoot down a range of targets, including drones, helicopters, mortar shells and rockets. The 50-kilowatt laser is mounted on the Stryker infantry fighting vehicle, and the Army deployed four of the systems for battlefield testing in the Middle East in February 2024.

High-energy laser weapons: A defense expert explains how they work and what they are used for, Iain Boyd, Director, Center for National Security Initiatives, and Professor of Aerospace Engineering Sciences, University of Colorado Boulder

Read more…

PV Caveats...


 Graphical abstract. Credit: Joule (2024). DOI: 10.1016/j.joule.2024.01.025

Topics: Applied Physics, Chemistry, Energy, Green Tech, Materials Science, Photovoltaics

 

The energy transition is progressing, and photovoltaics (PV) is playing a key role in this. Enormous capacities are to be added over the next few decades. Experts expect several tens of terawatts by the middle of the century. That's 10 to 25 solar modules for every person. The boom will provide clean, green energy. But this growth also has its downsides.
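
As a quick sanity check of the "several tens of terawatts" and "10 to 25 modules per person" figures, here is some simple arithmetic; the capacity range, module rating, and population are my own assumptions, not numbers from the study.

```python
# Translating "several tens of terawatts" of PV into modules per person.
# Assumed values: 30-75 TW installed, 300 W per module, ~9.7 billion people by 2050.
CAPACITY_TW = (30, 75)
MODULE_WATTS = 300
POPULATION = 9.7e9

for tw in CAPACITY_TW:
    modules = tw * 1e12 / MODULE_WATTS
    print(f"{tw} TW -> {modules:.1e} modules, about {modules / POPULATION:.0f} per person")
```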

 

Several million tons of waste from old modules are expected by 2050—and that's just for the European market. Even if today's PV modules are designed to last as long as possible, they will end up in landfill at the end of their life, and with them some valuable materials.

 

"Circular economy recycling in photovoltaics will be crucial to avoiding waste streams on a scale roughly equivalent to today's global electronic waste," explains physicist Dr. Marius Peters from the Helmholtz Institute Erlangen-Nürnberg for Renewable Energies (HI ERN), a branch of Forschungszentrum Jülich.

 

Today's solar modules are only suitable for this to a limited extent. The reason for this is the integrated—i.e., hardly separable—structure of the modules, which is a prerequisite for their long service life. Even though recycling is mandatory in the European Union, PV modules are, therefore, difficult to reuse in a circular way.

 

The current study by Dr. Ian Marius Peters, Dr. Jens Hauch, and Prof. Christoph Brabec from HI ERN shows how important it is for the rapidly growing PV industry to recycle these materials. "Our vision is to move away from a design for eternity towards a design for the eternal cycle," says Peters. "This will make renewable energy more sustainable than any energy technology before."

 

The consequences of the PV boom: Study analyzes recycling strategies for solar modules, Forschungszentrum Juelich

 

Read more…

Plastics and Infarctions...


Plastic chokes a canal in Chennai, India. Credit: R. Satish Babu/AFP via Getty

Topics: Applied Physics, Biology, Chemistry, Environment, Medicine

People who had tiny plastic particles lodged in a key blood vessel were more likely to experience heart attack, stroke or death during a three-year study.

Plastics are just about everywhere — food packaging, tyres, clothes, water pipes. And they shed microscopic particles that end up in the environment and can be ingested or inhaled by people.

Now, the first data of their kind show a link between these microplastics and human health. A study of more than 200 people undergoing surgery found that nearly 60% had microplastics or even smaller nanoplastics in a main artery. Those who did were 4.5 times more likely to experience a heart attack, a stroke, or death in the approximately 34 months after the surgery than were those whose arteries were plastic-free.

“This is a landmark trial,” says Robert Brook, a physician-scientist at Wayne State University in Detroit, Michigan, who studies the environmental effects on cardiovascular health and was not involved with the study. “This will be the launching pad for further studies across the world to corroborate, extend, and delve into the degree of the risk that micro- and nanoplastics pose.”

But Brook, other researchers and the authors themselves caution that this study, published in The New England Journal of Medicine on 6 March, does not show that the tiny pieces caused poor health. Other factors that the researchers did not study, such as socio-economic status, could be driving ill health rather than the plastics themselves, they say.

Landmark study links microplastics to serious health problems, Max Kozlov, Nature.

Read more…

Limit Shattered...


TSMC is building Two New Facilities to Accommodate 2nm Chip Production

Topics: Applied Physics, Chemistry, Electrical Engineering, Materials Science, Nanoengineering, Semiconductor Technology

 

Realize that Moore’s “law” isn’t like Newton’s Laws of Gravity or the three laws of Thermodynamics. It’s simply an observation based on experience with manufacturing silicon processors and the desire to make money from the endeavor continually.

 

As a device engineer, I had heard “7 nm, and that’s it” so often that it became colloquial folklore. TSMC has proven itself a powerhouse once again and, in our faltering geopolitical climate, made itself even more desirable to mainland China in its quest to annex the island, sadly by force if necessary.

 

Apple will be the first electronic manufacturer to receive chips built by Taiwan Semiconductor Manufacturing Company (TSMC) using a two-nanometer process. According to Korea’s DigiTimes Asia, inside sources said that Apple is "widely believed to be the initial client to utilize the process." The report noted that TSMC has been increasing its production capacity in response to “significant customer orders.” Moreover, the report added that the company has recently established a production expansion strategy aimed at producing 2nm chipsets based on the Gate-all-around (GAA) manufacturing process.

 

The GAA process, also known as gate-all-around field-effect transistor (GAA-FET) technology, defies the performance limitations of other chip manufacturing processes by allowing the transistors to carry more current while staying relatively small in size.

 

Apple to jump queue for TSMC's industry-first 2-nanometer chips: Report, Harsh Shivam, New Delhi, Business Standard.

 

Read more…

Boltwood Estimate...


Credit: Public Domain

Topics: Applied Physics, Education, History, Materials Science, Philosophy, Radiation, Research

We take for granted that Earth is very old, almost incomprehensibly so. But for much of human history, estimates of Earth’s age were scattershot at best. In February 1907, a chemist named Bertram Boltwood published a paper in the American Journal of Science detailing a novel method of dating rocks that would radically change these estimates. In mineral samples gathered from around the globe, he compared lead and uranium levels to determine the minerals’ ages. One was a bombshell: A sample of the mineral thorianite from Sri Lanka (known in Boltwood’s day as Ceylon) yielded an age of 2.2 billion years, suggesting that Earth must be at least that old as well. While Boltwood was off by more than 2 billion years (Earth is now estimated to be about 4.5 billion years old), his method undergirds one of today’s best-known radiometric dating techniques.
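
The modern form of the uranium-lead relation behind Boltwood's comparison is simple enough to sketch: if uranium decays with constant λ and all the radiogenic lead is retained, the age follows from the present-day Pb/U ratio as t = ln(1 + Pb/U)/λ. The snippet below is an illustration using the U-238 half-life; the ratio is chosen to land near 2.2 billion years and is not Boltwood's actual measurement.

```python
# Uranium-lead age from a present-day radiogenic Pb/U atomic ratio (U-238 -> Pb-206 chain).
# Illustrative only; the 0.41 ratio is chosen to reproduce roughly Boltwood's 2.2-billion-year figure.
import math

HALF_LIFE_U238_YR = 4.468e9
LAMBDA = math.log(2) / HALF_LIFE_U238_YR     # decay constant, per year

def age_from_ratio(pb_per_u):
    """Mineral age assuming all radiogenic lead has been retained since crystallization."""
    return math.log(1.0 + pb_per_u) / LAMBDA

print(f"Pb/U = 0.41 -> age = {age_from_ratio(0.41):.2e} years")   # ~2.2e9 years
```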

In the Christian world, Biblical cosmology placed Earth’s age at around 6,000 years, but fossil and geology discoveries began to upend this idea in the 1700s. In 1862, physicist William Thomson, better known as Lord Kelvin, used Earth’s supposed rate of cooling and the assumption that it had started out hot and molten to estimate that it had formed between 20 and 400 million years ago. He later whittled that down to 20-40 million years, an estimate that rankled Charles Darwin and other “natural philosophers” who believed life’s evolutionary history must be much longer. “Many philosophers are not yet willing to admit that we know enough of the constitution of the universe and of the interior of our globe to speculate with safety on its past duration,” Darwin wrote. Geologists also saw this timeframe as much too short to have shaped Earth’s many layers.

Lord Kelvin and other physicists continued studies of Earth’s heat, but a new concept — radioactivity — was about to topple these pursuits. In the 1890s, Henri Becquerel discovered radioactivity, and the Curies discovered the radioactive elements radium and polonium. Still, wrote physicist Alois F. Kovarik in a 1929 biographical sketch of Boltwood, “Radioactivity at that time was not a science as yet, but merely represented a collection of new facts which showed only little connection with each other.”

February 1907: Bertram Boltwood Estimates Earth is at Least 2.2 Billion Years Old, Tess Joosse, American Physical Society

Read more…

On-Off Superconductor...


A team of physicists has discovered a new superconducting material with unique tunability for external stimuli, promising advancements in energy-efficient computing and quantum technology. This breakthrough, achieved through advanced research techniques, enables unprecedented control over superconducting properties, potentially revolutionizing large-scale industrial applications.

Topics: Applied Physics, Materials Science, Solid-State Physics, Superconductors

Researchers used the Advanced Photon Source to verify the rare characteristics of this material, potentially paving the way for more efficient large-scale computing.

As industrial computing needs grow, the size and energy consumption of the hardware needed to keep up with those needs grows as well. A possible solution to this dilemma could be found in superconducting materials, which can reduce energy consumption exponentially. Imagine cooling a giant data center full of constantly running servers down to nearly absolute zero, enabling large-scale computation with incredible energy efficiency.

Breakthrough in Superconductivity Research

Physicists at the University of Washington and the U.S. Department of Energy’s (DOE) Argonne National Laboratory have made a discovery that could help enable this more efficient future. Researchers have found a superconducting material that is uniquely sensitive to outside stimuli, enabling the superconducting properties to be enhanced or suppressed at will. This enables new opportunities for energy-efficient switchable superconducting circuits. The paper was published in Science Advances.

Superconductivity is a quantum mechanical phase of matter in which an electrical current can flow through a material with zero resistance. This leads to perfect electronic transport efficiency. Superconductors are used in the most powerful electromagnets for advanced technologies such as magnetic resonance imaging, particle accelerators, fusion reactors, and even levitating trains. Superconductors have also found uses in quantum computing.

Scientists Discover Groundbreaking Superconductor With On-Off Switches, Argonne National Laboratory

Read more…

Fast Charger...


Significant Li plating capacity from Si anode. a, Li discharge profile in a battery of Li/graphite–Li5.5PS4.5Cl1.5 (LPSCl1.5)–LGPS–LPSCl1.5–SiG at current density 0.2 mA cm–2 at room temperature. Note that SiG was made by mixing Si and graphite in one composite layer. Inset shows the schematic illustration of stages 1–3 based on SEM and EDS mapping, which illustrate the unique Li–Si anode evolution in solid-state batteries observed experimentally in Figs. 1 and 2. b, FIB–SEM images of the SiG anode at different discharge states (i), (ii), and (iii) corresponding to points 1–3 in a, respectively. c, SEM–EDS mapping of (i), (ii), and (iii), corresponding to SEM images in b, where carbon signal (C) is derived from graphite, oxygen (O) and nitrogen (N) signals are from Li metal reaction with air and fluorine (F) is from the PTFE binder. d, Discharge profile of battery with cell construction Li-1M LiPF6 in EC/DMC–SiG. Schematics illustrate typical Si anode evolution in liquid-electrolyte batteries. e, FIB–SEM image (i) of SiG anode following discharge in the liquid-electrolyte battery shown in d; zoomed-in image (ii). Credit: Nature Materials (2024). DOI: 10.1038/s41563-023-01722-x

Topics: Applied Physics, Battery, Chemistry, Climate Change, Electrical Engineering, Mechanical Engineering

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new lithium metal battery that can be charged and discharged at least 6,000 times—more than any other pouch battery cell—and can be recharged in a matter of minutes.

The research not only describes a new way to make solid-state batteries with a lithium metal anode but also offers a new understanding of the materials used for these potentially revolutionary batteries.

The research is published in Nature Materials.

"Lithium metal anode batteries are considered the holy grail of batteries because they have ten times the capacity of commercial graphite anodes and could drastically increase the driving distance of electric vehicles," said Xin Li, Associate Professor of Materials Science at SEAS and senior author of the paper. "Our research is an important step toward more practical solid-state batteries for industrial and commercial applications."

One of the biggest challenges in the design of these batteries is the formation of dendrites on the surface of the anode. These structures grow like roots into the electrolyte and pierce the barrier separating the anode and cathode, causing the battery to short or even catch fire.

These dendrites form when lithium ions move from the cathode to the anode during charging, attaching to the surface of the anode in a process called plating. Plating on the anode creates an uneven, non-homogeneous surface, like plaque on teeth, and allows dendrites to take root. When discharged, that plaque-like coating needs to be stripped from the anode, and when plating is uneven, the stripping process can be slow and result in potholes that induce even more uneven plating in the next charge.

Solid-state battery design charges in minutes and lasts for thousands of cycles, Leah Burrows, Harvard John A. Paulson School of Engineering and Applied Sciences, Tech Xplore

Read more…

10x > Kevlar...


Scientists have developed amorphous silicon carbide, a strong and scalable material with potential uses in microchip sensors, solar cells, and space exploration. This breakthrough promises significant advancements in material science and microchip technology. An artist’s impression of amorphous silicon carbide nanostrings being tested to their tensile strength limit. Credit: Science Brush

Topics: Applied Physics, Chemistry, Materials Science, Nanomaterials, Semiconductor Technology

A new material that doesn’t just rival the strength of diamonds and graphene but boasts a yield strength ten times greater than Kevlar, renowned for its use in bulletproof vests.

Researchers at Delft University of Technology, led by assistant professor Richard Norte, have unveiled a remarkable new material with the potential to impact the world of material science: amorphous silicon carbide (a-SiC).

Beyond its exceptional strength, this material demonstrates mechanical properties crucial for vibration isolation on a microchip. Amorphous silicon carbide is particularly suitable for making ultra-sensitive microchip sensors.

The range of potential applications is vast, from ultra-sensitive microchip sensors and advanced solar cells to pioneering space exploration and DNA sequencing technologies. The advantages of this material’s strength, combined with its scalability, make it exceptionally promising.


The researchers adopted an innovative method to test this material’s tensile strength. Instead of traditional methods that might introduce inaccuracies from how the material is anchored, they turned to microchip technology. By growing the films of amorphous silicon carbide on a silicon substrate and suspending them, they leveraged the geometry of the nanostrings to induce high tensile forces. By fabricating many such structures with increasing tensile forces, they meticulously observed the point of breakage. This microchip-based approach ensures unprecedented precision and paves the way for future material testing.

Why the focus on nanostrings? “Nanostrings are fundamental building blocks, the foundation that can be used to construct more intricate suspended structures. Demonstrating high yield strength in a nanostring translates to showcasing strength in its most elemental form.”

10x Stronger Than Kevlar: Amorphous Silicon Carbide Could Revolutionize Material Science, Delft University Of Technology

Read more…

Scandium and Superconductors...


Scandium is the only known elemental superconductor to have a critical temperature in the 30 K range. This phase diagram shows the superconducting transition temperature (Tc) and crystal structure versus pressure for scandium. The measured results on all the five samples studied show consistent trends. (Courtesy: Chinese Phys. Lett. 40 107403)

Topics: Applied Physics, Chemistry, Condensed Matter Physics, Materials Science, Superconductors, Thermodynamics

Scandium remains a superconductor at temperatures above 30 K (-243.15 Celsius, -405.67 Fahrenheit), making it the first element known to superconduct at such a high temperature. The record-breaking discovery was made by researchers in China, Japan, and Canada, who subjected the element to pressures of up to 283 GPa – around 2.3 million times the atmospheric pressure at sea level.

Many materials become superconductors – that is, they conduct electricity without resistance – when cooled to low temperatures. The first superconductor to be discovered, for example, was solid mercury in 1911, and its transition temperature Tc is only a few degrees above absolute zero. Several other superconductors were discovered shortly afterward with similarly frosty values of Tc.

In the late 1950s, the Bardeen–Cooper–Schrieffer (BCS) theory explained this superconducting transition as the point at which electrons overcome their mutual electrical repulsion to form so-called “Cooper pairs” that then travel unhindered through the material. But beginning in the late 1980s, a new class of “high-temperature” superconductors emerged that could not be explained using BCS theory. These materials have Tc above the boiling point of liquid nitrogen (77 K), and they are not metals. Instead, they are insulators containing copper oxides (cuprates), and their existence suggests it might be possible to achieve superconductivity at even higher temperatures.

The search for room-temperature superconductors has been on ever since, as such materials would considerably improve the efficiency of electrical generators and transmission lines while also making common applications of superconductivity (including superconducting magnets in particle accelerators and medical devices like MRI scanners) simpler and cheaper.

Scandium breaks temperature record for elemental superconductors, Isabelle Dumé, Physics World

Read more…

Cooling Circuitry...


Illustration of a UCLA-developed solid-state thermal transistor using an electric field to control heat movement. Credit: H-Lab/UCLA

Topics: Applied Physics, Battery, Chemistry, Electrical Engineering, Energy, Thermodynamics

A new thermal transistor can control heat as precisely as an electrical transistor can control electricity.

From smartphones to supercomputers, electronics have a heat problem. Modern computer chips suffer from microscopic “hotspots” with power density levels that exceed those of rocket nozzles and even approach that of the sun’s surface. Because of this, more than half the total electricity burned at U.S. data centers isn’t used for computing but for cooling. Many promising new technologies—such as 3-D-stacked chips and renewable energy systems—are blocked from reaching their full potential by errant heat that diminishes a device’s performance, reliability, and longevity.

“Heat is very challenging to manage,” says Yongjie Hu, a physicist and mechanical engineer at the University of California, Los Angeles. “Controlling heat flow has long been a dream for physicists and engineers, yet it’s remained elusive.”

But Hu and his colleagues may have found a solution. As reported last November in Science, his team has developed a new type of transistor that can precisely control heat flow by taking advantage of the basic chemistry of atomic bonding at the single-molecule level. These “thermal transistors” will likely be a central component of future circuits and will work in tandem with electrical transistors. The novel device is already affordable, scalable, and compatible with current industrial manufacturing practices, Hu says, and it could soon be incorporated into the production of lithium-ion batteries, combustion engines, semiconductor systems (such as computer chips), and more.

Scientists Finally Invent Heat-Controlling Circuitry That Keeps Electronics Cool, Rachel Nuwer, Scientific American

Read more…

Fusion's Holy Grail...


A view of the assembled experimental JT-60SA Tokamak nuclear fusion facility outside Tokyo, Japan. JT-60SA.ORG

Topics: Applied Physics, Economics, Energy, Heliophysics, Nuclear Fusion, Quantum Mechanics

Japan and the European Union have officially inaugurated testing at the world’s largest experimental nuclear fusion plant. Located roughly 85 miles north of Tokyo, the six-story JT-60SA “tokamak” facility heats plasma to 200 million degrees Celsius (around 360 million Fahrenheit) within its circular, magnetically insulated reactor. Although JT-60SA first powered up during a test run back in October, the partner governments’ December 1 announcement marks the official start of operations at the world’s biggest fusion center, reaffirming a “long-standing cooperation in the field of fusion energy.”

The tokamak—an acronym of the Russian-language designation of “toroidal chamber with magnetic coils”—has led researchers’ push towards achieving the “Holy Grail” of sustainable green energy production for decades. Often described as a large hollow donut, a tokamak is filled with gaseous hydrogen fuel that is then spun at immense high speeds using powerful magnetic coil encasements. When all goes as planned, intense force ionizes atoms to form helium plasma, much like how the sun produces its energy.


Speaking at the inauguration event, EU energy commissioner Kadri Simson referred to the JT-60SA as “the most advanced tokamak in the world,” representing “a milestone for fusion history.”

“Fusion has the potential to become a key component for energy mix in the second half of this century,” she continued.

The world’s largest experimental tokamak nuclear fusion reactor is up and running, Andrew Paul, Popular Science.

Read more…