applied physics (60)


AP Photo/Andres Kudacki

Topics: Applied Physics, Diversity in Science, Physics, Physiology

"B-boys and B-girls wield physics to pull off gravity-defying dance moves."

Okay, "gravity-defying" is a bit of hyperbole. Break dancing, as the article alludes to, started in New York, and its movements can be found in martial arts like Brazilian capoeira. It's more centrifugal force and torque, but I get that "gravity-defying" will get more clicks. I wish it didn't, and that the science behind it got more attention.

Two athletes square off for an intense dance battle. The DJ starts spinning tunes, and the athletes begin twisting, spinning and seemingly defying gravity, respectfully watching each other and taking turns showing off their skill.

The athletes converse through their movements, speaking through a dance that celebrates both athleticism and creativity. While the athletes probably aren’t consciously thinking about the physics behind their movements, these complex and mesmerizing dances demonstrate a variety of different scientific principles.

Breaking, also known as breakdancing, originated in the late 1970s in the New York City borough of the Bronx. Debuting as an Olympic sport in the 2024 Summer Olympics, breaking will showcase its dynamic moves on a global stage. This urban dance style combines hip-hop culture, acrobatic moves and expressive footwork.
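
A quick illustration of the physics in those spins: a breaker who pulls arms and legs toward the rotation axis spins faster because angular momentum is conserved. The numbers below are rough values I assumed for illustration, not measurements from the article.

```python
# Conservation of angular momentum during a spin: pulling the limbs in
# lowers the moment of inertia, so the spin rate rises (L = I * omega).
# All values are assumed, ballpark figures for a dancer.
I_extended = 3.0       # kg·m^2, arms and legs extended (assumed)
I_tucked = 1.0         # kg·m^2, limbs pulled toward the spin axis (assumed)
omega_extended = 2.0   # revolutions per second at the start (assumed)

L = I_extended * omega_extended   # angular momentum, conserved during the spin
omega_tucked = L / I_tucked       # faster rotation once tucked

print(f"{omega_extended:.1f} rev/s -> {omega_tucked:.1f} rev/s after tucking in")
```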

Physics In Action: Paris 2024 Olympics To Debut High-Level Breakdancing, Amy Pope, Clemson University

Read more…

Climate CERN...


Worrying trend: Reliable climate models are needed so that societies can adapt to the impact of climate change. (Courtesy: Shutterstock/Migel)

Topics: Applied Physics, Atmospheric Science, CERN, Civilization, Climate Change

It was a scorcher last year. Land and sea temperatures were up to 0.2 °C (0.36 °F) higher every single month in the second half of 2023, with these warm anomalies continuing into 2024. We know the world is warming, but the sudden heat spike had not been predicted. As NASA climate scientist Gavin Schmidt wrote in Nature recently: “It’s humbling and a bit worrying to admit that no year has confounded climate scientists’ predictive capabilities more than 2023 has.”

As Schmidt went on to explain, a spell of record-breaking warmth had been deemed “unlikely” despite 2023 being an El Niño year, where the relatively cool waters in the central and eastern equatorial Pacific Ocean are replaced with warmer waters. Trouble is, the complex interactions between atmospheric deep convection and equatorial modes of ocean variability, which lie behind El Niño, are poorly resolved in conventional climate models.

Our inability to simulate El Niño properly with current climate models (J. Climate 10.1175/JCLI-D-21-0648.1) is symptomatic of a much bigger problem. In 2011 I argued that contemporary climate models were not good enough to simulate the changing nature of weather extremes such as droughts, heat waves and floods (see “A CERN for climate change” March 2011 p13). With grid-point spacings typically around 100 km, these models provide a blurred, distorted vision of the future climate. For variables like rainfall, the systematic errors associated with such low spatial resolution are larger than the climate-change signals that the models attempt to predict.

Reliable climate models are vital so that societies can adapt to climate change, assess the urgency of reaching net-zero, or implement geoengineering solutions if things get really bad. Yet how is it possible to adapt if we don’t know whether droughts, heat waves, storms, or floods pose the greater threat? How do we assess the urgency of net-zero if models cannot simulate “tipping” points? How is it possible to agree on potential geoengineering solutions if it is not possible to reliably assess whether spraying aerosols into the stratosphere will weaken the monsoons or reduce the moisture supply to the tropical rainforests? Climate modelers have to take the issue of model inadequacy much more seriously if they wish to provide society with reliable, actionable information about climate change.

I concluded in 2011 that we needed to develop global climate models with spatial resolution of around 1 km (with compatible temporal resolution) and the only way to achieve this is to pool human and computer resources to create one or more internationally federated institutes. In other words, we need a “CERN for climate change” – an effort inspired by the particle-physics facility near Geneva, which has become an emblem for international collaboration and progress.
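
To get a feel for why Palmer calls for pooled, CERN-scale resources, here is a rough scaling of what going from 100 km to 1 km grid spacing costs. The surface area and the time-step argument are textbook; the factors are back-of-envelope assumptions only.

```python
# Back-of-envelope cost of refining a global model from 100 km to 1 km
# grid spacing: columns scale with the inverse square of the spacing,
# and the stable time step shrinks roughly in proportion to the spacing.
earth_surface_km2 = 5.1e8                 # approximate surface area of Earth

cols_100km = earth_surface_km2 / 100**2   # ~5e4 grid columns
cols_1km = earth_surface_km2 / 1**2       # ~5e8 grid columns
spatial_factor = cols_1km / cols_100km    # 10,000x more columns
time_factor = 100                         # ~100x more time steps (assumed CFL-style scaling)

print(f"~{spatial_factor:,.0f}x more columns, ~{spatial_factor * time_factor:,.0f}x more work")
# -> ~10,000x more columns, ~1,000,000x more work, before extra vertical levels
```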

Why we still need a CERN for climate change, Tim Palmer, Physics World

Read more…

Esse Quam Videri...


Credit: Menno Schaefer/Adobe

Starlings flock in a so-called murmuration, a collective behavior of interest in biological physics — one of many subfields that did not always “belong” in physics.

Topics: Applied Physics, Cosmology, Einstein, History, Physics, Research, Science

"To be rather than to seem." Translated from the Latin Esse Quam Videri, which also happens to be the state motto of North Carolina. It is from the treatise on Friendship by the Roman statesman Cicero, a reminder of the beauty and power of being true to oneself. Source: National Library of Medicine: Neurosurgery

If you’ve been in physics long enough, you’ve probably left a colloquium or seminar and thought to yourself, “That talk was interesting, but it wasn’t physics.”

If so, you’re one of many physicists who muse about the boundaries of their field, perhaps with colleagues over lunch. Usually, it’s all in good fun.

But what if the issue comes up when a physics faculty makes decisions about hiring or promoting individuals to build, expand, or even dismantle a research effort? The boundaries of a discipline bear directly on the opportunities departments can offer students. They also influence those students’ evolving identities as physicists and how they think about their own professional futures and the future of physics.

So, these debates — over physics and “not physics” — are important. But they are also not new. For more than a century, physicists have been drawing and redrawing the borders around the field, embracing and rejecting subfields along the way.

A key moment for “not physics” occurred in 1899 at the second-ever meeting of the American Physical Society. In his keynote address, the APS president Henry Rowland exhorted his colleagues to “cultivate the idea of the dignity” of physics.

“Much of the intellect of the country is still wasted in the pursuit of so-called practical science which ministers to our physical needs,” he scolded, “[and] not to investigations in the pure ethereal physics which our Society is formed to cultivate.”

Rowland’s elitism was not unique — a fact that first-rate physicists working at industrial laboratories discovered at APS meetings, when no one showed interest in the results of their research on optics, acoustics, and polymer science. It should come as no surprise that, between 1915 and 1930, physicists were among the leading organizers of the Optical Society of America (now Optica), the Acoustical Society of America, and the Society of Rheology.

That acousticians were given a cold shoulder at early APS meetings is particularly odd. At the time, acoustics research was not uncommon in American physics departments. Harvard University, for example, employed five professors who worked extensively in acoustics between 1919 and 1950. World War II motivated the U.S. Navy to sponsor a great deal of acoustics research, and many physics departments responded quickly. In 1948, the University of Texas hired three acousticians as assistant professors of physics. Brown University hired six physicists between 1942 and 1952, creating an acoustics powerhouse that ultimately trained 62 physics doctoral students.

The acoustics landscape at Harvard changed abruptly in 1946, when all teaching and research in the subject moved from the physics department to the newly created department of engineering sciences and applied physics. In the years after, almost all Ph.D. acoustics programs in the country migrated from physics departments to “not physics” departments.

The reason for this was explained by Cornell University professor Robert Fehr at a 1964 conference on acoustics education. Fehr pointed out that engineers like himself exploited the fundamental knowledge of acoustics learned from physicists to alter the environment for specific applications. Consequently, it made sense that research and teaching in acoustics passed from physics to engineering.

It took less than two decades for acoustics to go from being physics to “not physics.” But other fields have gone the opposite direction — a prime example being cosmology.

Albert Einstein applied his theory of general relativity to the cosmos in 1917. However, his work generated little interest because there was no empirical data to which it applied. Edwin Hubble’s work on extragalactic nebulae appeared in 1929, but for decades, there was little else to constrain mathematical speculations about the physical nature of the universe. The theoretical physicists Freeman Dyson and Steven Weinberg have both used the phrase “not respectable” to describe how cosmology was seen by physicists around 1960. The subject was simply “not physics.”

This began to change in 1965 with the discovery of thermal microwave radiation throughout the cosmos — empirical evidence of the nearly 20-year-old Big Bang model. Physicists began to engage with cosmology, and the percentage of U.S. physics departments with at least one professor who published in the field rose from 4% in 1964 to 15% in 1980. In the 1980s, physicists led the satellite mission to study the cosmic microwave radiation, and particle physicists — realizing that the hot early universe was an ideal laboratory to test their theories — became part-time cosmologists. Today, it’s hard to find a medium-to-large sized physics department that does not list cosmology as a research specialty.

Opinion: That's Not Physics, Andrew Zangwill, APS

Read more…

When Falsification Has Lease...

Topics: Applied Physics, Civics, Materials Science, Solid-State Physics, Superconductors

I'm a person who will get Nature on my home email, my previous graduate school email (that's active because it's also on my phone), and my work email. Because it said "physics," I was primed to read it.

What I read made me clasp my hands over my mouth and periodically stare at the ceiling tiles. My forehead bumped the desk softly, symbolically, in disbelief.

Ranga Dias, the physicist at the center of the room-temperature superconductivity scandal, committed data fabrication, falsification, and plagiarism, according to an investigation commissioned by his university. Nature’s news team discovered the bombshell investigation report in court documents.

The 10-month investigation, which concluded on 8 February, was carried out by an independent group of scientists recruited by the University of Rochester in New York. They examined 16 allegations against Dias and concluded that it was more likely than not that in each case, the physicist had committed scientific misconduct. The university is now attempting to fire Dias, who is a tenure-track faculty member at Rochester, before his contract expires at the end of the 2024–25 academic year.

Exclusive: official investigation reveals how superconductivity physicist faked blockbuster results

The confidential 124-page report from the University of Rochester, disclosed in a lawsuit, details the extent of Ranga Dias’s scientific misconduct. By Dan Garisto, Nature.

In a nutshell, this is the Scientific Method and how it relates to this investigation:

1. Ask a Question. It can be as simple as "Why is that the way it is?" The question implies observation: the researcher has read or seen something in the lab that piqued their curiosity. It is also known as the problem the researcher hopes to solve. The problem must be clear, concise, and testable, i.e., a designed experiment is possible, or a survey to gather data can be crafted.

2. Research (n): "the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions" (Oxford Languages). Here, you are "looking for the gaps" in knowledge. People are human, and given the times and the technology available, something else about a subject may reveal itself through careful examination. The topic area is researched through credible sources: bibliographies, similar published research, and textbooks from subject matter experts. Google Scholar counts; grainy YouTube videos don't.

3. The Hypothesis. This encapsulates your research in the form of an idea that can be tested by observation or experiment. The null hypothesis is the statement or claim the researcher is trying to disprove; the alternate hypothesis is the statement or claim the researcher is trying to support, and with sufficient evidence, it displaces the null hypothesis.

4. Design an Experiment. Design of experiments (DOE) follows a set pattern, usually drawn from statistics or, these days, from software packages that evaluate input variables and judge their relationship to output variables. If it sounds like y = f(x), it is (see the sketch after this list).

5. Data Analysis. "The process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data." Source: Responsible Conduct of Research, Northern Illinois University. This succinct definition is the source of my faceplanting regarding this Nature article.

6. Conclusion. R-squared, also called the coefficient of determination, describes the fit to the data gathered. Back to the y = f(x) analogy: r-squared measures how well the independent variables (x) explain the output variable (y). An r-squared of 0.90 (90%) or higher is considered a "good fit" of the data, and the experimenter can make predictions from their results. Did the experimenter disprove the null hypothesis or support the alternate hypothesis? Were both disproved? (That's called "starting over.")

7. Communication. You craft your results into a journal publication, hopefully one with a high impact factor. If your research helps others in their research ("looking for gaps"), you start seeing yourself appearing in "related research" and "citation" emails from Google Scholar. Your mailbox will fill up, as will, I hope, your self-esteem.
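
Here is the sketch promised in items 4 and 6: a minimal y = f(x) example that fits a straight line to made-up data and reports the coefficient of determination. The data are invented purely for illustration.

```python
import numpy as np

# Minimal y = f(x) illustration of items 4 and 6: fit a line to invented
# "experimental" data and compute R-squared (coefficient of determination).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])    # input (factor) levels
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8, 11.2])   # measured responses (made up)

slope, intercept = np.polyfit(x, y, 1)          # least-squares straight line
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)               # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)            # total sum of squares
r_squared = 1.0 - ss_res / ss_tot

print(f"y ≈ {slope:.2f}x + {intercept:.2f}, R² = {r_squared:.3f}")
# An R² of 0.90 or higher is the "good fit" threshold mentioned above.
```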

Back to the faceplant:

The 124-page investigation report is a stunning account of Dias’s deceit across the two Nature papers, as well as two other now-retracted papers — one in Chemical Communications and one in Physical Review Letters (PRL). In the two Nature papers, Dias claimed to have discovered room-temperature superconductivity — zero electrical resistance at ambient temperatures — first in a compound made of carbon, sulfur and hydrogen (CSH) and then in a compound eventually found to be made of lutetium and hydrogen (LuH).

Capping years of allegations and analyses, the report methodically documents how Dias deliberately misled his co-authors, journal editors and the scientific community. A university spokesperson described the investigation as “a fair and thorough process,” which reached the correct conclusion.

When asked to surrender raw data, Dias gave "massaged" data.

"In several instances, the investigation found, Dias intentionally misled his team members and collaborators about the origins of data. Through interviews, the investigators worked out that Dias had told his partners at UNLV that measurements were taken at Rochester, but had told researchers at Rochester that they were taken at UNLV."

Dias also lied to journals. In the case of the retracted PRL paper — which was about the electrical properties of manganese disulfide (MnS2) — the journal conducted its own investigation and concluded that there was apparent fabrication and “a deliberate attempt to obstruct the investigation” by providing reviewers with manipulated data rather than raw data. The investigators commissioned by Rochester confirmed the journal’s findings that Dias had taken electrical resistance data on germanium tetraselenide from his own PhD thesis and passed these data off as coming from MnS2 — a completely different material with different properties (see ‘Odd similarity’). When questioned about this by the investigators, Dias sent them the same manipulated data that was sent to PRL.


Winners and losers

Winners - Scientific Integrity.

The investigators and Nature's news team were trying to preserve the reputation of physics and the rigor of peer review. Any result from any experiment has to be replicable under similar conditions in other laboratories. Usually, when retractions are ordered, it is because that didn't happen. If I drop tablets of Alka-Seltzer in water in Brazil, and do it again in Canada, I should still get "plop-plop-fizz-fizz." But the "odd similarity" graphs aren't that. The only differences between the two are 0.5 gigapascals (1 GPa = 10⁹ pascals; 1 pascal = 1 newton per square meter, N/m²), the materials under test, and the color of the graphs. Face. Plant.

Losers - The Public Trust.

"The establishment of our new government seemed to be the last **great experiment** for promoting human happiness." George Washington, January 9, 1790

As you can probably tell, I admire Carl Sagan and how he tried to popularize science communication. But Dr. Sagan, Bill Nye the Science Guy, and the canceled reality series MythBusters (which I actually LIKED) have not bridged the gap between society's obsession with spectacle and the reality of science as a discipline. Even though those gentlemen and that television show promoted "science as cool," it still takes work and rigor to master subjects that are not part of casual conversation, nor can you simply "Google" them. There are late nights solving problems, early mornings running experiments while everyone else outside your library or lab window seems to be enjoying college life and what it can offer.

Dr. Dias is as susceptible to Maslow's Hierarchy of Needs (physical, safety, love and belonging, esteem, and self-actualization) as any of us. Some humans express this need by posting "selfies" or social media posts that "go viral," no matter how outrageous the content or the collateral damage to the non-cyber, real world. Others like to see their names in print in journals, filling their inboxes with "related research" or "citation" emails with their names attached. There is even currency now in your research being MENTIONED on social media.

*****

Mr. Halsey was the librarian at Fairview Elementary School in Winston-Salem, North Carolina. Everyone in my fifth-grade class had to do a book report, but before we could do that, we had to pass Mr. Halsey's exam on the Dewey Decimal System - with an 85% or better - and SHOW him, in a practicum, that we could find a book he assigned using Dewey. If you didn't pass, you didn't do the book report, and you failed English. I thankfully made a 92% and satisfied Mr. Halsey that I wouldn't get lost in the periodicals.

We now have search engines that we can use via the supercomputers in our hip pockets. A lot of effort went into the math, physics, and chemistry applied to manufacturing semiconductors for those supercomputers, yet instead of facilitating access to knowledge, we may have inadvertently manufactured a generation suffering from the Dunning-Kruger effect. Networking those supercomputers over a worldwide web, coupled with artificial intelligence, gives malevolent actors inordinate power over a captive audience of 8 billion souls.

Couple this with falsification of data being given a lease in the realm of science, and it only contributes to the mistrust of institutions like the academy, and like our democracy, which has been referred to since Washington as "the great experiment." If the null and the alternate hypotheses are both discarded, what, pray tell, is on the other side of what we've always known?

 

Read more…

Infinite Magazines...


Topics: Applied Physics, Atmospheric Science, Existentialism, Futurism, Lasers, Robotics, Science Fiction

"Laser" is an acronym for Light Amplification by the Stimulated Emission of Radiation. As the article alludes to, the concept existed before the actual device. We have Charles Hard Townes to thank for his work on the Maser (Microwave Amplification by the Stimulated Emission of Radiation) and the Laser. He won the Nobel Prize for his work in 1964. In a spirit of cooperation remarkable for the Cold War era, he was awarded the Nobel with two Soviet physicists, Aleksandr M. Prokhorov and Nikolay Gennadiyevich Basov. He lived from 1915 - 2015. The Doomsday Clock was only a teenager, born two years after the end of the Second World War. As it was in 2023, it is still 90 seconds to midnight. I'm not sure going "Buck Rogers" on the battlefield will dial it back from the stroke of twelve. Infrared lasers are likely going to be deployed in any future battle space, but infrared is invisible to the human eye, a weapon for which you only need a power supply and not an armory; it might appeal not only to knock drones out of the sky, but to assassins, contracted by governments who can afford such a powerful device, that will not leave a ballistic fingerprint, or depending on the laser's power: DNA evidence.

Nations around the world are rapidly developing high-energy laser weapons for military missions on land and sea, and in the air and space. Visions of swarms of small, inexpensive drones filling the skies or skimming across the waves are motivating militaries to develop and deploy laser weapons as an alternative to costly and potentially overwhelmed missile-based defenses.

Laser weapons have been a staple of science fiction since long before lasers were even invented. More recently, they have also featured prominently in some conspiracy theories. Both types of fiction highlight the need to understand how laser weapons actually work and what they are used for.

A laser uses electricity to generate photons, or light particles. The photons pass through a gain medium, a material that creates a cascade of additional photons, which rapidly increases the number of photons. All these photons are then focused into a narrow beam by a beam director.

In the decades since the first laser was unveiled in 1960, engineers have developed a variety of lasers that generate photons at different wavelengths in the electromagnetic spectrum, from infrared to ultraviolet. The high-energy laser systems that are finding military applications are based on solid-state lasers that use special crystals to convert the input electrical energy into photons. A key aspect of high-power solid-state lasers is that the photons are created in the infrared portion of the electromagnetic spectrum and so cannot be seen by the human eye.

Based in part on the progress made in high-power industrial lasers, militaries are finding an increasing number of uses for high-energy lasers. One key advantage for high-energy laser weapons is that they provide an “infinite magazine.” Unlike traditional weapons such as guns and cannons that have a finite amount of ammunition, a high-energy laser can keep firing as long as it has electrical power.

The U.S. Army is deploying a truck-based high-energy laser to shoot down a range of targets, including drones, helicopters, mortar shells and rockets. The 50-kilowatt laser is mounted on the Stryker infantry fighting vehicle, and the Army deployed four of the systems for battlefield testing in the Middle East in February 2024.
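
For a sense of what "infinite magazine" means in practice, here is a back-of-envelope energy budget. Only the 50 kW figure comes from the article; the dwell time, the electricity price, and the neglect of wall-plug efficiency and atmospheric losses are my assumptions.

```python
# Rough "cost per shot" of a 50 kW-class laser.  Only the 50 kW figure is
# from the article; dwell time and price are assumed, and wall-plug
# efficiency and beam losses are ignored.
beam_power_w = 50e3        # 50 kW beam power
dwell_s = 3.0              # assumed seconds the beam stays on one target

delivered_j = beam_power_w * dwell_s      # energy on target
energy_kwh = delivered_j / 3.6e6          # joules -> kilowatt-hours
price_per_kwh = 0.15                      # assumed electricity price, USD

print(f"{delivered_j / 1e3:.0f} kJ on target for about ${energy_kwh * price_per_kwh:.2f}")
# -> 150 kJ on target for about a cent of electricity: the "magazine" is
#    limited only by the power supply.
```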

High-energy laser weapons: A defense expert explains how they work and what they are used for, Iain Boyd, Director, Center for National Security Initiatives, and Professor of Aerospace Engineering Sciences, University of Colorado Boulder

Read more…

PV Caveats...


 Graphical abstract. Credit: Joule (2024). DOI: 10.1016/j.joule.2024.01.025

Topics: Applied Physics, Chemistry, Energy, Green Tech, Materials Science, Photovoltaics

 

The energy transition is progressing, and photovoltaics (PV) is playing a key role in this. Enormous capacities are to be added over the next few decades. Experts expect several tens of terawatts by the middle of the century. That's 10 to 25 solar modules for every person. The boom will provide clean, green energy. But this growth also has its downsides.
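
A quick sanity check of that 10-to-25-modules-per-person figure, using an assumed module rating; only the tens-of-terawatts scale comes from the article.

```python
# Modules per person for a tens-of-terawatts PV build-out.
# The 40 TW midpoint and the 400 W module rating are assumptions.
target_capacity_w = 40e12      # assumed mid-range of "several tens of terawatts"
watts_per_module = 400.0       # assumed rating of a typical module
population = 8e9

modules = target_capacity_w / watts_per_module
print(f"~{modules / population:.0f} modules per person")   # -> ~12, inside the 10-25 range
```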

 

Several million tons of waste from old modules are expected by 2050—and that's just for the European market. Even if today's PV modules are designed to last as long as possible, they will end up in landfill at the end of their life, and with them some valuable materials.

 

"Circular economy recycling in photovoltaics will be crucial to avoiding waste streams on a scale roughly equivalent to today's global electronic waste," explains physicist Dr. Marius Peters from the Helmholtz Institute Erlangen-Nürnberg for Renewable Energies (HI ERN), a branch of Forschungszentrum Jülich.

 

Today's solar modules are only suitable for this to a limited extent. The reason for this is the integrated—i.e., hardly separable—structure of the modules, which is a prerequisite for their long service life. Even though recycling is mandatory in the European Union, PV modules are, therefore, difficult to reuse in a circular way.

 

The current study by Dr. Ian Marius Peters, Dr. Jens Hauch, and Prof. Christoph Brabec from HI ERN shows how important it is for the rapid growth of the PV industry to recycle these materials. "Our vision is to move away from a design for eternity towards a design for the eternal cycle," says Peters. "This will make renewable energy more sustainable than any energy technology before."

 

The consequences of the PV boom: Study analyzes recycling strategies for solar modules, Forschungszentrum Juelich

 

Read more…

Plastics and Infarctions...


Plastic chokes a canal in Chennai, India. Credit: R. Satish Babu/AFP via Getty

Topics: Applied Physics, Biology, Chemistry, Environment, Medicine

People who had tiny plastic particles lodged in a key blood vessel were more likely to experience heart attack, stroke or death during a three-year study.

Plastics are just about everywhere — food packaging, tyres, clothes, water pipes. And they shed microscopic particles that end up in the environment and can be ingested or inhaled by people.

Now, the first data of their kind show a link between these microplastics and human health. A study of more than 200 people undergoing surgery found that nearly 60% had microplastics or even smaller nanoplastics in a main artery. Those who did were 4.5 times more likely to experience a heart attack, a stroke, or death in the approximately 34 months after the surgery than were those whose arteries were plastic-free.

“This is a landmark trial,” says Robert Brook, a physician-scientist at Wayne State University in Detroit, Michigan, who studies the environmental effects on cardiovascular health and was not involved with the study. “This will be the launching pad for further studies across the world to corroborate, extend, and delve into the degree of the risk that micro- and nanoplastics pose.”

But Brook, other researchers and the authors themselves caution that this study, published in The New England Journal of Medicine on 6 March, does not show that the tiny pieces caused poor health. Other factors that the researchers did not study, such as socio-economic status, could be driving ill health rather than the plastics themselves, they say.

Landmark study links microplastics to serious health problems, Max Kozlov, Nature.

Read more…

Limit Shattered...


TSMC is building Two New Facilities to Accommodate 2nm Chip Production

Topics: Applied Physics, Chemistry, Electrical Engineering, Materials Science, Nanoengineering, Semiconductor Technology

 

Realize that Moore’s “law” isn’t like Newton’s law of gravity or the laws of thermodynamics. It’s simply an observation based on experience with manufacturing silicon processors and the desire to keep making money from the endeavor.
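
Treating Moore's "law" as the empirical doubling rule it is, a short sketch makes the point; the starting count and the two-year doubling period are illustrative assumptions, not industry data.

```python
# Moore's "law" as a rule of thumb: transistor count doubles roughly every
# two years.  n0, t0, and the doubling period are assumed for illustration.
def transistors(year, n0=1e9, t0=2010, doubling_years=2.0):
    return n0 * 2 ** ((year - t0) / doubling_years)

print(f"{transistors(2024):.1e} transistors projected for 2024")
# -> ~1.3e11: a tidy exponential, but an observation about manufacturing
#    economics, not a law of nature.
```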

 

As a device engineer, I had heard “7 nm, and that’s it” so often that it became colloquial folklore. TSMC has proven itself a powerhouse once again and, in our faltering geopolitical climate, made itself even more desirable to mainland China in its quest to annex the island, sadly by force if necessary.

 

Apple will be the first electronic manufacturer to receive chips built by Taiwan Semiconductor Manufacturing Company (TSMC) using a two-nanometer process. According to Korea’s DigiTimes Asia, inside sources said that Apple is "widely believed to be the initial client to utilize the process." The report noted that TSMC has been increasing its production capacity in response to “significant customer orders.” Moreover, the report added that the company has recently established a production expansion strategy aimed at producing 2nm chipsets based on the Gate-all-around (GAA) manufacturing process.

 

The GAA process, also known as gate-all-around field-effect transistor (GAA-FET) technology, defies the performance limitations of other chip manufacturing processes by allowing the transistors to carry more current while staying relatively small in size.

 

Apple to jump queue for TSMC's industry-first 2-nanometer chips: Report, Harsh Shivam, New Delhi, Business Standard.

 

Read more…

Boltwood Estimate...


Credit: Public Domain

Topics: Applied Physics, Education, History, Materials Science, Philosophy, Radiation, Research

We take for granted that Earth is very old, almost incomprehensibly so. But for much of human history, estimates of Earth’s age were scattershot at best. In February 1907, a chemist named Bertram Boltwood published a paper in the American Journal of Science detailing a novel method of dating rocks that would radically change these estimates. In mineral samples gathered from around the globe, he compared lead and uranium levels to determine the minerals’ ages. One was a bombshell: A sample of the mineral thorianite from Sri Lanka (known in Boltwood’s day as Ceylon) yielded an age of 2.2 billion years, suggesting that Earth must be at least that old as well. While Boltwood was off by more than 2 billion years (Earth is now estimated to be about 4.5 billion years old), his method undergirds one of today’s best-known radiometric dating techniques.
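
The modern descendant of Boltwood's reasoning is uranium-lead dating. A minimal sketch, assuming all the lead in a mineral came from uranium-238 decay; the ratio below is chosen to land near Boltwood's 2.2-billion-year figure, not taken from his paper.

```python
import math

# Uranium-lead age: if every lead atom came from 238U decay, then
#   N_Pb / N_U = exp(lambda * t) - 1   =>   t = ln(1 + Pb/U) / lambda
LAMBDA_U238 = 1.55125e-10      # decay constant of 238U, per year

def age_years(pb_per_u):
    """Age of a mineral from its present-day Pb/U atom ratio."""
    return math.log(1.0 + pb_per_u) / LAMBDA_U238

print(f"{age_years(0.41) / 1e9:.2f} billion years")   # -> ~2.2 Gyr for Pb/U ≈ 0.41
```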

In the Christian world, Biblical cosmology placed Earth’s age at around 6,000 years, but fossil and geology discoveries began to upend this idea in the 1700s. In 1862, physicist William Thomson, better known as Lord Kelvin, used Earth’s supposed rate of cooling and the assumption that it had started out hot and molten to estimate that it had formed between 20 and 400 million years ago. He later whittled that down to 20-40 million years, an estimate that rankled Charles Darwin and other “natural philosophers” who believed life’s evolutionary history must be much longer. “Many philosophers are not yet willing to admit that we know enough of the constitution of the universe and of the interior of our globe to speculate with safety on its past duration,” Darwin wrote. Geologists also saw this timeframe as much too short to have shaped Earth’s many layers.

Lord Kelvin and other physicists continued studies of Earth’s heat, but a new concept — radioactivity — was about to topple these pursuits. In the 1890s, Henri Becquerel discovered radioactivity, and the Curies discovered the radioactive elements radium and polonium. Still, wrote physicist Alois F. Kovarik in a 1929 biographical sketch of Boltwood, “Radioactivity at that time was not a science as yet, but merely represented a collection of new facts which showed only little connection with each other.”

February 1907: Bertram Boltwood Estimates Earth is at Least 2.2 Billion Years Old, Tess Joosse, American Physical Society

Read more…

On-Off Superconductor...


A team of physicists has discovered a new superconducting material with unique tunability for external stimuli, promising advancements in energy-efficient computing and quantum technology. This breakthrough, achieved through advanced research techniques, enables unprecedented control over superconducting properties, potentially revolutionizing large-scale industrial applications.

Topics: Applied Physics, Materials Science, Solid-State Physics, Superconductors

Researchers used the Advanced Photon Source to verify the rare characteristics of this material, potentially paving the way for more efficient large-scale computing.

As industrial computing needs grow, the size and energy consumption of the hardware needed to keep up with those needs grows as well. A possible solution to this dilemma could be found in superconducting materials, which can reduce energy consumption exponentially. Imagine cooling a giant data center full of constantly running servers down to nearly absolute zero, enabling large-scale computation with incredible energy efficiency.

Breakthrough in Superconductivity Research

Physicists at the University of Washington and the U.S. Department of Energy’s (DOE) Argonne National Laboratory have made a discovery that could help enable this more efficient future. Researchers have found a superconducting material that is uniquely sensitive to outside stimuli, enabling the superconducting properties to be enhanced or suppressed at will. This enables new opportunities for energy-efficient switchable superconducting circuits. The paper was published in Science Advances.

Superconductivity is a quantum mechanical phase of matter in which an electrical current can flow through a material with zero resistance. This leads to perfect electronic transport efficiency. Superconductors are used in the most powerful electromagnets for advanced technologies such as magnetic resonance imaging, particle accelerators, fusion reactors, and even levitating trains. Superconductors have also found uses in quantum computing.

Scientists Discover Groundbreaking Superconductor With On-Off Switches, Argonne National Laboratory

Read more…

Fast Charger...


Significant Li plating capacity from Si anode. a, Li discharge profile in a battery of Li/graphite–Li5.5PS4.5Cl1.5 (LPSCl1.5)–LGPS–LPSCl1.5–SiG at current density 0.2 mA cm–2 at room temperature. Note that SiG was made by mixing Si and graphite in one composite layer. Inset shows the schematic illustration of stages 1–3 based on SEM and EDS mapping, which illustrate the unique Li–Si anode evolution in solid-state batteries observed experimentally in Figs. 1 and 2. b, FIB–SEM images of the SiG anode at different discharge states (i), (ii), and (iii) corresponding to points 1–3 in a, respectively. c, SEM–EDS mapping of (i), (ii), and (iii), corresponding to SEM images in b, where carbon signal (C) is derived from graphite, oxygen (O) and nitrogen (N) signals are from Li metal reaction with air and fluorine (F) is from the PTFE binder. d, Discharge profile of battery with cell construction Li-1M LiPF6 in EC/DMC–SiG. Schematics illustrate typical Si anode evolution in liquid-electrolyte batteries. e, FIB–SEM image (i) of SiG anode following discharge in the liquid-electrolyte battery shown in d; zoomed-in image (ii). Credit: Nature Materials (2024). DOI: 10.1038/s41563-023-01722-x

Topics: Applied Physics, Battery, Chemistry, Climate Change, Electrical Engineering, Mechanical Engineering

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new lithium metal battery that can be charged and discharged at least 6,000 times—more than any other pouch battery cell—and can be recharged in a matter of minutes.

The research not only describes a new way to make solid-state batteries with a lithium metal anode but also offers a new understanding of the materials used for these potentially revolutionary batteries.

The research is published in Nature Materials.

"Lithium metal anode batteries are considered the holy grail of batteries because they have ten times the capacity of commercial graphite anodes and could drastically increase the driving distance of electric vehicles," said Xin Li, Associate Professor of Materials Science at SEAS and senior author of the paper. "Our research is an important step toward more practical solid-state batteries for industrial and commercial applications."

One of the biggest challenges in the design of these batteries is the formation of dendrites on the surface of the anode. These structures grow like roots into the electrolyte and pierce the barrier separating the anode and cathode, causing the battery to short or even catch fire.

These dendrites form when lithium ions move from the cathode to the anode during charging, attaching to the surface of the anode in a process called plating. Plating on the anode creates an uneven, non-homogeneous surface, like plaque on teeth, and allows dendrites to take root. When discharged, that plaque-like coating needs to be stripped from the anode, and when plating is uneven, the stripping process can be slow and result in potholes that induce even more uneven plating in the next charge.

Solid-state battery design charges in minutes and lasts for thousands of cycles, Leah Burrows, Harvard John A. Paulson School of Engineering and Applied Sciences, Tech Xplore

Read more…

10x > Kevlar...


Scientists have developed amorphous silicon carbide, a strong and scalable material with potential uses in microchip sensors, solar cells, and space exploration. This breakthrough promises significant advancements in material science and microchip technology. An artist’s impression of amorphous silicon carbide nanostrings tested to the limit of their tensile strength. Credit: Science Brush

Topics: Applied Physics, Chemistry, Materials Science, Nanomaterials, Semiconductor Technology

A new material doesn’t just rival the strength of diamonds and graphene; it boasts a yield strength ten times greater than that of Kevlar, renowned for its use in bulletproof vests.

Researchers at Delft University of Technology, led by assistant professor Richard Norte, have unveiled a remarkable new material with the potential to impact the world of material science: amorphous silicon carbide (a-SiC).

Beyond its exceptional strength, this material demonstrates mechanical properties crucial for vibration isolation on a microchip. Amorphous silicon carbide is particularly suitable for making ultra-sensitive microchip sensors.

The range of potential applications is vast, from ultra-sensitive microchip sensors and advanced solar cells to pioneering space exploration and DNA sequencing technologies. The advantages of this material’s strength, combined with its scalability, make it exceptionally promising.


The researchers adopted an innovative method to test this material’s tensile strength. Instead of traditional methods that might introduce inaccuracies from how the material is anchored, they turned to microchip technology. By growing the films of amorphous silicon carbide on a silicon substrate and suspending them, they leveraged the geometry of the nanostrings to induce high tensile forces. By fabricating many such structures with increasing tensile forces, they meticulously observed the point of breakage. This microchip-based approach ensures unprecedented precision and paves the way for future material testing.

Why the focus on nanostrings? “Nanostrings are fundamental building blocks, the foundation that can be used to construct more intricate suspended structures. Demonstrating high yield strength in a nanostring translates to showcasing strength in its most elemental form.”

10x Stronger Than Kevlar: Amorphous Silicon Carbide Could Revolutionize Material Science, Delft University Of Technology

Read more…

Scandium and Superconductors...


Scandium is the only known elemental superconductor to have a critical temperature in the 30 K range. This phase diagram shows the superconducting transition temperature (Tc) and crystal structure versus pressure for scandium. The measured results on all five samples studied show consistent trends. (Courtesy: Chinese Phys. Lett. 40 107403)

Topics: Applied Physics, Chemistry, Condensed Matter Physics, Materials Science, Superconductors, Thermodynamics

Scandium remains a superconductor at temperatures above 30 K (-243.15 °C, -405.67 °F), making it the first element known to superconduct at such a high temperature. The record-breaking discovery was made by researchers in China, Japan, and Canada, who subjected the element to pressures of up to 283 GPa – around 2.8 million times the atmospheric pressure at sea level.

Many materials become superconductors – that is, they conduct electricity without resistance – when cooled to low temperatures. The first superconductor to be discovered, for example, was solid mercury in 1911, and its transition temperature Tc is only a few degrees above absolute zero. Several other superconductors were discovered shortly afterward with similarly frosty values of Tc.

In the late 1950s, the Bardeen–Cooper–Schrieffer (BCS) theory explained this superconducting transition as the point at which electrons overcome their mutual electrical repulsion to form so-called “Cooper pairs” that then travel unhindered through the material. But beginning in the late 1980s, a new class of “high-temperature” superconductors emerged that could not be explained using BCS theory. These materials have Tc above the boiling point of liquid nitrogen (77 K), and they are not metals. Instead, they are insulators containing copper oxides (cuprates), and their existence suggests it might be possible to achieve superconductivity at even higher temperatures.
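
The BCS picture also hints at why conventional, phonon-mediated superconductors tend to have low transition temperatures. A minimal sketch of the weak-coupling BCS estimate, with purely illustrative parameters that are not fitted to scandium:

```python
import math

# Weak-coupling BCS estimate:  k_B*Tc ≈ 1.13 * k_B*Theta_D * exp(-1 / (N0*V)),
# so Tc is an exponentially suppressed fraction of the Debye temperature.
# Both parameters below are assumed, illustrative values.
theta_debye_k = 360.0    # assumed Debye temperature, kelvin
n0_v = 0.30              # assumed dimensionless electron-phonon coupling N(0)V

tc = 1.13 * theta_debye_k * math.exp(-1.0 / n0_v)
print(f"Tc ≈ {tc:.1f} K")   # ≈ 14.5 K: why most conventional superconductors sit far below 77 K
```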

The search for room-temperature superconductors has been on ever since, as such materials would considerably improve the efficiency of electrical generators and transmission lines while also making common applications of superconductivity (including superconducting magnets in particle accelerators and medical devices like MRI scanners) simpler and cheaper.

Scandium breaks temperature record for elemental superconductors, Isabelle Dumé, Physics World

Read more…

Cooling Circuitry...


Illustration of a UCLA-developed solid-state thermal transistor using an electric field to control heat movement. Credit: H-Lab/UCLA

Topics: Applied Physics, Battery, Chemistry, Electrical Engineering, Energy, Thermodynamics

A new thermal transistor can control heat as precisely as an electrical transistor can control electricity.

From smartphones to supercomputers, electronics have a heat problem. Modern computer chips suffer from microscopic “hotspots” with power density levels that exceed those of rocket nozzles and even approach that of the sun’s surface. Because of this, more than half the total electricity burned at U.S. data centers isn’t used for computing but for cooling. Many promising new technologies—such as 3-D-stacked chips and renewable energy systems—are blocked from reaching their full potential by errant heat that diminishes a device’s performance, reliability, and longevity.

“Heat is very challenging to manage,” says Yongjie Hu, a physicist and mechanical engineer at the University of California, Los Angeles. “Controlling heat flow has long been a dream for physicists and engineers, yet it’s remained elusive.”

But Hu and his colleagues may have found a solution. As reported last November in Science, his team has developed a new type of transistor that can precisely control heat flow by taking advantage of the basic chemistry of atomic bonding at the single-molecule level. These “thermal transistors” will likely be a central component of future circuits and will work in tandem with electrical transistors. The novel device is already affordable, scalable, and compatible with current industrial manufacturing practices, Hu says, and it could soon be incorporated into the production of lithium-ion batteries, combustion engines, semiconductor systems (such as computer chips), and more.

Scientists Finally Invent Heat-Controlling Circuitry That Keeps Electronics Cool, Rachel Nuwer, Scientific American

Read more…

Fusion's Holy Grail...


A view of the assembled experimental JT-60SA Tokamak nuclear fusion facility outside Tokyo, Japan. JT-60SA.ORG

Topics: Applied Physics, Economics, Energy, Heliophysics, Nuclear Fusion, Quantum Mechanics

Japan and the European Union have officially inaugurated testing at the world’s largest experimental nuclear fusion plant. Located roughly 85 miles north of Tokyo, the six-story JT-60SA “tokamak” facility heats plasma to 200 million degrees Celsius (around 360 million Fahrenheit) within its circular, magnetically insulated reactor. Although JT-60SA first powered up during a test run back in October, the partner governments’ December 1 announcement marks the official start of operations at the world’s biggest fusion center, reaffirming a “long-standing cooperation in the field of fusion energy.”

The tokamak—an acronym of the Russian-language designation of “toroidal chamber with magnetic coils”—has led researchers’ push towards achieving the “Holy Grail” of sustainable green energy production for decades. Often described as a large hollow donut, a tokamak is filled with gaseous hydrogen fuel that is then spun at immensely high speeds using powerful magnetic coil encasements. When all goes as planned, the intense force ionizes atoms to form helium plasma, much like how the sun produces its energy.

[Related: How a US lab created energy with fusion—again.]

Speaking at the inauguration event, EU energy commissioner Kadri Simson referred to the JT-60SA as “the most advanced tokamak in the world,” representing “a milestone for fusion history.”

“Fusion has the potential to become a key component for energy mix in the second half of this century,” she continued.

The world’s largest experimental tokamak nuclear fusion reactor is up and running, Andrew Paul, Popular Science.

Read more…

'Teleporting' Images...


High-dimensional quantum transport enabled by nonlinear detection. In our concept, information is encoded on a coherent source and overlapped with a single photon from an entangled pair in a nonlinear crystal for up-conversion by sum frequency generation, the latter acting as a nonlinear spatial mode detector. The bright source is necessary to achieve the efficiency required for nonlinear detection. Information and photons flow in opposite directions: one of Bob’s entangled photons is sent to Alice and carries no information, while a measurement on the other, in coincidence with the upconverted photon, establishes the transport of information across the quantum link. Alice need not know this information for the process to work, while the nonlinearity allows the state to be arbitrary and of unknown dimension and basis. Credit: Nature Communications (2023). DOI: 10.1038/s41467-023-43949-x

Topics: Applied Physics, Computer Science, Cryptography, Cybersecurity, Quantum Computers, Quantum Mechanics, Quantum Optics

Nature Communications published research by an international team from Wits and ICFO – The Institute of Photonic Sciences, which demonstrates the teleportation-like transport of "patterns" of light. This is the first approach that can transport images across a network without physically sending the image, and a crucial step towards realizing a quantum network for high-dimensional entangled states.

Quantum communication over long distances is integral to information security and has been demonstrated with two-dimensional states (qubits) over very long distances between satellites. This may seem enough if we compare it with its classical counterpart, i.e., sending bits that can be encoded in 1s (signal) and 0s (no signal), one at a time.

However, quantum optics allow us to increase the alphabet and to securely describe more complex systems in a single shot, such as a unique fingerprint or a face.

"Traditionally, two communicating parties physically send the information from one to the other, even in the quantum realm," says Prof. Andrew Forbes, the lead PI from Wits University.

"Now, it is possible to teleport information so that it never physically travels across the connection—a 'Star Trek' technology made real." Unfortunately, teleportation has so far only been demonstrated with three-dimensional states (imagine a three-pixel image); therefore, additional entangled photons are needed to reach higher dimensions.

'Teleporting' images across a network securely using only light, Wits University, Phys.org.
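
For readers who want the textbook baseline the article is generalizing from, here is a minimal state-vector simulation of standard single-qubit teleportation, not the high-dimensional, nonlinear-detection scheme in the paper. Everything below is a self-contained numpy sketch.

```python
import numpy as np

# Textbook single-qubit teleportation, simulated with a 3-qubit state tensor
# of shape (2, 2, 2) with axes (q0, q1, q2).  q0 holds the unknown state,
# q1 and q2 share a Bell pair; q2 belongs to "Bob".
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to axis q of the state tensor."""
    return np.moveaxis(np.tensordot(gate, state, axes=([1], [q])), 0, q)

def apply_cnot(state, control, target):
    """CNOT: keep the control=0 branch, apply X on target in the control=1 branch."""
    keep0, keep1 = state.copy(), state.copy()
    idx = [slice(None)] * 3
    idx[control] = 1; keep0[tuple(idx)] = 0      # control = 0 branch
    idx[control] = 0; keep1[tuple(idx)] = 0      # control = 1 branch
    return keep0 + apply_1q(keep1, X, target)

alpha, beta = 0.6, 0.8j                          # arbitrary normalized amplitudes
psi = np.array([alpha, beta])                    # unknown state on q0

bell = np.zeros((2, 2), dtype=complex)           # (|00> + |11>)/sqrt(2) on q1, q2
bell[0, 0] = bell[1, 1] = 1 / np.sqrt(2)

state = np.einsum('a,bc->abc', psi, bell)

state = apply_cnot(state, control=0, target=1)   # Alice: CNOT then Hadamard
state = apply_1q(state, H, 0)

probs = np.sum(np.abs(state) ** 2, axis=2).ravel()       # P(m0, m1)
outcome = np.random.choice(4, p=probs)
m0, m1 = divmod(outcome, 2)
bob = state[m0, m1, :] / np.sqrt(probs[outcome])         # Bob's collapsed qubit

if m1: bob = X @ bob                              # classical corrections
if m0: bob = Z @ bob

print(np.allclose(bob, psi))                      # True: Bob holds the original state
```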

Read more…

Funny How It's Not Aliens...


The 3D model of Menga was drawn with AutoCAD, showing the biofacies (microfacies) present in the stones. The fourth pillar, currently missing, has been added, while capstones C-2, C-3, C-4, and C-5 have been removed in order to show the interior of the monument (Lozano Rodríguez et al.). (a) Pillar P-3 with examples of biofacies (a1–a3, observed in hand specimen). (b) Orthostat O-15 with examples of biofacies (b1–b4, observed petrographically) and in hand specimen (b5). (c) Orthostat O-8 with examples observed petrographically (crossed polars) (c1, c2). (d) Orthostat O-5 with examples observed through the petrographic microscope (d1, d2). The star-shaped symbol indicates the place where a section was made for the petrographic study. Qtz: quartz (designations after Kretz).

Topics: Applied Physics, Archaeology, Dark Humor, History

Abstract

The technical and intellectual capabilities of past societies are reflected in the monuments they were able to build. Tracking the provenance of the stones utilized to build prehistoric megalithic monuments through geological studies is of utmost interest for interpreting ancient architecture as well as contributing to their protection. According to the scarce information available, most stones used in European prehistoric megaliths originate from locations near the construction sites, which would have made transport easier. The Menga dolmen (Antequera, Malaga, Spain), on the UNESCO World Heritage List since July 2016, was designed and built with stones weighing up to nearly 150 tons, making it the most colossal stone monument built in its time in Europe (c. 3800–3600 BC). Our study (based on high-resolution geological mapping as well as petrographic and stratigraphic analyses) reveals key geological and archaeological evidence that establishes the precise provenance of the massive stones used in the construction of this monument. These stones are mostly calcarenites, a poorly cemented detrital sedimentary rock comparable to those known as 'soft stones' in modern civil engineering. They were quarried from a rocky outcrop located at a distance of approximately 1 km. From this study, it can be inferred that the use of soft stone in Menga reveals the human application of new wood and stone technologies, enabling the construction of a monument of unprecedented magnitude and complexity.

The provenance of the stones in the Menga dolmen reveals one of the greatest engineering feats of the Neolithic. Scientific Reports, Nature

José Antonio Lozano Rodríguez, Leonardo García Sanjuán, Antonio M. Álvarez-Valero, Francisco Jiménez-Espejo, Jesús María Arrieta, Eugenio Fraile-Nuez, Raquel Montero Artús, Giuseppe Cultrone, Fernando Alonso Muñoz-Carballeda & Francisco Martínez-Sevilla
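
The "no aliens required" arithmetic is worth spelling out. A rough sketch of the force and manpower needed to drag a ~150-ton stone, with an assumed friction coefficient and per-person pull that are not from the paper:

```python
# Dragging a ~150-ton megalith: F = mu * m * g, then divide by an assumed
# sustained pull per person.  mu and the per-person force are guesses;
# rollers, levers, greased timber trackways, and ramps would all lower them.
mass_kg = 150e3            # ~150 metric tons (from the abstract)
g = 9.81                   # m/s^2
mu = 0.2                   # assumed effective friction on a prepared trackway

drag_force_n = mu * mass_kg * g          # ~2.9e5 N
pull_per_person_n = 300.0                # assumed sustained pull per person

print(f"~{drag_force_n / 1e3:.0f} kN, i.e. on the order of "
      f"{drag_force_n / pull_per_person_n:.0f} people pulling")
```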

Read more…

Nano Racetracks...

In this image, optical pulses (solitons) can be seen circling through conjoined optical tracks. (Image: Yuan, Bowers, Vahala, et al.) An animated gif is at the original link below.

Topics: Applied Physics, Astronomy, Electrical Engineering, Materials Science, Nanoengineering, Optics

(Nanowerk News) When we last checked in with Caltech's Kerry Vahala three years ago, his lab had recently reported the development of a new optical device called a turnkey frequency microcomb that has applications in digital communications, precision timekeeping, spectroscopy, and even astronomy.

This device, fabricated on a silicon wafer, takes input laser light of one frequency and converts it into an evenly spaced set of many distinct frequencies that form a train of pulses whose length can be as short as 100 femtoseconds (quadrillionths of a second). (The comb in the name comes from the frequencies being spaced like the teeth of a hair comb.)
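
The "comb of lines in frequency, train of pulses in time" picture is just a Fourier series, and a few lines of numpy make it concrete. The line spacing, line count, and carrier below are illustrative assumptions, not the Caltech device's parameters.

```python
import numpy as np

# Sum N equally spaced, phase-locked comb lines and look at the intensity:
# the result is a pulse train with period 1/df and pulse width ~1/(N*df).
df = 100e9                      # assumed comb line spacing, 100 GHz
N = 201                         # assumed number of comb lines
f0 = 193e12                     # assumed optical carrier (~1550 nm band)

t = np.linspace(0, 30e-12, 6000)                  # 30 ps time window
freqs = f0 + df * (np.arange(N) - N // 2)         # the comb frequencies
field = np.exp(2j * np.pi * np.outer(t, freqs)).sum(axis=1)
intensity = np.abs(field) ** 2

print(f"repetition period ~ {1e12 / df:.0f} ps, "
      f"pulse width ~ {1e15 / (N * df):.0f} fs")   # -> 10 ps period, ~50 fs pulses
print(f"peak/mean intensity ratio ~ {intensity.max() / intensity.mean():.0f}")
```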

Now Vahala, Caltech's Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics and executive officer for applied physics and materials science, along with members of his research group and the group of John Bowers at UC Santa Barbara, have made a breakthrough in the way the short pulses form in an important new material called ultra-low-loss silicon nitride (ULL nitride), a compound formed of silicon and nitrogen. The silicon nitride is prepared to be extremely pure and deposited in a thin film.

In principle, short-pulse microcomb devices made from this material would require very low power to operate. Unfortunately, short light pulses (called solitons) cannot be properly generated in this material because of a property called dispersion, which causes light or other electromagnetic waves to travel at different speeds depending on their frequency. ULL nitride has what is known as normal dispersion, and this prevents waveguides made of ULL nitride from supporting the short pulses necessary for microcomb operation.

In a paper appearing in Nature Photonics ("Soliton pulse pairs at multiple colors in normal dispersion microresonators"), the researchers discuss their development of the new microcomb, which overcomes the inherent optical limitations of ULL nitride by generating pulses in pairs. This is a significant development because ULL nitride is created with the same technology used for manufacturing computer chips. This kind of manufacturing technique means that these microcombs could one day be integrated into a wide variety of handheld devices similar in form to smartphones.

The most distinctive feature of an ordinary microcomb is a small optical loop that looks a bit like a tiny racetrack. During operation, the solitons automatically form and circulate around it.

"However, when this loop is made of ULL nitride, the dispersion destabilizes the soliton pulses," says co-author Zhiquan Yuan (MS '21), a graduate student in applied physics.

Imagine the loop as a racetrack with cars. If some cars travel faster and some travel slower, then they will spread out as they circle the track instead of staying as a tight pack. Similarly, the normal dispersion of ULL means light pulses spread out in the microcomb waveguides, and the microcomb ceases to work.

The solution devised by the team was to create multiple racetracks, pairing them up so they look a bit like a figure eight. In the middle of that '8,' the two tracks run parallel to each other with only a tiny gap between them.

Conjoined 'racetracks' make new optical devices possible, Nanowerk.

Read more…

Microlenses...


Chromatic imaging of white light with a single lens (left) and achromatic imaging of white light with a hybrid lens (right). Credit: The Grainger College of Engineering at the University of Illinois Urbana-Champaign

Topics: 3D Printing, Additive Manufacturing, Applied Physics, Materials Science, Optics

Using 3D printing and porous silicon, researchers at the University of Illinois Urbana-Champaign have developed compact, visible wavelength achromats that are essential for miniaturized and lightweight optics. These high-performance hybrid micro-optics achieve high focusing efficiencies while minimizing volume and thickness. Further, these microlenses can be constructed into arrays to form larger area images for achromatic light-field images and displays.

This study was led by materials science and engineering professors Paul Braun and David Cahill, electrical and computer engineering professor Lynford Goddard, and former graduate student Corey Richards. The results of this research were published in Nature Communications.

"We developed a way to create structures exhibiting the functionalities of classical compound optics but in highly miniaturized thin film via non-traditional fabrication approaches," says Braun.

In many imaging applications, multiple wavelengths of light are present, e.g., white light. If a single lens is used to focus this light, different wavelengths focus at different points, resulting in a color-blurred image. To solve this problem, multiple lenses are stacked together to form an achromatic lens. "In white light imaging, if you use a single lens, you have considerable dispersion, and so each constituent color is focused at a different position. With an achromatic lens, however, all the colors focus at the same point," says Braun.
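
The classical fix described here is the achromatic doublet: pick two glasses with different dispersions and split the optical power between them so the chromatic errors cancel. A minimal thin-lens sketch with assumed, catalog-like Abbe numbers, not the porous-silicon system in the paper:

```python
# Thin-lens achromatic doublet in contact: choose element powers phi1, phi2
# (phi = 1/f) so that  phi1/V1 + phi2/V2 = 0  while  phi1 + phi2 = phi_total.
# V1, V2 are Abbe numbers; higher V means lower dispersion.  Assumed values.
f_total_m = 0.050          # target focal length: 50 mm (assumed)
V1, V2 = 64.2, 33.8        # crown-like and flint-like glasses (assumed)

phi_total = 1.0 / f_total_m
phi1 = phi_total * V1 / (V1 - V2)     # positive, low-dispersion element
phi2 = -phi_total * V2 / (V1 - V2)    # negative, high-dispersion element

print(f"f1 = {1e3 / phi1:.1f} mm, f2 = {1e3 / phi2:.1f} mm")
# -> f1 ≈ 23.7 mm, f2 ≈ -45.0 mm: two elements where one would do for a
#    single color, which is why classical achromats get thick.
```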

The challenge, however, is that the stack of lens elements required to make an achromatic lens is relatively thick, which can make a classical achromatic lens unsuitable for newer, scaled-down technological platforms, such as ultracompact visible wavelength cameras, portable microscopes, and even wearable devices.

A new (micro) lens on optics: Researchers develop hybrid achromats with high focusing efficiencies,  Amber Rose, University of Illinois Grainger College of Engineering

Read more…

Anthrobots...


An Anthrobot is shown, depth colored, with a corona of cilia that provides locomotion for the bot. Credit: Gizem Gumuskaya, Tufts University

Topics: Applied Physics, Biology, Biomimetics, Biotechnology, Research, Robotics

Researchers at Tufts University and Harvard University's Wyss Institute have created tiny biological robots that they call Anthrobots from human tracheal cells that can move across a surface and have been found to encourage the growth of neurons across a region of damage in a lab dish.

The multicellular robots, ranging in size from the width of a human hair to the point of a sharpened pencil, were made to self-assemble and shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers' vision to use patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.

The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont, in which they created multicellular biological robots from frog embryo cells called Xenobots, capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own.

At the time, researchers did not know if these capabilities were dependent on their being derived from an amphibian embryo or if biobots could be constructed from cells of other species.

In the current study, published in Advanced Science, Levin, along with Ph.D. student Gizem Gumuskaya, discovered that bots can, in fact, be created from adult human cells without any genetic modification, and they are demonstrating some capabilities beyond what was observed with the Xenobots.

The discovery starts to answer a broader question that the lab has posed—what are the rules that govern how cells assemble and work together in the body, and can the cells be taken out of their natural context and recombined into different "body plans" to carry out other functions by design?

Anthrobots: Scientists build tiny biological robots from human tracheal cells, Tufts University

Read more…