
Running on Air...


Running on air: Close-up of the air-powered sensing device. (Courtesy: William Grover/UCR)

Topics: Computer Science, Electrical Engineering, Materials Science, Microfluidics

A device containing a pneumatic logic circuit made from 21 microfluidic valves could be used as a new type of air-powered computer that does not require any electronic components. The device could help make a wide range of important air-powered systems safer and less expensive, according to its developers at the University of California at Riverside.

Electronic computers rely on transistors to control the flow of electricity. But in the new air-powered computer, the researchers use tiny valves instead of transistors to control the flow of air rather than electricity. “These air-powered computers are an example of microfluidics, a decades-old field that studies the flow of fluids (usually liquids but sometimes gases) through tiny networks of channels and valves,” explains team leader William Grover, a bioengineer at UC Riverside.

By combining multiple microfluidic valves, the researchers made air-powered versions of standard logic gates. For example, they combined two valves in a row to make a Boolean AND gate. This gate works because air will flow through the two valves only if both are open. Similarly, two valves connected in parallel make a Boolean OR gate. Here, air will flow if either one or the other of the valves is open.
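The series/parallel valve logic described above can be sketched in a few lines, modeling each valve as a switch that passes air only while open. (The function names are illustrative, not from the paper.)

```python
def valve(is_open: bool, air_in: bool) -> bool:
    """A microfluidic valve passes pressurized air only while it is open."""
    return air_in and is_open

def and_gate(a: bool, b: bool) -> bool:
    """Two valves in series: air reaches the output only if both are open."""
    return valve(b, valve(a, True))  # supply -> valve a -> valve b -> output

def or_gate(a: bool, b: bool) -> bool:
    """Two valves in parallel: air reaches the output if either is open."""
    return valve(a, True) or valve(b, True)

# The truth tables match the Boolean gates they replace
for a in (False, True):
    for b in (False, True):
        print(a, b, and_gate(a, b), or_gate(a, b))
```

The same composition idea extends to NOT and NAND with normally-closed valves, which is what makes a complete pneumatic logic family possible.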

Air-powered computers make a comeback, Isabelle Dumé, Physics World

Read more…

Limit Shattered...


TSMC is building Two New Facilities to Accommodate 2nm Chip Production

Topics: Applied Physics, Chemistry, Electrical Engineering, Materials Science, Nanoengineering, Semiconductor Technology

 

Realize that Moore’s “law” isn’t like Newton’s law of gravitation or the laws of thermodynamics. It’s simply an observation, born of experience with manufacturing silicon processors and the desire to keep making money from the endeavor.

 

As a device engineer, I had heard “7 nm, and that’s it” so often that it became colloquial folklore. TSMC has proven itself a powerhouse once again and, in our faltering geopolitical climate, made itself even more desirable to mainland China in its quest to annex the island, sadly by force if necessary.

 

Apple will be the first electronic manufacturer to receive chips built by Taiwan Semiconductor Manufacturing Company (TSMC) using a two-nanometer process. According to Korea’s DigiTimes Asia, inside sources said that Apple is "widely believed to be the initial client to utilize the process." The report noted that TSMC has been increasing its production capacity in response to “significant customer orders.” Moreover, the report added that the company has recently established a production expansion strategy aimed at producing 2nm chipsets based on the Gate-all-around (GAA) manufacturing process.

 

The GAA process, also known as gate-all-around field-effect transistor (GAA-FET) technology, defies the performance limitations of other chip manufacturing processes by allowing the transistors to carry more current while staying relatively small in size.

 

Apple to jump queue for TSMC's industry-first 2-nanometer chips: Report, Harsh Shivam, New Delhi, Business Standard.

 

Read more…

Fast Charger...


Significant Li plating capacity from Si anode. a, Li discharge profile in a battery of Li/graphite–Li5.5PS4.5Cl1.5 (LPSCl1.5)–LGPS–LPSCl1.5–SiG at current density 0.2 mA cm–2 at room temperature. Note that SiG was made by mixing Si and graphite in one composite layer. Inset shows the schematic illustration of stages 1–3 based on SEM and EDS mapping, which illustrate the unique Li–Si anode evolution in solid-state batteries observed experimentally in Figs. 1 and 2. b, FIB–SEM images of the SiG anode at different discharge states (i), (ii), and (iii) corresponding to points 1–3 in a, respectively. c, SEM–EDS mapping of (i), (ii), and (iii), corresponding to SEM images in b, where carbon signal (C) is derived from graphite, oxygen (O) and nitrogen (N) signals are from Li metal reaction with air and fluorine (F) is from the PTFE binder. d, Discharge profile of battery with cell construction Li-1M LiPF6 in EC/DMC–SiG. Schematics illustrate typical Si anode evolution in liquid-electrolyte batteries. e, FIB–SEM image (i) of SiG anode following discharge in the liquid-electrolyte battery shown in d; zoomed-in image (ii). Credit: Nature Materials (2024). DOI: 10.1038/s41563-023-01722-x

Topics: Applied Physics, Battery, Chemistry, Climate Change, Electrical Engineering, Mechanical Engineering

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new lithium metal battery that can be charged and discharged at least 6,000 times—more than any other pouch battery cell—and can be recharged in a matter of minutes.

The research not only describes a new way to make solid-state batteries with a lithium metal anode but also offers a new understanding of the materials used for these potentially revolutionary batteries.

The research is published in Nature Materials.

"Lithium metal anode batteries are considered the holy grail of batteries because they have ten times the capacity of commercial graphite anodes and could drastically increase the driving distance of electric vehicles," said Xin Li, Associate Professor of Materials Science at SEAS and senior author of the paper. "Our research is an important step toward more practical solid-state batteries for industrial and commercial applications."

One of the biggest challenges in the design of these batteries is the formation of dendrites on the surface of the anode. These structures grow like roots into the electrolyte and pierce the barrier separating the anode and cathode, causing the battery to short or even catch fire.

These dendrites form when lithium ions move from the cathode to the anode during charging, attaching to the surface of the anode in a process called plating. Plating on the anode creates an uneven, non-homogeneous surface, like plaque on teeth, and allows dendrites to take root. When discharged, that plaque-like coating needs to be stripped from the anode, and when plating is uneven, the stripping process can be slow and result in potholes that induce even more uneven plating in the next charge.

Solid-state battery design charges in minutes and lasts for thousands of cycles, Leah Burrows, Harvard John A. Paulson School of Engineering and Applied Sciences, Tech Xplore

Read more…

Cooling Circuitry...


Illustration of a UCLA-developed solid-state thermal transistor using an electric field to control heat movement. Credit: H-Lab/UCLA

Topics: Applied Physics, Battery, Chemistry, Electrical Engineering, Energy, Thermodynamics

A new thermal transistor can control heat as precisely as an electrical transistor can control electricity.

From smartphones to supercomputers, electronics have a heat problem. Modern computer chips suffer from microscopic “hotspots” with power density levels that exceed those of rocket nozzles and even approach that of the sun’s surface. Because of this, more than half the total electricity burned at U.S. data centers isn’t used for computing but for cooling. Many promising new technologies—such as 3-D-stacked chips and renewable energy systems—are blocked from reaching their full potential by errant heat that diminishes a device’s performance, reliability, and longevity.

“Heat is very challenging to manage,” says Yongjie Hu, a physicist and mechanical engineer at the University of California, Los Angeles. “Controlling heat flow has long been a dream for physicists and engineers, yet it’s remained elusive.”

But Hu and his colleagues may have found a solution. As reported last November in Science, his team has developed a new type of transistor that can precisely control heat flow by taking advantage of the basic chemistry of atomic bonding at the single-molecule level. These “thermal transistors” will likely be a central component of future circuits and will work in tandem with electrical transistors. The novel device is already affordable, scalable, and compatible with current industrial manufacturing practices, Hu says, and it could soon be incorporated into the production of lithium-ion batteries, combustion engines, semiconductor systems (such as computer chips), and more.

Scientists Finally Invent Heat-Controlling Circuitry That Keeps Electronics Cool, Rachel Nuwer, Scientific American

Read more…

Nano Racetracks...

In this image, optical pulses (solitons) can be seen circling through conjoined optical tracks. (Image: Yuan, Bowers, Vahala, et al.) An animated gif is at the original link below.

Topics: Applied Physics, Astronomy, Electrical Engineering, Materials Science, Nanoengineering, Optics

(Nanowerk News) When we last checked in with Caltech's Kerry Vahala three years ago, his lab had recently reported the development of a new optical device called a turnkey frequency microcomb that has applications in digital communications, precision timekeeping, spectroscopy, and even astronomy.

This device, fabricated on a silicon wafer, takes input laser light of one frequency and converts it into an evenly spaced set of many distinct frequencies that form a train of pulses whose length can be as short as 100 femtoseconds (quadrillionths of a second). (The comb in the name comes from the frequencies being spaced like the teeth of a hair comb.)
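A rough back-of-the-envelope sketch of the comb picture: the teeth sit at evenly spaced frequencies f_n = f_0 + n·f_rep, and the shortest transform-limited pulse scales inversely with the comb's total bandwidth. The numbers below are illustrative assumptions, not values from the paper.

```python
f_rep = 100e9    # repetition rate = comb tooth spacing, Hz (illustrative)
n_teeth = 100    # number of comb lines (illustrative)
f_0 = 193e12     # optical carrier near 1550 nm, Hz (illustrative)

teeth = [f_0 + n * f_rep for n in range(n_teeth)]
bandwidth = teeth[-1] - teeth[0]     # total comb span, Hz
pulse_duration = 1.0 / bandwidth     # transform-limit scale, s

print(f"bandwidth ≈ {bandwidth / 1e12:.1f} THz")
print(f"pulse duration ≈ {pulse_duration * 1e15:.0f} fs")
```

With these assumed numbers the span is about 10 THz, giving pulses on the order of 100 fs, the same scale quoted above.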

Now Vahala, Caltech's Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics and executive officer for applied physics and materials science, along with members of his research group and the group of John Bowers at UC Santa Barbara, has made a breakthrough in the way the short pulses form in an important new material called ultra-low-loss silicon nitride (ULL nitride), a compound formed of silicon and nitrogen. The silicon nitride is prepared to be extremely pure and deposited in a thin film.

In principle, short-pulse microcomb devices made from this material would require very low power to operate. Unfortunately, short light pulses (called solitons) cannot be properly generated in this material because of a property called dispersion, which causes light or other electromagnetic waves to travel at different speeds, depending on their frequency. ULL has what is known as normal dispersion, and this prevents waveguides made of ULL nitride from supporting the short pulses necessary for microcomb operation.

In a paper appearing in Nature Photonics ("Soliton pulse pairs at multiple colors in normal dispersion microresonators"), the researchers discuss their development of the new microcomb, which overcomes the inherent optical limitations of ULL nitride by generating pulses in pairs. This is a significant development because ULL nitride is created with the same technology used for manufacturing computer chips. This kind of manufacturing technique means that these microcombs could one day be integrated into a wide variety of handheld devices similar in form to smartphones.

The most distinctive feature of an ordinary microcomb is a small optical loop that looks a bit like a tiny racetrack. During operation, the solitons automatically form and circulate around it.

"However, when this loop is made of ULL nitride, the dispersion destabilizes the soliton pulses," says co-author Zhiquan Yuan (MS '21), a graduate student in applied physics.

Imagine the loop as a racetrack with cars. If some cars travel faster and some travel slower, then they will spread out as they circle the track instead of staying as a tight pack. Similarly, the normal dispersion of ULL means light pulses spread out in the microcomb waveguides, and the microcomb ceases to work.

The solution devised by the team was to create multiple racetracks, pairing them up so they look a bit like a figure eight. In the middle of that '8,' the two tracks run parallel to each other with only a tiny gap between them.

Conjoined 'racetracks' make new optical devices possible, Nanowerk.

Read more…

Confession...


Credit: Freddie Pagani for Physics Today

Topics: African Americans, Diversity in Science, Electrical Engineering, Materials Science, Physics

Students should strategically consider where to apply to graduate school, and faculty members should provide up-to-date job resources so that undergraduates can make informed career decisions.

The number of bachelor’s degrees in physics awarded annually at US institutions is at or near an all-time high—nearly double what it was two decades ago. Yet the number of first-year physics graduate students has grown much more slowly, at only around 1–2% per year. The difference in the growth rates of bachelor’s recipients and graduate spots may be increasing the competition that students face when interested in pursuing graduate study.
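The divergence the authors describe can be sketched with compound growth. Only the rates come from the article (bachelor's output roughly doubling in two decades, about 3.5% per year, versus 1–2% per year for graduate spots); the baseline counts below are hypothetical.

```python
def compound(initial: float, annual_rate: float, years: int) -> float:
    """Size of a population growing at a fixed annual rate."""
    return initial * (1 + annual_rate) ** years

years = 20
bachelors = compound(5000, 0.035, years)    # ~doubles; baseline hypothetical
grad_spots = compound(3000, 0.015, years)   # 1.5%/yr; baseline hypothetical

print(f"bachelor's grew {bachelors / 5000:.2f}x, grad spots {grad_spots / 3000:.2f}x")
```

Over 20 years the two-point-something-percent rate gap compounds into bachelor's recipients growing roughly 2x while graduate spots grow only about 1.35x, which is the widening competition the article describes.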

With potentially more students applying for a relatively fixed number of first-year graduate openings, students may need to apply to more schools, which would take more time and cost more money. As the graduate school admissions process becomes more competitive, applicants may need even more accomplishments and experiences, such as postbaccalaureate research, to gain acceptance. Such opportunities are not available equally to all students. To read about steps one department has taken to make admissions more equitable, see the July Physics Today article by one of us (Young), Kirsten Tollefson, and Marcos D. Caballero.

We do not view the increasing gap between bachelor’s recipients and graduate spots as necessarily a problem, nor do we believe that all physics majors should be expected to go to graduate school. Rather, we assert that this trend is one that both prospective applicants and those advising them should be aware of so students can make an informed decision about their postgraduation plans.

The “itch” for graduate school has always been a constant with me. I wanted especially to go after meeting Dr. Ronald McNair after his maiden voyage on Challenger in 1984. Little did I know that he would perish two years later in the same vehicle. Things happened to set the itch aside: marriage, kids, sports leagues. Life can delay your decision, too. My gap was 33 years: 1984 to 2017.

The Supreme Court's recent decision to overturn another precedent, affirmative action in college admissions, affects graduate as well as undergraduate admissions. After every advance, whether in race (a social construct) relations, labor, or gender, history, if they allow us to study it, has always shown a backlash. The group in power wants to remain in power, and the inequity that those of us lower on the totem pole keep pointing out, they see as the "natural order," albeit one established by government fiat.

My pastor at the time could have called our congressman and gotten me an appointment. My grades weren't too bad, and being the highest-ranking cadet in the city and county probably would have helped my CV. I chose an HBCU, NC A&T State University, for my undergrad because Greensboro is, and was, a lot closer to Winston-Salem than the Air Force Academy in Colorado. I would have been away from my parents for an entire agonizing year of no contact: cell phones and video chatting weren't a thing. I also wasn't a fan of spending my freshman year being called a "plebe" (lower-born). I do support the decisions students and their parents make as the best ones for their future. I do not support an unelected body attempting "reverse political entropy," turning back the clock of progress to 1953. We are, however, in 2023, and issues like climate change can be solved by going aggressively toward renewables: Texas experienced some of the hottest days on the planet, and its off-the-national-grid power system held because of solar and wind, in an impressive display of irony.

Physics majors who graduate and go to work are prepared for either teaching K-12 or engineering. I worked at Motorola, Advanced Micro Devices, and Applied Materials. I taught Algebra 1, Precalculus, and Physics. So, if it’s any consolation: physics majors will EARN a living and eat! As a generalist, you should be able to master anything you’d be exposed to.

Speaking of Harvard: when I worked at Motorola in Austin, Texas, one of my coworkers was promoted from process engineering to Section Manager of Implant/Diffusion/Thin Films. He attended Harvard, and I, A&T. I still worked in photo and etch, primarily as the etch process engineer on nights. I noticed he had a familiar green book on his bookshelf with yellow, sinusoidal lines on the cover.

Me: Hey! Isn't that a Halliday and Resnick?

Him: Why, yes! What do you know about it?

Me: I learned Physics I from Dr. Tom Sandin (who recently retired after 50 YEARS: 1968 - 2018). He taught Dr. Ron McNair, one of the astronauts on the Space Shuttle Challenger. Physics II was taught to me by Dr. Elvira Williams: she was the first African American woman to earn a Ph.D. in Physics in the state of North Carolina and the FOURTH to earn a Ph.D. in Theoretical Physics in the nation. Who were your professors?

Him: Look at the time! Got a meeting. Bye!

Life experiences, in the end, overcome legacy and connection. We need a diversity of opinions to solve complex problems. Depending on the same structures and constructs to produce our next innovators isn't just shortsighted: it's magical thinking.

I now do think that 18 might be a little too young for a freshman on any campus and 22 a little too early for graduate school.

Just make the gap a little less than three decades!

The gap between physics bachelor’s recipients and grad school spots is growing, Nicholas T. Young, Caitlin Hayward, and Eric F. Bell, AIP Publishing, Physics Today.

Read more…

Tunnel Falls...


Chip off the old block: Intel’s Tunnel Falls chip is based on silicon spin qubits, which are about a million times smaller than other qubit types. (Courtesy: Intel Corporation)

Topics: Applied Physics, Chemistry, Electrical Engineering, Quantum Computer, Quantum Mechanics

Intel – the world’s biggest computer-chip maker – has released its newest quantum chip and has begun shipping it to quantum scientists and engineers to use in their research. Dubbed Tunnel Falls, the chip contains a 12-qubit array and is based on silicon spin-qubit technology.

The distribution of the quantum chip to the quantum community is part of Intel’s plan to let researchers gain hands-on experience with the technology while at the same time enabling new quantum research.

The first quantum labs to get access to the chip include the University of Maryland, Sandia National Laboratories, the University of Rochester, and the University of Wisconsin-Madison.

The Tunnel Falls chip was fabricated on 300 mm silicon wafers in Intel’s “D1” transistor fabrication facility in Oregon, which can carry out extreme ultraviolet (EUV) lithography as well as gate and contact processing.

Intel releases 12-qubit silicon quantum chip to the quantum community, Martijn Boerkamp, Physics World.

Read more…

Straining Moore...


Topics: Applied Physics, Chemistry, Computer Science, Electrical Engineering, Materials Science, Nanotechnology, Quantum Mechanics, Semiconductor Technology

Gordon Moore, the co-founder of Intel who died earlier this year, is famous for forecasting a continuous rise in the density of transistors that we can pack onto semiconductor chips. James McKenzie looks at how “Moore’s law” is still going strong after almost six decades but warns that further progress is becoming harder and ever more expensive to sustain.

When the Taiwan Semiconductor Manufacturing Company (TSMC) announced last year that it was planning to build a new factory to produce integrated circuits, it wasn’t just the eye-watering $33bn price tag that caught my eye. What also struck me is that the plant, set to open in 2025 in the city of Hsinchu, will make the world’s first “2-nanometer” chips. Smaller, faster, and up to 30% more efficient than any microchip that has come before, TSMC’s chips will be sold to the likes of Apple – the company’s biggest customer – powering everything from smartphones to laptops.

But our ability to build such tiny, powerful chips shouldn’t surprise us. After all, the engineer Gordon Moore – who died on 24 March this year, aged 94 – famously predicted in 1965 that the number of transistors we can squeeze onto an integrated circuit ought to double yearly. Writing for the magazine Electronics (38 114), Moore reckoned that by 1975 it should be possible to fit a quarter of a million components onto a single silicon chip with an area of one square inch (6.45 cm²).

Moore’s prediction, which he later said was simply a “wild extrapolation”, held true, although, in 1975, he revised his forecast, predicting that chip densities would double every two years rather than every year. What thereafter became known as “Moore’s law” proved amazingly accurate, as the ability to pack ever more transistors into a tiny space underpinned the almost non-stop growth of the consumer electronics industry. In truth, it was never an established scientific “law” but more a description of how things had developed in the past as well as a roadmap that the semiconductor industry imposed on itself, driving future development.
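Moore's revised forecast is easy to check against modern parts. Starting from the Intel 4004's roughly 2,300 transistors in 1971 and doubling every two years gives the right order of magnitude for today's largest devices. This is a sketch of the extrapolation, not a precise fit.

```python
def moores_law(count_0: float, year_0: int, year: int,
               doubling_years: float = 2.0) -> float:
    """Project transistor count assuming a fixed doubling period."""
    return count_0 * 2 ** ((year - year_0) / doubling_years)

# Intel 4004 (1971): ~2,300 transistors; extrapolate to 2023
projected = moores_law(2300, 1971, 2023)
print(f"projected 2023 count: {projected:.2e}")  # ~1.5e11
```

That lands near 150 billion transistors, the same order as the largest packaged devices shipping today.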

Moore's law: further progress will push hard on the boundaries of physics and economics, James McKenzie, Physics World

Read more…

Chiplets...


Source: Semiengineering dot com - Chiplets

Topics: Computer Science, Electrical Engineering, Materials Science, Semiconductor Technology, Solid-State Physics

Depending on who you’re speaking with at the time, the industry’s adoption of chiplet technology to extend the reach of Moore’s Law is either continuing to roll along or is facing the absence of a commercial market. However, both assertions cannot be true. What is true is that chiplets have been used to build at least some commercial ICs for more than a decade and that semiconductor vendors continue to expand chiplet usability and availability. At the same time, the interface and packaging standards that are essential to widespread chiplet adoption remain in flux.

On the positive side of this question are existence proofs. Xilinx, now AMD, has been using 2.5D chiplet technology with large silicon interposers to make FPGAs for more than a decade. The first commercial use of this packaging technology appeared back in 2011 when Xilinx announced its Virtex-7 2000T FPGA, a 2-Mgate device built from four FPGA semiconductor tiles bonded to a silicon interposer. Xilinx jointly developed this chiplet-packaging technology with its foundry, TSMC, which now offers this CoWoS (Chip-on-Wafer-on-Substrate) interposer-and-chiplet technology to its other foundry customers. TSMC customers that have announced chiplet-based products include Broadcom and Fujitsu. AMD is now five generations along the learning curve with this packaging technology, which is now essential to the continued development of bigger and more diverse FPGAs. AMD will be presenting an overview of this multi-generation, chiplet-based technology, including a status update at the upcoming Hot Chips 2023 conference being held at Stanford University in Palo Alto, California, in August.

Similarly, Intel has long been developing and using chiplet technology in its own packaged ICs. The company has been using its 2.5D EMIB (embedded multi-die interconnect bridge) chiplet-packaging technology for years to manufacture its Stratix 10 FPGAs. That technology has now spread throughout Intel’s product line to include CPUs and SoCs. The poster child for Intel’s chiplet-packaging technologies is unquestionably the company’s Ponte Vecchio GPU, which packages 47 active “tiles” – Intel’s name for chiplets – in a multi-chip package. These 47 dies are manufactured by multiple semiconductor vendors using five different semiconductor process nodes, all combined in one package using Intel’s EMIB 2.5D and 3D Foveros chiplet-packaging techniques to produce an integrated product with more than 100 billion transistors – something not currently possible on one silicon die. Intel is now opening these chiplet-packaging technologies to select customers through IFS – Intel Foundry Services – and consequently expanding the size and number of its packaging facilities.

The Chiplet’s Time Is Coming. It’s Here, Or Not. Steven Leibson, Tirias Research, Forbes

Read more…

Balsa Chips...


Modified wood modulates electrical current: researchers at Linköping University and colleagues from the KTH Royal Institute of Technology have developed the world’s first electrical transistor made of wood. (Courtesy: Thor Balkhed)

Topics: Applied Physics, Biomimetics, Electrical Engineering, Materials Science, Research

Researchers in Sweden have built a transistor out of a plank of wood by incorporating electrically conducting polymers throughout the material to retain space for an ionically conductive electrolyte. The new technique makes it possible, in principle, to use wood as a template for numerous electronic components, though the Linköping University team acknowledges that wood-based devices cannot compete with traditional circuitry on speed or size.

Led by Isak Engquist of Linköping’s Laboratory for Organic Electronics, the researchers began by removing the lignin from a plank of balsa wood (chosen because it is grainless and evenly structured) using a NaClO2 chemical and heat treatment. Since lignin typically constitutes 25% of wood, removing it creates considerable scope for incorporating new materials into the structure that remains.

The researchers then placed the delignified wood in a water-based dispersion of an electrically conducting polymer called poly(3,4-ethylene-dioxythiophene)–polystyrene sulfonate, or PEDOT:PSS. Once this polymer diffuses into the wood, the previously insulating material becomes a conductor with an electrical conductivity of up to 69 siemens per meter – a phenomenon the researchers attribute to the formation of PEDOT:PSS microstructures inside the 3D wooden “scaffold.”
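To put the 69 S/m figure in perspective, the resistance of a treated-wood channel follows the usual R = L / (σA). The conductivity comes from the article; the channel dimensions below are hypothetical, chosen only for illustration.

```python
sigma = 69.0          # conductivity of treated balsa, S/m (from the article)
length = 0.01         # channel length: 1 cm, m (hypothetical)
area = 1e-3 * 1e-3    # 1 mm x 1 mm cross-section, m^2 (hypothetical)

resistance = length / (sigma * area)  # ohms, R = L / (sigma * A)
print(f"channel resistance ≈ {resistance:.0f} Ω")  # ~145 Ω
```

A centimeter-scale channel in the hundreds-of-ohms range is workable for a slow electrochemical transistor, but it underlines why the authors concede these devices can't compete with silicon on speed or size.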

Next, Engquist and colleagues constructed a transistor using one piece of this treated balsa wood as a channel and additional pieces on either side to form a double transistor gate. They also soaked the interface between the gates and channels in an ion-conducting gel. In this arrangement, known as an organic electrochemical transistor (OECT), applying a voltage to the gate(s) triggers an electrochemical reaction in the channel that makes the PEDOT molecules non-conducting and therefore switches the transistor off.

A transistor made from wood, Isabelle Dumé, Physics World

Read more…

Chips for America...


Topics: Economics, Electrical Engineering, Materials Science, Semiconductor Technology

WASHINGTON — The Biden-Harris administration, through the U.S. Department of Commerce’s National Institute of Standards and Technology, today launched the first CHIPS for America funding opportunity for manufacturing incentives to restore U.S. leadership in semiconductor manufacturing, support good-paying jobs across the semiconductor supply chain, and advance U.S. economic and national security.

As part of the bipartisan CHIPS and Science Act, the Department of Commerce oversees $50 billion to revitalize the U.S. semiconductor industry, including $39 billion in semiconductor incentives. The first funding opportunity seeks applications for projects to construct, expand or modernize commercial facilities for the production of leading-edge, current-generation, and mature-node semiconductors. This includes both front-end wafer fabrication and back-end packaging. The department will also release a funding opportunity for semiconductor materials and equipment facilities in the late spring and one for research and development facilities in the fall.

“The CHIPS and Science Act presents a historic opportunity to unleash the next generation of American innovation, protect our national security and preserve our global economic competitiveness,” said Secretary of Commerce Gina M. Raimondo. “When we have finished implementing CHIPS for America, we will be the premier destination in the world where new leading-edge chip architectures can be invented in our research labs, designed for every end-use application, manufactured at scale, and packaged with the most advanced technologies. Throughout our work, we are committed to protecting taxpayer dollars, strengthening America’s workforce, and giving America’s businesses a platform to do what they do best: innovate, scale, and compete.”

The CHIPS and Science Act is part of President Joe Biden’s economic plan to invest in America, stimulating private sector investment, creating good-paying jobs, making more in the United States, and revitalizing communities left behind. 

CHIPS for America also today released a “Vision for Success,” laying out strategic objectives building on the vision Secretary Raimondo shared in her speech last week at Georgetown University’s School of Foreign Service. To advance U.S. economic and national security, the department aims to reach the following goals by the end of the decade: (1) make the U.S. home to at least two new large-scale clusters of leading-edge logic chip fabs, (2) make the U.S. home to multiple high-volume advanced packaging facilities, (3) produce high-volume leading-edge memory chips, and (4) increase production capacity for current-generation and mature-node chips, especially for critical domestic industries. Read more about these goals in the Vision for Success paper here.

NIST: Biden-Harris Administration Launches First CHIPS for America Funding Opportunity

Read more…

Chip Act and Wave Surfing...


Massive subsidies to restore the US semiconductor industry's edge are unlikely to succeed unless progress is made in winning the global race of idea flow and monetization.

Topics: Applied Physics, Chemistry, Computer Science, Electrical Engineering, Semiconductor Technology

Intelligent use of subsidies to win the global idea race is a must for gaining, and regaining, a semiconductor edge.

The US semiconductor industry started with an invention at Bell Labs. Subsequently, it attained supremacy in semiconductor production through the success of making computers better and cheaper. Notably, the rise of the PC wave made Intel and Silicon Valley seemingly unsinkable technology superpowers. But during the first two decades of the 21st century, America lost that edge. The USA now relies on Asia for imports of the most advanced chips. Its iconic Intel is now a couple of technology generations behind Asia’s TSMC and Samsung.

Furthermore, China’s aggressive moves have added momentum to America’s despair, triggering a chip war. But why has America lost the edge? Why does it rely on TSMC and Samsung to supply the most advanced chips that power iPhones, data centers, and weapons? Is it due to Asian governments’ subsidies? Or is it due to America’s failure to understand the dynamics, make prudent decisions, and manage technology and innovation?

Invention and rise and fall of US semiconductor supremacy

In 1947, Bell Labs in the USA invented a semiconductor device: the transistor. Although American companies developed prototypes of transistor radios and other consumer electronic products, they did not immediately pursue them. But American firms were very fast in using the transistor to reinvent computers by replacing the vacuum-tube technology at their core. Owing to the weight advantage, the US Air Force and NASA found transistors suitable for onboard computers. Besides, the invention of the integrated circuit by Fairchild and Texas Instruments accelerated the reduction in weight and size of digital logic circuits. Consequently, the use of semiconductors in building onboard computers kept growing exponentially. Hence, by the end of the 1960s, the US had become a powerhouse in logic-circuit semiconductors. But America remained second to Japan in global production, as Japanese companies were winning the consumer electronics race by using transistors.

US Semiconductor–from invention, supremacy to despair, Rokon Zaman, The-Waves.org

Read more…

Pushing Beyond Moore...


Clean-room technicians at the AIM Photonics NanoTech chip fabrication facility in Albany, New York. Credit: SUNY Polytechnic Institute

Topics: Computer Science, Electrical Engineering, Materials Science, Nanotechnology, Semiconductor Technology

Over 50 Years of Moore's Law - Intel

GAITHERSBURG, Md. — The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has entered into a cooperative research and development agreement with AIM Photonics that will give chip developers a critical new tool for designing faster chips that use both optical and electrical signals to transmit information. Called integrated photonic circuits, these chips are key components in fiber-optic networks and high-performance computing facilities. They are used in laser-guided missiles, medical sensors, and other advanced technologies. 

AIM Photonics, a Manufacturing USA institute, is a public-private partnership that accelerates the commercialization of new technologies for manufacturing photonic chips. The New York-based institute provides small and medium-sized businesses, academics, and government researchers access to expertise and fabrication facilities during all phases of the photonics development cycle, from design to fabrication and packaging.

NIST and AIM Photonics Team Up on High-Frequency Optical/Electronic Chips

Read more…

Thermo Limits


A radical reimagining of information processing could greatly reduce the energy use—as well as greenhouse gas emissions and waste heat—from computers. Credit: vchal/Getty Images

Topics: Climate Change, Computer Science, Electrical Engineering, Global Warming, Semiconductor Technology, Thermodynamics

In case you had not noticed, computers are hot—literally. A laptop can pump out thigh-baking heat, while data centers consume an estimated 200 terawatt-hours each year—comparable to the energy consumption of some medium-sized countries. The carbon footprint of information and communication technologies as a whole is close to that of fuel used in the aviation industry. And as computer circuitry gets ever smaller and more densely packed, it becomes more prone to melting from the energy it dissipates as heat.

Now physicist James Crutchfield of the University of California, Davis, and his graduate student Kyle Ray have proposed a new way to carry out computation that would dissipate only a small fraction of the heat produced by conventional circuits. In fact, their approach, described in a recent preprint paper, could bring heat dissipation below even the theoretical minimum that the laws of physics impose on today’s computers. That could greatly reduce the energy needed to both perform computations and keep circuitry cool. And it could all be done, the researchers say, using microelectronic devices that already exist.

In 1961 physicist Rolf Landauer of IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y., showed that conventional computing incurs an unavoidable cost in energy dissipation—basically, in the generation of heat and entropy. That is because a conventional computer has to sometimes erase bits of information in its memory circuits in order to make space for more. Each time a single bit (with the value 1 or 0) is reset, a certain minimum amount of energy is dissipated—which Ray and Crutchfield have christened “the Landauer.” Its value depends on ambient temperature: in your living room, one Landauer would be around 10⁻²¹ joule. (For comparison, a lit candle emits on the order of 10 joules of energy per second.)
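Landauer’s bound is simply k·T·ln 2 per erased bit, so the room-temperature figure quoted above is easy to reproduce. A minimal sketch using the standard value of the Boltzmann constant (not code from the paper):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in J/K
T = 300.0            # approximate "living room" temperature in K

# Landauer's bound: minimum heat dissipated when one bit is erased.
one_landauer = k_B * T * math.log(2)

print(f"{one_landauer:.2e} J")  # ~2.87e-21 J, on the order of 10^-21 joule
```

At this scale, the comparison in the excerpt holds up: a candle emitting roughly 10 joules per second radiates the equivalent of a few times 10²¹ Landauers every second.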

‘Momentum Computing’ Pushes Technology’s Thermodynamic Limits, Phillip Ball, Scientific American

Read more…

Strain and Flow...


Topography of the two-dimensional crystal on top of the microscopically small wire indicated by dashed lines. Excitons freely move along the wire-induced dent, but cannot escape it in the perpendicular direction. (Courtesy: Florian Dirnberger)

Topics: Applied Physics, Condensed Matter Physics, Electrical Engineering

Using a technique known as strain engineering, researchers in the US and Germany have constructed an “excitonic wire” – a one-dimensional channel through which electron-hole pairs (excitons) can flow in a two-dimensional semiconductor like water through a pipe. The work could aid the development of a new generation of transistor-like devices.

In the study, a team led by Vinod Menon at the City College of New York (CCNY) Center for Discovery and Innovation and Alexey Chernikov at the Dresden University of Technology and the University of Regensburg in Germany deposited atomically thin 2D crystals of tungsten diselenide (fully encapsulated in another 2D material, hexagonal boron nitride) atop a 100 nm-thin nanowire. The presence of the nanowire created a small, elongated dent in the tungsten diselenide by slightly pulling apart the atoms in the 2D material and so inducing strain in it. According to the study’s lead authors, Florian Dirnberger and Jonas Ziegler, this dent behaves for excitons much like a pipe does for water. Once trapped inside, they explain, the excitons are bound to move along the pipe.
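One rough way to picture the “pipe” is to model the strain-induced dent as a shallow potential well across the wire that leaves motion along the wire free. The sketch below is a toy model with purely illustrative parameters, not values from the study:

```python
import numpy as np

# Toy model (assumed parameters): tensile strain over the nanowire lowers
# the exciton energy locally, creating a Gaussian "dent" in the direction
# perpendicular to the wire while motion along the wire stays free.
y = np.linspace(-500.0, 500.0, 1001)   # nm, coordinate across the wire
depth_meV = 20.0                       # illustrative trap depth
width_nm = 50.0                        # illustrative trap width

potential = -depth_meV * np.exp(-(y / width_nm) ** 2)

# Excitons with thermal energy below depth_meV stay confined near y = 0,
# so they can only flow along the wire -- the "excitonic wire" picture.
print(potential.min())
```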

Strain guides the flow of excitons in 2D materials, Isabelle Dumé, Physics World

Read more…

Stop-Motion Efficiency...


A team of researchers created a new method to capture ultrafast atomic motions inside the tiny switches that control the flow of current in electronic circuits. Pictured here are Aditya Sood (left) and Aaron Lindenberg (right). Courtesy: Greg Stewart/SLAC National Accelerator Laboratory

Topics: Applied Physics, Electrical Engineering, Nanotechnology, Semiconductor Technology

A new ultrafast imaging technique that captures the motion of atoms in nanoscale electronic devices has revealed the existence of a short-lived electronic state that could make it possible to develop faster and more energy-efficient computers. The imaging technique, which involves switching the devices on and off while taking snapshots of them with an electron diffraction camera, could also help researchers probe the limits of electronic switching.

“In general, we know very little about the intermediate phases materials pass through during electronic switching operations,” explains Aditya Sood, a postdoctoral researcher at the US Department of Energy’s SLAC National Accelerator Laboratory and lead author of a paper in Science about the new method. “Our technique allows for a new way to visualize this process and therefore address what is arguably one of the most important questions at the heart of computing – that is, what are the fundamental limits of electronic switches in terms of speed and energy consumption?”

Ultrafast electron diffraction camera

Sood and colleagues at SLAC, Stanford University, Hewlett Packard Labs, Pennsylvania State University, and Purdue University chose to study devices made from vanadium dioxide (VO2) because the material is known to transition between insulating and electrically conducting states near room temperature. It thus shows promise as a switch, but the exact pathway underlying electric-field-induced switching in VO2 has long been a mystery, Sood tells Physics World.

To take snapshots of VO2’s atomic structure, the team used periodic voltage pulses to switch an electronic device made from the material on and off. The researchers synchronized the timing of these voltage pulses with the high-energy electron pulses produced by SLAC’s ultrafast electron diffraction (UED) camera. “Each time a voltage pulse excited the sample, it was followed by an electron pulse with a delay that we could tune,” Sood explains. “By repeating this process many times and changing the delay each time, we created a stop-motion movie of the atoms moving in response to the voltage pulse.”
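The acquisition logic described above amounts to a delay scan: excite, wait a tunable delay, probe, repeat. The sketch below mimics it with a mock probe function; all names and the exponential response are assumptions for illustration, not the SLAC instrument code:

```python
import math

def probe_after(delay_ps):
    """Mock UED acquisition: return a simulated diffraction intensity
    that relaxes exponentially after the voltage pulse excites the device."""
    return math.exp(-delay_ps / 20.0)  # assumed 20 ps relaxation time

delays_ps = [0, 5, 10, 20, 50]                 # tunable pump-probe delays
frames = [probe_after(d) for d in delays_ps]   # one snapshot per delay
# Played back in order of increasing delay, the frames form the
# "stop-motion movie" of the device's response to the voltage pulse.
```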

This is the first time that anyone has used UED, which detects tiny atomic movements in a material by scattering a high-energy beam of electrons off a sample, to observe an electronic device during operation. “We started thinking about this subject three years ago and soon realized that existing techniques were simply not fast enough,” says Aaron Lindenberg, a professor of materials science and engineering at Stanford and the study’s senior author. “So we decided to construct our own.”

‘Stop-motion movie of atoms’ reveals short-lived state in nanoscale switch, Isabelle Dumé, Physics World

Read more…

Gold Anniversary...


Images are from the article, link below

Topics: Electrical Engineering, Materials Science, Nanotechnology, Solid-State Physics

It's not exactly a wedding anniversary, but it is significant.

Fifty years ago this month, Intel introduced the first commercial microprocessor, the 4004. Microprocessors are tiny, general-purpose chips that use integrated circuits made up of transistors to process data; they are the core of a modern computer. Intel created the 12 mm² chip for a printing calculator made by the Japanese company Busicom. The 4004 had 2,300 transistors—a number dwarfed by the billions found in today’s chips. But the 4004 was leaps and bounds ahead of its predecessors, packing the computing power of the room-sized, vacuum tube-based first computers into a chip the size of a fingernail. In the past 50 years, microprocessors have changed our culture and economy in unimaginable ways.

The microprocessor turns 50, Katherine Bourzac, Chemical & Engineering News

Read more…

Scrofulous Signaling...


FIG. 1. Schematics of pulse sequences for spin-locking measurement with (a) two π/2 pulses and (b) two composite pulses. (c) Schematics of a SCROFULOUS composite pulse composed of three pulses. (d) Evolution of the spin state in the Bloch sphere. The spin state is initialized to the |0⟩ state by the first laser pulse. (e) The first π/2 pulse rotates the spin by 90° to the (−y)-direction. A y-driving microwave field is applied parallel to the spin in the rotating frame. (f) The second π/2 pulse rotates the spin by 90° to the (−z)-direction in pulse sequence pattern A, or (g) the second −π/2 pulse rotates the spin by −90° to the z-direction in pulse sequence pattern B. Finally, the spin state is read out from the PL by applying the second laser pulse. (h) Schematics of the experimental setup.

Topics: Applied Physics, Electrical Engineering, Materials Science, Optics

We present results of near-field radio-frequency (RF) imaging at micrometer resolution using an ensemble of nitrogen-vacancy (NV) centers in diamond. The spatial resolution of RF imaging is set by the resolution of an optical microscope, which is markedly higher than that of existing RF imaging methods. High-sensitivity RF field detection is demonstrated through spin locking. A SCROFULOUS composite pulse sequence is used to manipulate the spins in the NV centers, reducing sensitivity to possible microwave pulse amplitude errors in the field of view. We present procedures for acquiring an RF field image under a spatially inhomogeneous microwave field distribution and demonstrate near-field imaging of an RF field emitted from a photolithographically defined metal wire. The obtained RF field image indicates that the RF field intensity has maxima in the vicinity of the edges of the wire, in accord with a result calculated by a finite-difference time-domain method. Our method is expected to find application in a broad variety of areas, such as materials characterization, RF device characterization, and medicine.
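To see why amplitude errors matter, the toy calculation below treats pulses as rotations of the Bloch vector and shows how an assumed 5% microwave amplitude error distorts an intended π/2 rotation. These are plain rotation matrices for illustration, not the SCROFULOUS pulse angles, which are given in the paper:

```python
import numpy as np

def rx(theta):
    """Rotate the Bloch vector about the x-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

spin = np.array([0.0, 0.0, 1.0])   # |0> state: Bloch vector along +z
eps = 0.05                         # assumed 5% microwave amplitude error

ideal = rx(np.pi / 2) @ spin               # perfect pi/2 pulse: ends along -y
faulty = rx((1 + eps) * np.pi / 2) @ spin  # over-rotation caused by the error

overlap = float(ideal @ faulty)    # 1.0 would mean a perfect rotation
print(round(overlap, 4))
```

A composite sequence such as SCROFULOUS replaces the single pulse with three rotations chosen so that this kind of over- or under-rotation largely cancels, which is why it tolerates an inhomogeneous microwave field across the field of view.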

Near-field radio-frequency imaging by spin-locking with a nitrogen-vacancy spin sensor, Shintaro Nomura, Koki Kaida, Hideyuki Watanabe, and Satoshi Kashiwaya, Journal of Applied Physics

Read more…