computer_science (9)

ATCG Drive...

Oz+on+DNA.PNG
MGM/VICTOR TANGERMANN

 

Topics: Biology, Computer Science, DNA

"Why cannot we write the entire 24 volumes of the Encyclopedia Britannica on the head of a pin?" asked Dr. Richard P. Feynman in "There's Plenty of Room at the Bottom," widely regarded as the seminal talk that started the concept of atomic-level engineering, soon known as nanotechnology (a term coined in 1974 by Professor Norio Taniguchi of Tokyo Science University).

The intricate arrangement of base pairs in our DNA encodes just about everything about us. Now, DNA contains the entirety of “The Wonderful Wizard of Oz” as well.

A team of University of Texas at Austin scientists just vastly improved the storage capacity of DNA and managed to encode the entire novel — translated into the geek-friendly language of Esperanto — in a double strand of DNA far more efficiently than has been done before. DNA storage isn’t new, but this work could help finally make it practical.

Big tech companies like Microsoft are already exploring DNA-storage technology, as the biomolecule can encode several orders of magnitude more information per unit volume than a hard drive. But DNA is particularly error-prone; it can easily be damaged, erasing whatever’s stored on it.

“The key breakthrough is an encoding algorithm that allows accurate retrieval of the information even when the DNA strands are partially damaged during storage,” molecular biologist Ilya Finkelstein said in a UT Austin press release.
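The release doesn't publish the team's algorithm, but the underlying idea of DNA storage is simple to sketch: map bits onto the four bases (A, C, G, T) and add redundancy so partially damaged strands can still be decoded. The Python below is a minimal, purely illustrative sketch using a naive repetition code; it is not the UT Austin scheme.

# Toy illustration of DNA data storage (NOT the UT Austin algorithm):
# two bits per base, plus a naive repetition code so a damaged copy can be outvoted.
BITS_TO_BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes, copies: int = 3) -> list:
    """Convert bytes to a strand of bases, then store several redundant copies."""
    bits = ''.join(f'{byte:08b}' for byte in data)
    strand = ''.join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))
    return [strand] * copies

def decode(strands: list) -> bytes:
    """Majority-vote each position across the copies, then map bases back to bits."""
    consensus = ''.join(max(set(column), key=column.count) for column in zip(*strands))
    bits = ''.join(BASE_TO_BITS[base] for base in consensus)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strands = encode(b"Oz")
strands[0] = 'A' * len(strands[0])   # simulate one badly damaged copy
assert decode(strands) == b"Oz"      # the surviving copies still recover the data

Real schemes use much stronger error-correcting codes than simple repetition and must also respect biochemical constraints, such as avoiding long runs of the same base.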

Scientists Stored "The Wizard of Oz" on a Strand of DNA, Dan Robitzski, Futurism

Read more…

Threshold Cryptography...

20ITL011_threshold.gif
This artist’s conception of threshold cryptography shows a lock that can only be opened by three people working together. When the threshold cryptosystem receives a request to process information with a secret key, it first splits the key into shares and distributes them across the group, one share to each participant. The three people must agree to work together and also perform their own secret operations on the incoming message. From these actions, each person uses their key share — represented by the three colored circles — to process the message, and then sends the result back to the system. Only the combination of all three partial results can open the lock, reducing the likelihood that a single corrupt party could compromise the system.

 

Topics: Cryptography, Computer Science, Electrical Engineering, NIST

A new publication by cryptography experts at the National Institute of Standards and Technology (NIST) proposes the direction the technical agency will take to develop a more secure approach to encryption. This approach, called threshold cryptography, could overcome some of the limitations of conventional methods for protecting sensitive transactions and data.

The document, released today in a final version as NIST Roadmap Toward Criteria for Threshold Schemes for Cryptographic Primitives (NISTIR 8214A), offers an outline for developing a new way to implement the cryptographic tools that developers use to secure their systems. Its authors are inviting the cryptography community to collaborate with them on NIST’s budding Threshold Cryptography project, which in part seeks to ensure that threshold implementations are interoperable.

“We are kicking the threshold cryptography development effort into high gear,” said Apostol Vassilev, a NIST computer scientist. “Over the coming months, the Threshold Cryptography project will be engaging with the public to define criteria for this work. We want to get feedback from the community so we can consider a variety of threshold schemes and standardization paths.”

Threshold cryptography takes its name from the idea that no individual keyholder can open the lock on their own, as a single keyholder can in conventional cryptography. Instead, out of a group of keyholders, a minimum number of them — a “threshold” number — must work together to open the lock. In practice, this lock is an electronic cryptosystem that protects confidential information, such as a bank account number or an authorization to transfer money from that account.
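NISTIR 8214A covers a broad family of threshold schemes, but the basic t-of-n idea is easy to illustrate with Shamir secret sharing, in which any t of n shares reconstruct a secret and fewer reveal nothing. The Python below is a minimal sketch of that idea only; real threshold cryptosystems go further, for example performing signing or decryption without ever reassembling the key in one place.

# Minimal t-of-n illustration via Shamir secret sharing (illustrative only).
import random

P = 2**127 - 1   # a large prime; all arithmetic is done modulo P

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares so that any t of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    evaluate = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, evaluate(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        numerator, denominator = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                numerator = numerator * (-xj) % P
                denominator = denominator * (xi - xj) % P
        secret = (secret + yi * numerator * pow(denominator, -1, P)) % P
    return secret

shares = split(secret=123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any three shares suffice
assert reconstruct(shares[2:]) == 123456789   # ...any three at all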

NIST Kick-Starts ‘Threshold Cryptography’ Development Effort

Read more…

Hybrid Quantum Networking...

DFB14820-609D-4B15-828A2C4BC8442539_source.jpg
Credit: Getty Images

 

Topics: Computer Science, Modern Physics, Quantum Computer, Quantum Mechanics

In a world’s first, researchers in France and the U.S. have performed a pioneering experiment demonstrating “hybrid” quantum networking. The approach, which unites two distinct methods of encoding information in particles of light called photons, could eventually allow for more capable and robust communications and computing.

Similar to how classical electronics can represent information as digital or analog signals, quantum systems can encode information as either discrete variables (DVs) in particles or continuous variables (CVs) in waves. Researchers have historically used one approach or the other—but not both—in any given system.

“DV and CV encoding have distinct advantages and drawbacks,” says Hugues de Riedmatten of the Institute of Photonic Sciences in Barcelona, who was not a part of the research. CV systems encode information in the varying intensity, or phase, of light waves. They tend to be more efficient than DV approaches but are also more delicate, exhibiting stronger sensitivity to signal losses. Systems using DVs, which transmit information by counting photons, are harder to pair with conventional information technologies than CV techniques. They are also less error-prone and more fault-tolerant, however. Combining the two, de Riedmatten says, could offer “the best of both worlds.”
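As a rough numerical intuition only (toy values, not a model of the actual experiment), the loss sensitivity described above can be sketched like this: a CV signal's coherent amplitude shrinks gradually as channel transmission drops, while a DV photon either survives the channel or is lost outright.

# Toy contrast between loss effects on CV (wave-like) and DV (photon-counting) encoding.
import numpy as np

def cv_encode(bit, alpha=2.0):
    """CV: put the bit in the phase of a coherent-state amplitude (0 -> +alpha, 1 -> -alpha)."""
    return alpha if bit == 0 else -alpha

def cv_after_loss(amplitude, transmission):
    """A pure-loss channel scales a coherent amplitude by sqrt(transmission)."""
    return np.sqrt(transmission) * amplitude

def dv_after_loss(transmission, rng=np.random.default_rng(0)):
    """DV: a single photon survives the channel with probability `transmission`."""
    return rng.random() < transmission   # True = detector click, False = photon lost

for eta in (1.0, 0.5, 0.1):
    print(f"transmission {eta}: CV amplitude {cv_after_loss(cv_encode(1), eta):+.2f}, "
          f"DV photon arrived: {dv_after_loss(eta)}")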

‘Hybrid’ Quantum Networking Demonstrated for First Time, Dhananjay Khadilkar, Scientific American

Read more…

Dr. Mark Dean, repost...

mark-dean.jpg
Dr. Mark Dean - Biography.com

 

Topics: African Americans, Computer Science, Electrical Engineering, Nanotechnology, STEM


This is admittedly a repost that appears during the month of February. The popular celebrities of sports, music and "reality" television dominate the imaginations of youth from all cultural backgrounds. It's especially important that African American children see themselves doing, and making a living at, STEM careers. A diverse workforce doesn't just "happen." Like its opposite - segregation - it has to be intentionally planned and executed. For our country to survive and compete in nanotechnology, it MUST be a priority.

Computer scientist and engineer Mark Dean is credited with helping develop a number of landmark technologies, including the color PC monitor, the Industry Standard Architecture system bus and the first gigahertz chip.

Synopsis

Born in Jefferson City, Tennessee, in 1957, computer scientist and engineer Mark Dean helped develop a number of landmark technologies for IBM, including the color PC monitor and the first gigahertz chip. He holds three of the company's original nine patents. He also invented the Industry Standard Architecture system bus with engineer Dennis Moeller, allowing for computer plug-ins such as disk drives and printers.

Early Life and Education

Computer scientist and inventor Mark Dean was born on March 2, 1957, in Jefferson City, Tennessee. Dean is credited with helping to launch the personal computer age with work that made the machines more accessible and powerful.

From an early age, Dean showed a love for building things; as a young boy, Dean constructed a tractor from scratch with the help of his father, a supervisor at the Tennessee Valley Authority. Dean also excelled in many different areas, standing out as a gifted athlete and an extremely smart student who graduated with straight A's from Jefferson City High School. In 1979, he graduated at the top of his class at the University of Tennessee, where he studied engineering.

Innovation with IBM

Not long after college, Dean landed a job at IBM, a company he would be associated with for the duration of his career. As an engineer, Dean proved to be a rising star at the company. Working closely with a colleague, Dennis Moeller, Dean developed the Industry Standard Architecture (ISA) system bus, a system that allowed peripheral devices like disk drives, printers and monitors to be plugged directly into computers. The end result was more efficiency and better integration.

But his groundbreaking work didn't stop there. Dean's research at IBM helped change the accessibility and power of the personal computer. His work led to the development of the color PC monitor and, in 1999, Dean led a team of engineers at IBM's Austin, Texas, lab to create the first gigahertz chip—a revolutionary piece of technology that is able to do a billion calculations a second.

In all, Dean holds three of the company's original nine patents for the IBM personal computer - a market the company helped create in 1981 - and has more than 20 patents associated with his name in total.

 

Biography.com: Mark Dean, Ph.D.

Read more…

Annie Easley...

easley.jpg
Image Source: NASA

Topics: African Americans, Computer Science, NASA, Women in Science


Ms. Easley likely did her great work with a slide rule. It's a lost art, like cursive writing.

Annie Easley had never heard of the National Advisory Committee for Aeronautics (NACA) when she read an article about twin sisters who were “human computers” at the Aircraft Engine Research Laboratory in Cleveland, Ohio. The Lab (the predecessor of the NASA Glenn Research Center) was in need of people with strong math skills, and she was in need of a job after recently relocating from Birmingham, Alabama. Two weeks after reading the article, Easley began a career that would span 34 years. She would contribute to numerous programs as a computer scientist, inspire many through her enthusiastic participation in outreach programs, break down barriers for women and people of color in science, technology, engineering, and mathematics (STEM) fields, and win the admiration and respect of her coworkers.

In 1955, Easley began her career as a “human computer,” doing computations for researchers. This involved analyzing problems and doing calculations by hand. Her earliest work involved running simulations for the newly planned Plum Brook Reactor Facility. When hired, she was one of only four African-American employees at the Lab. In a 2001 interview she said that she had never set out to be a pioneer. “I just have my own attitude. I’m out here to get the job done, and I knew I had the ability to do it, and that’s where my focus was.” Even in the face of discrimination, she persevered. “My head is not in the sand. But my thing is, if I can’t work with you, I will work around you. I was not about to be [so] discouraged that I’d walk away. That may be a solution for some people, but it’s not mine.”

When human computers were replaced by machines, Easley evolved along with the technology. She became an adept computer programmer, using languages like the Formula Translating System (FORTRAN) and the Symbolic Optimal Assembly Program (SOAP) to support a number of NASA’s programs. She developed and implemented code used in researching energy-conversion systems, analyzing alternative power technology—including the battery technology that was used for early hybrid vehicles, as well as for the Centaur upper-stage rocket.

In the 1970s, Easley returned to school to earn her degree in mathematics from Cleveland State, doing much of her coursework while also working full time. A firm believer in education and in her mother’s advice “You can be anything you want to be, but you have to work at it,” Easley was very dedicated in her outreach efforts at NASA. She not only participated in school tutoring programs but was a very active participant in the speaker’s bureau—telling students about NASA’s work and inspiring especially female and minority students to consider STEM careers.

 

NASA biography: Annie Easley, April 23, 1933 - June 25, 2011

Read more…

Bots and Data...

d41586-020-00141-1_17599688.jpg
Social-media bots are growing more sophisticated. Credit: OMER MESSINGER/EPA-EFE/Shutterstock

 

Topics: Computer Science, Internet, Politics, Research, Sociology


Definition: a device or piece of software that can execute commands, reply to messages, or perform routine tasks, as online searches, either automatically or with minimal human intervention (often used in combination):

intelligent infobots; shopping bots that help consumers find the best prices. Dictionary.com

Social-media bots that pump out computer-generated content have been accused of swaying elections and damaging public health by spreading misinformation. Now, some social scientists have a fresh accusation: bots meddle with research studies that mine popular sites such as Twitter, Reddit and Instagram for information on human health and behavior.

Data from these sites can help scientists to understand how natural disasters affect mental health, why young people have flocked to e-cigarettes in the United States and how people join together in complex social networks. But such work relies on discerning the real voices from the automated ones.

“Bots are designed to behave online like people,” says Jon-Patrick Allem, a social scientist at the University of Southern California in Los Angeles. “If a researcher is interested in describing public attitudes, you have to be sure that the data you’re collecting on social media is actually from people.”
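Purely as an illustration of the pre-filtering Allem describes (the thresholds below are hypothetical; real studies rely on trained detectors such as Botometer rather than hand-set rules), a researcher's cleanup step might look something like this:

# Hypothetical, hand-tuned heuristic for screening out likely bots before analysis.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float
    account_age_days: int
    duplicate_post_ratio: float   # fraction of posts that are near-identical

def looks_automated(account: Account) -> bool:
    """Crude flags for automated behavior; thresholds would be tuned per platform and study."""
    return (
        account.posts_per_day > 72                    # far above typical human activity
        or account.duplicate_post_ratio > 0.8         # mostly repeated content
        or (account.account_age_days < 7 and account.posts_per_day > 24)
    )

accounts = [
    Account("casual_user", 3, 900, 0.05),
    Account("deal_blaster", 200, 30, 0.95),
]
humans = [a for a in accounts if not looks_automated(a)]
print([a.handle for a in humans])   # ['casual_user']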

Computer scientist Sune Lehmann designed his first bots in 2013, as a social-network experiment for a class that he was teaching at the Technical University of Denmark in Kongens Lyngby. Back then, he says, bots on Twitter were simple, obscure and mainly meant to increase the number of followers for specific Twitter accounts. Lehmann wanted to show his students how such bots could manipulate social systems, so together they designed bots that impersonated fans of the singer Justin Bieber.

The ‘Bieber Bots’ were easy to design and quickly attracted thousands of followers. But social-media bots have continued to evolve, becoming more complex and harder to detect. They surged into the spotlight after the 2016 US presidential election – amid accusations that bots had been deployed on social media in an attempt to sway the vote in President Donald Trump’s favor. “All of a sudden, it became something of interest to people,” Allem says.

 

Social scientists battle bots to glean insights from online chatter, Heidi Ledford, Nature

Read more…

Theta...

ML-Engineering-Theta900x600.jpg
Cyber threat analysis requires high-speed supercomputers, such as Theta at Argonne’s Leadership Computing Facility, a DOE Office of Science User Facility. (Image by Argonne National Laboratory.)

 

Topics: Artificial Intelligence, Computer Science, Internet, Mathematical Models, Quantum Computing


"Locks are made for honest people."

Robert H. Goodwin, June 19, 1925 - August 26, 1999 ("Pop")

It is indisputable that technology is now a fundamental and inextricable part of our everyday existence—for most people, our employment, transportation, healthcare, education, and other quality of life measures are fully reliant on technology. Our dependence has created an urgent need for dynamic cybersecurity that protects U.S. government, research and industry assets in the face of technology advances and ever more sophisticated adversaries.

The U.S. Department of Energy’s (DOE) Argonne National Laboratory is helping lead the way in researching and developing proactive cybersecurity, including measures that leverage machine learning, to help protect data and critical infrastructure from cyberattacks.

Machine learning is a category of artificial intelligence that involves training machines to continually learn from and identify patterns in data sets.

“Applying machine learning approaches to cybersecurity efforts makes sense due to the large amount of data involved,” said Nate Evans, program lead for cybersecurity research in the Strategic Security Sciences (SSS) Division. “It is not efficient for humans to mine data for these patterns using traditional algorithms.”

Argonne computer scientists develop machine learning algorithms using large data sets — comprising log data from different devices, network traffic information, and instances of malicious behavior — that enable the algorithms to recognize specific patterns of events that lead to attacks. When such patterns are identified, a response team investigates instances matching those patterns.
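The lab's actual pipeline isn't spelled out in the article, but the general pattern (turn log and network records into numeric features, train a classifier on previously labeled incidents, and route new matches to the response team) can be sketched with standard tools. The example below is hypothetical and uses synthetic data with scikit-learn:

# Hypothetical sketch: classify event windows as benign (0) or attack-like (1)
# from log-derived features, using synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up features per event window: [failed logins, bytes out, distinct ports, off-hours fraction]
benign = rng.normal(loc=[1, 2e4, 3, 0.1], scale=[1, 5e3, 2, 0.1], size=(500, 4))
attack = rng.normal(loc=[8, 9e4, 25, 0.7], scale=[3, 2e4, 8, 0.2], size=(50, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 50)   # 1 = window previously investigated as malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# In operation, windows the model flags as 1 would go to the response team for
# investigation, and confirmed incidents would feed back into the training data.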

Following an attack, the response team patches the vulnerability in the laboratory’s intrusion protection systems. Forensic analysis can then lead to changes that prevent similar future attacks.

“We are looking for ways to stop attacks before they happen,” said Evans. “We’re not only concerned with protecting our own lab, we’re also developing methods to protect other national labs, and the country as a whole, from potential cyberattacks.”

 

Argonne applies machine learning to cybersecurity threats
Savannah Mitchem, Argonne National Laboratory

Read more…

Quantum Robustness...

A study demonstrates that a combination of two materials, aluminum and indium arsenide, forming a device called a Josephson junction could make quantum bits more resilient. Credit: University of Copenhagen image/Antonio Fornieri

 

Topics: Computer Science, Quantum Computing, Quantum Mechanics


Researchers have been trying for many years to build a quantum computer that industry could scale up, but the building blocks of quantum computing, qubits, still aren't robust enough to handle the noisy environment of what would be a quantum computer.

A theory developed only two years ago proposed a way to make qubits more resilient through combining a semiconductor, indium arsenide, with a superconductor, aluminum, into a planar device. Now, this theory has received experimental support in a device that could also aid the scaling of qubits.

This semiconductor-superconductor combination creates a state of "topological superconductivity," which would protect against even slight changes in a qubit's environment that interfere with its quantum nature, a renowned problem called "decoherence."

The device is potentially scalable because of its flat "planar" surface – a platform that industry already uses in the form of silicon wafers for building classical microprocessors.

The work, published in Nature, was led by the Microsoft Quantum lab at the University of Copenhagen's Niels Bohr Institute, which fabricated and measured the device. The Microsoft Quantum lab at Purdue University grew the semiconductor-superconductor heterostructure using a technique called molecular beam epitaxy, and performed initial characterization measurements.

 

New robust device may scale up quantum tech, researchers say, Kayla Wiles, Purdue University

Read more…

AI, Control and Turing...

Image Source: Comic Book dot com - Star Trek


Topics: Artificial Intelligence, Computer Science, Existentialism, Star Trek


If you're enough of a fan, as I am, to pay for the CBS streaming service (it has some benefits: Young Sheldon and the umpteenth reboot of The Twilight Zone hosted by Oscar winner Jordan Peele), the AI in Starfleet's "Control" looks an awful lot like...The Borg. I've enjoyed the latest iteration immensely, and I'm rooting for at least a season 3.

There's already speculation on Screen Rant that this might be some sort of galactic "butterfly effect." Discovery has taken some license with my previous innocence even before Section 31: we're obviously not "the good guys" with phasers, technobabble and karate chops as I once thought.

That of course has been the nature of speculative fiction since Mary Shelley penned Frankenstein: that in playing God, humanity would manage to create something that just might kill us. Various objects, from nuclear power to climate change, have taken on this personification. I've often wondered if intelligence is its own Entropy. Whole worlds above us might be getting along just fine without a single invention of language, science, tools, cities or spaceflight, animal species living and dying with nothing more than their instinct, hunger and the inbred need to procreate, unless a meteor sends them into extinction. Homo sapiens or Homo stultus...

It is the Greek word mimesis that we translate as "imitate" but that can more accurately be rendered as "re-presentation." It is the Plato-Aristotle origin of the colloquial phrase "art imitates life."

Re-presented for your consumption and contemplation:

Yoshua Bengio is one of three computer scientists who last week shared the US$1-million A. M. Turing award — one of the field’s top prizes.

The three artificial-intelligence (AI) researchers are regarded as the founders of deep learning, the technique that combines large amounts of data with many-layered artificial neural networks, which are inspired by the brain. They received the award for making deep neural networks a “critical component of computing”.
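For readers who haven't seen it in code, "many-layered" means repeated matrix multiplications and nonlinearities whose weights are adjusted by backpropagation. The NumPy toy below (two small hidden layers learning XOR) is only meant to make that idea concrete; it is nothing like a production deep-learning system.

# Toy multi-layer network trained with backpropagation on XOR (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sizes = [2, 8, 8, 1]                              # input, two hidden layers, output
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # forward pass, keeping every layer's activations for backpropagation
    activations = [X]
    for Wl, bl in zip(W, b):
        activations.append(sigmoid(activations[-1] @ Wl + bl))

    # backward pass: squared-error loss, sigmoid derivative a * (1 - a)
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for layer in range(len(W) - 1, -1, -1):
        grad_W = activations[layer].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if layer > 0:
            delta = (delta @ W[layer].T) * activations[layer] * (1 - activations[layer])
        W[layer] -= 0.5 * grad_W
        b[layer] -= 0.5 * grad_b

print(np.round(activations[-1].ravel(), 2))       # typically converges toward [0, 1, 1, 0]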

The other two Turing winners, Geoff Hinton and Yann LeCun, work for Google and Facebook, respectively; Bengio, who is at the University of Montreal, is one of the few recognized gurus of machine learning to have stayed in academia full time.

But alongside his research, Bengio, who is also scientific director of the Montreal Institute for Learning Algorithms (MILA), has raised concerns about the possible risks from misuse of technology. In December, he presented a set of ethical guidelines for AI called the Montreal declaration at the Neural Information Processing Systems (NeurIPS) meeting in the city.

Do you see a lot of companies or states using AI irresponsibly?

There is a lot of this, and there could be a lot more, so we have to raise flags before bad things happen. A lot of what is most concerning is not happening in broad daylight. It’s happening in military labs, in security organizations, in private companies providing services to governments or the police.

What are some examples?

Killer drones are a big concern. There is a moral question, and a security question. Another example is surveillance — which you could argue has potential positive benefits. But the dangers of abuse, especially by authoritarian governments, are very real. Essentially, AI is a tool that can be used by those in power to keep that power, and to increase it.

AI pioneer: ‘The dangers of abuse are very real’
Yoshua Bengio, winner of the prestigious Turing award for his work on deep learning, is establishing international guidelines for the ethical use of AI.
Davide Castelvecchi, Nature

Read more…