

Cyber threat analysis requires high-speed supercomputers, such as Theta at Argonne’s Leadership Computing Facility, a DOE Office of Science User Facility. (Image by Argonne National Laboratory.)


Topics: Artificial Intelligence, Computer Science, Internet, Mathematical Models, Quantum Computing

"Locks are made for honest people."

Robert H. Goodwin, June 19, 1925 - August 26, 1999 ("Pop")

It is indisputable that technology is now a fundamental and inextricable part of our everyday existence—for most people, our employment, transportation, healthcare, education, and other quality of life measures are fully reliant on technology. Our dependence has created an urgent need for dynamic cybersecurity that protects U.S. government, research and industry assets in the face of technology advances and ever more sophisticated adversaries.

The U.S. Department of Energy’s (DOE) Argonne National Laboratory is helping lead the way in researching and developing proactive cybersecurity, including measures that leverage machine learning, to help protect data and critical infrastructure from cyberattacks.

Machine learning is a category of artificial intelligence that involves training machines to continually learn from and identify patterns in data sets.

“Applying machine learning approaches to cybersecurity efforts makes sense due to the large amount of data involved,” said Nate Evans, program lead for cybersecurity research in the Strategic Security Sciences (SSS) Division. ​“It is not efficient for humans to mine data for these patterns using traditional algorithms.”

Argonne computer scientists develop machine learning algorithms using large data sets—comprising log data from different devices, network traffic information, and instances of malicious behavior—that enable the algorithms to recognize specific patterns of events that lead to attacks. When such patterns are identified, a response team investigates instances matching those patterns.
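This is not Argonne's actual system, but the core idea—learning a baseline from historical log data and flagging events that don't fit the learned patterns—can be sketched in a few lines of Python. The device names and events below are made up for illustration:

```python
from collections import Counter

def train_event_model(log_events):
    """Learn baseline frequencies of (device, event) pairs from historical logs."""
    counts = Counter(log_events)
    total = sum(counts.values())
    return {event: n / total for event, n in counts.items()}

def flag_suspicious(model, new_events, threshold=0.01):
    """Flag events whose learned probability falls below the threshold.
    Events never seen in training default to probability 0 and are flagged."""
    return [e for e in new_events if model.get(e, 0.0) < threshold]

# Hypothetical training history: mostly routine logins, a few failures
history = [("web01", "login_ok")] * 500 + [("web01", "login_fail")] * 5
model = train_event_model(history)

# A routine event passes; an event never seen before is flagged for the response team
alerts = flag_suspicious(model, [("web01", "login_ok"), ("db02", "root_shell")])
```

Real deployments model sequences of events rather than single events, but the workflow is the same: the model surfaces candidate incidents, and human analysts investigate the matches.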

Following an attack, the response team patches the vulnerability in the laboratory’s intrusion protection systems. Forensic analysis can then lead to changes that prevent similar future attacks.

“We are looking for ways to stop attacks before they happen,” said Evans. ​“We’re not only concerned with protecting our own lab, we’re also developing methods to protect other national labs, and the country as a whole, from potential cyberattacks.”


Argonne applies machine learning to cybersecurity threats
Savannah Mitchem, Argonne National Laboratory

Read more…


Copper free: two Münster researchers compare a prototype optical chip to a one-cent coin. (Courtesy: University of Münster)


Topics: Artificial Intelligence, Computer Engineering, Neuromorphic Devices

A prototype artificial neural network (ANN) that uses only light to function has been unveiled by researchers at the University of Münster in Germany and the University of Exeter and University of Oxford in the UK. Their system can learn how to recognize simple patterns and its all-optical design could someday be exploited to create ANNs that can process large amounts of information rapidly while consuming relatively small amounts of energy.

ANNs mimic the human brain by using artificial neurons and synapses. A neuron receives one or more input signals and then uses this information to decide whether to output its own signal to the network. Synapses are the connections between neurons and can be “weighted” to favor signal propagation between certain neurons. An ANN can be trained to perform a task such as recognizing a pattern by sending multiple examples of the target pattern through the ANN while tweaking the synaptic weights until all examples of the target pattern elicit the same output from the ANN.
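The neuron-and-synapse description above maps directly onto the classic perceptron, the simplest artificial neuron. The sketch below (a toy example, not the optical system from the article) shows a single neuron whose synaptic weights are tweaked on each training example until every example elicits the correct output—here, learning the logical AND pattern:

```python
def neuron(weights, bias, inputs):
    """A single artificial neuron: fire (output 1) if the weighted
    sum of its input signals exceeds the threshold (here, zero)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(examples, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge each synaptic weight in the
    direction that reduces the error on each training example."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - neuron(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Train on the logical AND pattern: output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

After training, the neuron classifies all four examples correctly. The all-optical network in the article implements the same logic, but with light pulses as signals and optical elements as the weighted synapses.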

Relatively simple ANNs can be implemented on a computer. However, the conventional computer architecture of having a separate processor and memory makes it very difficult to implement the large numbers of neurons and synapses required to perform practical tasks.

One alternative is to create an ANN in which signals flow in the form of light pulses through an optical network. This is attractive because unlike electronic signals in a silicon chip, large amounts of light-encoded data can move quickly through optical materials without generating much heat. Furthermore, large amounts of information can be sent through an optical system by multiplexing the data using several different colors of light.


All-optical network mimics the brain’s neurons and synapses
Hamish Johnston, Physics World

Read more…

AI, Control and Turing...

Image Source: ComicBook.com - Star Trek

Topics: Artificial Intelligence, Computer Science, Existentialism, Star Trek

If you're enough of a fan, as I am, to pay for the CBS streaming service (it has some benefits: Young Sheldon and the umpteenth reboot of The Twilight Zone, hosted by Oscar winner Jordan Peele), the AI in Starfleet's "Control" looks an awful lot like...The Borg. I've enjoyed the latest iteration immensely, and I'm rooting for at least a season 3.

There's already speculation on Screen Rant that this might be some sort of galactic "butterfly effect." Discovery has taken some license with my previous innocence even before Section 31: we're obviously not "the good guys" with phasers, technobabble and karate chops as I once thought.

That, of course, has been the nature of speculative fiction since Mary Shelley penned Frankenstein: that in playing God, humanity would manage to create something that just might kill us. Various objects, from nuclear power to climate change, have taken on this personification. I've often wondered if intelligence is its own Entropy. Whole worlds above us might be getting along just fine without a single invention of language, science, tools, cities, or spaceflight; animal species living and dying with nothing more than their instinct, hunger, and the inbred need to procreate, unless a meteor sends them into extinction. Homo sapiens or Homo stultus...

Mimesis is the Greek word we translate as "imitate," though it is more accurately rendered as "re-presentation." It is the Plato-Aristotle origin of the colloquial phrase "art imitates life."

Re-presented for your consumption and contemplation:

Yoshua Bengio is one of three computer scientists who last week shared the US$1-million A. M. Turing award — one of the field’s top prizes.

The three artificial-intelligence (AI) researchers are regarded as the founders of deep learning, the technique that combines large amounts of data with many-layered artificial neural networks, which are inspired by the brain. They received the award for making deep neural networks a “critical component of computing”.
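The "many-layered" part of deep learning can be made concrete with a few lines of dependency-free Python. The weights below are invented purely to show the structure—each layer transforms its input and hands the result to the next, and depth comes from stacking:

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sums plus biases,
    passed through a nonlinearity (here, tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Pass an input through a stack of layers: the 'many-layered'
    architecture that defines a deep neural network."""
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

# A hypothetical 3-layer network with made-up weights, just to show the stacking
layers = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),   # layer 1: 2 inputs -> 2 outputs
    ([[0.3, 0.3], [-0.4, 0.8]], [0.1, 0.0]),   # layer 2: 2 -> 2
    ([[1.0, -1.0]], [0.0]),                    # layer 3: 2 -> 1
]
out = forward([1.0, 0.5], layers)
```

In practice the weights are not hand-written; they are learned from large amounts of data by backpropagation, which is the contribution the Turing award recognizes.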

The other two Turing winners, Geoff Hinton and Yann LeCun, work for Google and Facebook, respectively; Bengio, who is at the University of Montreal, is one of the few recognized gurus of machine learning to have stayed in academia full time.

But alongside his research, Bengio, who is also scientific director of the Montreal Institute for Learning Algorithms (MILA), has raised concerns about the possible risks from misuse of technology. In December, he presented a set of ethical guidelines for AI called the Montreal declaration at the Neural Information Processing Systems (NeurIPS) meeting in the city.

Do you see a lot of companies or states using AI irresponsibly?

There is a lot of this, and there could be a lot more, so we have to raise flags before bad things happen. A lot of what is most concerning is not happening in broad daylight. It’s happening in military labs, in security organizations, in private companies providing services to governments or the police.

What are some examples?

Killer drones are a big concern. There is a moral question, and a security question. Another example is surveillance — which you could argue has potential positive benefits. But the dangers of abuse, especially by authoritarian governments, are very real. Essentially, AI is a tool that can be used by those in power to keep that power, and to increase it.

AI pioneer: ‘The dangers of abuse are very real’
Yoshua Bengio, winner of the prestigious Turing award for his work on deep learning, is establishing international guidelines for the ethical use of AI.
Davide Castelvecchi, Nature

Read more…