IJCNN 2011 Panel Session : “Brain-Mind Architectures: Module-Free, General Purpose, and Immediate Learning?”
1. The NSF report on the limitations of our state-of-the-art learning algorithms
Autonomous machine learning has become a priority in the science and engineering of learning. In July 2007, NSF held a workshop on the “Future Challenges for the Science and Engineering of Learning.” Here is the summary of the “Open Questions in Both Biological and Machine Learning” from the workshop report (http://www.cnl.salk.edu/Media/NSFWorkshopReport.v4.pdf):
“Biological learners have the ability to learn autonomously, in an ever changing and uncertain world. This property includes the ability to generate their own supervision, select the most informative training samples, produce their own loss function, and evaluate their own performance. More importantly, it appears that biological learners can effectively produce appropriate internal representations for composable percepts -- a kind of organizational scaffold -- as part of the learning process. By contrast, virtually all current approaches to machine learning typically require a human supervisor to design the learning architecture, select the training examples, design the form of the representation of the training examples, choose the learning algorithm, set the learning parameters, decide when to stop learning, and choose the way in which the performance of the learning algorithm is evaluated. This strong dependence on human supervision is greatly retarding the development and ubiquitous deployment of autonomous artificial learning systems. Although we are beginning to understand some of the learning systems used by brains, many aspects of autonomous learning have not yet been identified.”
This obviously opens the door to developing a new generation of learning algorithms. And IJCNN could become the focal point for research collaboration on this new breed of learning algorithms.
2. Objective of the tutorial:
The objective of this tutorial is to present to the IJCNN 2011 participants some new ideas regarding brain-like learning, ideas that can lead to the development of truly autonomous learning methods. Completely autonomous learning is extremely important from the point of view of robotics and computational intelligence. For example, we cannot develop autonomous robots of any type, robots that learn on their own, with learning algorithms that need constant human baby-sitting and intervention. For autonomous robots, we need tweak-free learning algorithms that can design and train computational structures (e.g., neural networks) on their own, without any kind of external assistance.
The tutorial will broadly introduce some new ideas about learning and some new types of learning methods developed over the last few years. Participants will learn about a set of principles for designing and constructing autonomous learning algorithms. There will also be a demonstration of these new autonomous learning algorithms on a variety of problems.
INNS also has a Special Interest Group (SIG) that focuses exclusively on autonomous machine learning (the AML SIG). An important objective of the AML SIG is to organize a research group to work on this new breed of learning algorithms. From IJCNN's point of view, this tutorial would be an attempt to grow a community of researchers focused on autonomous learning, which in turn will lead to future IJCNN sessions in this area.
3. Intended audience:
People doing research on learning algorithms, which is the vast majority of IJCNN participants, should be interested in this tutorial. It should be of special interest to students and to those in industry doing research in the area of neural networks and machine learning. They are the ones who will be working on the next generation of learning algorithms that do not depend on human supervision and intervention.
I expect an active exchange of ideas as we venture into this new frontier of research.
4. Tutorial outline:
a) What properties NSF wants in future learning algorithms; on some general properties of biological learning
b) An overview of hypersphere nets without connection weights; similarities to RBF nets;
c) On approximate rule extraction from hypersphere nets
d) Demonstration of an autonomous learning system for pattern classification and discussion of its basic features
e) Problem decomposition and class-specific feature selection; finding the best feature set for each class based on separation maximization principle
f) Feature selection and generalization (minimum error, minimum description length)
g) A new hypersphere classification algorithm; iterative generation of hyperspheres
h) Next generation learning algorithms – no parameters to set, no optimization method used, self-selection of training examples
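To make the outline concrete, here is a minimal sketch of the general idea behind covering-hypersphere classification: each class is covered by a set of hyperspheres generated iteratively from the training data, and a new point is assigned to the class of the nearest sphere center. This is a generic illustration written for this abstract; the greedy covering scheme, the fixed `radius` parameter, and the function names are assumptions, not the specific algorithm presented in the tutorial.

```python
import math

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def grow_spheres(points, radius):
    """Iteratively generate sphere centers until every training
    point of the class lies within `radius` of some center."""
    spheres = []
    uncovered = list(points)
    while uncovered:
        center = uncovered[0]          # seed a new sphere
        spheres.append(center)
        # drop every point the new sphere now covers
        uncovered = [p for p in uncovered if dist(p, center) > radius]
    return spheres

def classify(x, spheres_by_class):
    """Assign x to the class owning the nearest sphere center."""
    return min(
        (dist(x, c), label)
        for label, centers in spheres_by_class.items()
        for c in centers
    )[1]

# Toy 2-D example with two well-separated classes.
model = {
    "A": grow_spheres([(0, 0), (0.5, 0.2), (1, 0)], radius=1.0),
    "B": grow_spheres([(5, 5), (5.4, 5.1)], radius=1.0),
}
```

In a truly autonomous version, of course, the radius would not be a hand-set parameter; choosing such quantities from the data itself is exactly the kind of issue items e) through h) of the outline address.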
5. A note about the Autonomous Machine Learning (AML) SIG:
INNS formed the Autonomous Machine Learning Special Interest Group (AML SIG) for long term collaboration in this area within the robotics and neural network community. The AML SIG is already bringing a lot of focus to autonomous learning within the broader neural network community through special sessions and panel discussions. AML SIG has a special track within IJCNN with 7 special sessions and 2 panel discussions.
AML SIG now has over 130 members worldwide and everyone is encouraged to join. Here’s the link to the AML SIG website: http://autonomoussystems.org/default.html . Email Asim Roy at email@example.com to join the AML SIG.
Biography – Asim Roy
Asim Roy is a Professor of Information Systems at Arizona State University. He received his M.S. in Operations Research from Case Western Reserve University, Cleveland, Ohio, and Ph.D. in Operations Research from University of Texas at Austin. He has been a Visiting Scholar at Stanford University, visiting Prof. David Rumelhart in the Psychology Department, and a Visiting Scientist at the Robotics and Intelligent Systems Group at Oak Ridge National Laboratory, Oak Ridge, Tennessee.
His research interests are in brain-like learning, neural networks, machine learning, data mining, intelligent systems and nonlinear multiple objective optimization. His research has been published in Management Science, Mathematical Programming, Neural Networks, Neural Computation, various IEEE Transactions (Neural Networks, Fuzzy Systems, Systems, Man and Cybernetics) and other journals. He has been invited to many national and international conferences for plenary talks and for tutorials, workshops and short courses on his new learning theory and methods. He is listed in Who's Who in America, Who's Who in the World, Who's Who in American Education and Who's Who in Industry and Finance, among others.
Physorg.com recently wrote a story on his new brain theory, which postulates that some parts of the brain control other parts, and that control-theoretic architectures can therefore be used to design brain-like systems. Here’s the link to the story: http://www.physorg.com/news146319784.html
“Architectural issues for autonomous learning systems”
IJCNN 2011, San Jose, CA, July 31 – Aug 5, 2011
Sponsor - Autonomous Machine Learning (AML) SIG of the International Neural Network Society (INNS)
The standard approach so far to creating specific functionality (say, image processing, motor control, or language processing) has been to build specialized modules for it. An emerging class of theories, broadly labeled “neural reuse theories,” questions that approach. Neural reuse theories propose that the reuse of neural circuitry across cognitive functions is a central organizing principle of the brain. The following quote, from a recent BBS article by Michael Anderson (“Neural reuse: A fundamental organizational principle of the brain,” Behavioral and Brain Sciences (2010) 33, 245–313), summarizes the basic ideas behind neural reuse theories:
“According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.”
To support this theory with neurophysiological evidence, Anderson reviews studies showing that some task-specific neural structures in the brain are activated and used across a wide range of cognitive tasks. Here is Anderson again:
“For instance, although Broca’s area has been strongly associated with language processing, it turns out to also be involved in many different action- and imagery-related tasks, including movement preparation (Thoenissen et al. 2002), action sequencing (Nishitani et al. 2005), action recognition (Decety et al. 1997; Hamzei et al. 2003; Nishitani et al. 2005), imagery of human motion (Binkofski et al. 2000), and action imitation (Nishitani et al. 2005; for reviews, see Hagoort 2005; Tettamanti & Weniger 2006). Similarly, visual and motor areas – long presumed to be among the most highly specialized in the brain – have been shown to be active in various sorts of language processing and other higher cognitive tasks (Damasio & Tranel 1993; Damasio et al. 1996; Glenberg & Kaschak 2002; Hanakawa et al. 2002; Martin et al. 1995; 1996; 2000; Pulvermüller 2005; see sect. 4 for a discussion). Excitement over the discovery of the Fusiform Face Area (Kanwisher et al. 1997) was quickly tempered when it was discovered that the area also responded to cars, birds, and other stimuli (Gauthier et al. 2000; Grill-Spector et al. 2006; Rhodes et al. 2004).”
Neural reuse theories obviously have deep implications for autonomous system architectures. Here are some broad issues to be addressed by this panel:
1. From the examples of neural reuse cited by Anderson, is it fair to say that neural reuse is simply about connectivity among different specialized modules and functions, and therefore nothing new?
2. Is the neural reuse idea in a way related to optimal use of neural circuitry? If so, how do we take advantage of the idea? Is neural reuse in any way related to minimum description length idea?
3. Is flexible functional modularity in the brain (and across developmental stages and species) enabled by the structural modularity of the cortex (columns, hypercolumns, etc.)? In what ways is this related to neural reuse?
4. Is the concept of dynamic neurons related to neural circuitry reuse?
5. Whatever structure modules have in the brain, how do they combine and recombine flexibly at rates in the alpha and theta ranges?
6. In what ways do functional brain networks and modules overlap and is it about taking advantage of and reusing existing neural circuitry?
7. What do we mean by the brain's distributed representations? Do the brain's distributed representations have rigid boundaries between "modules"?
8. Does the brain have modules where each module has a specific function (e.g., edge detection)?
9. How do we determine the functions of a certain brain area?
10. Does the brain have "symbols" in it?
11. How does neural reuse relate to mechanisms of interaction between language and cognition? How does it relate to previous mysteries, including the association of words and objects, phrases and abstract thoughts, the conscious and the unconscious in thinking, and symbolic thought?
12. How does neural reuse relate to symbol grounding (connections of words to their relevant objects, actions to their relevant verbs) or symbol creation?
Panel members:
1. Bruno Apolloni, University of Milan, Italy (http://www.dsi.unimi.it/persona.php?z=1;id=51 )
2. Wlodek Duch, Nicolaus Copernicus University, Poland (http://www.fizyka.umk.pl/~duch/ )
3. Walter Freeman, University of California, Berkeley, USA (http://mcb.berkeley.edu/index.php?option=com_mcbfaculty&name=freemanw )
4. Ali Minai, University of Cincinnati, USA (http://www.ece.uc.edu/~aminai/ )
5. Carlo Francesco Morabito, University "Mediterranea" of Reggio Calabria, Italy (http://www.ing.unirc.it/scheda_persona.php?id=432&PHPSESSID=0cjm5gqsn61urtt7lehg3gmgj6 )
6. Leonid Perlovsky, Harvard University and The Air Force Research Laboratory, USA (http://www.leonid-perlovsky.com/ )
7. John Taylor, King’s College London, UK (http://www.mth.kcl.ac.uk/staff/jg_taylor.html )
8. Juyang (John) Weng, Michigan State University, USA (http://www.cse.msu.edu/~weng/ )
9. Asim Roy, Arizona State University, USA (http://lifeboat.com/ex/bios.asim.roy)
“Mapping spiking neurons to cognition and behavior – the challenges”
IJCNN 2011, San Jose, CA, July 31 – Aug 5, 2011
Chairs – Narayan Srinivasa, HRL Laboratories, and
Asim Roy, Arizona State University
Sponsor - Autonomous Machine Learning (AML) SIG of INNS
Participants:
Nik Kasabov - KEDRI/AUT, Auckland, New Zealand and INI/ETH Zurich (http://www.aut.ac.nz/study-at-aut/study-areas/computing--mathematical-sciences/learning-environment/our-people/our-staff/nikola-kasabov )
John Weng - Michigan State University, USA (http://www.cse.msu.edu/~weng/ )
Yoonsuck Choe – Texas A&M University, USA (http://faculty.cs.tamu.edu/choe/ )
Leonid Perlovsky - Harvard University and The Air Force Research Laboratory, USA (http://www.leonid-perlovsky.com/ )
Paolo Arena – University of Catania, Italy (http://www.dees.unict.it/users/parena/index.html )
Harry Erwin – University of Sunderland, UK (http://www.his.sunderland.ac.uk/~cs0her/ )
Marley Vellasco – Pontifícia Universidade Católica do Rio de Janeiro, Brazil (http://www.ica.ele.puc-rio.br/marley/ )
Narayan Srinivasa, HRL Laboratories, USA (http://www.hrl.com/cnes/cnes_contacts.html )
Asim Roy, Arizona State University, USA (http://lifeboat.com/ex/bios.asim.roy )
Format - Short presentations by participants followed by group discussion with audience participation. This could lead to future research collaborations in this challenging area.
Biological systems are adept learners, capable of exhibiting robust, efficient, and intelligent behaviors under constantly changing and/or novel conditions. Recent discoveries in neuroscience suggest that the brain uses spike-based communication for energy-efficient computation, and that it employs spike-timing-dependent plasticity (STDP) as the fundamental spike-timing-based learning rule for accomplishing various learning tasks. This field is, however, nascent, and much research remains to be done to bridge the gap between cellular mechanisms such as STDP and the large-scale behavioral functions that must be realized from these basic mechanisms. The focus of this workshop is the computational modeling of spike-based networks that can realize system-level behavioral functions from networks composed of spike-based cellular mechanisms. It is believed that addressing this gap will enable autonomous learning systems whose computational efficiency and behavioral complexity scale toward those of biological systems.
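For readers unfamiliar with STDP, the textbook pair-based form of the rule can be sketched in a few lines: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, and weakened otherwise, with a magnitude that decays exponentially in the timing difference. The amplitudes and time constant below are illustrative assumptions, not values endorsed by the workshop.

```python
import math

# Assumed illustrative parameters for a pair-based STDP rule.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair.

    dt > 0 means the presynaptic spike came first, so the synapse
    is potentiated; dt < 0 means post-before-pre, so it is depressed.
    Either way the magnitude decays exponentially with |dt|.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

# Pre-before-post strengthens the synapse; post-before-pre weakens it.
assert stdp_dw(t_pre=0.0, t_post=10.0) > 0
assert stdp_dw(t_pre=10.0, t_post=0.0) < 0
```

The workshop's central question is precisely what lies beyond such a local rule: how networks built from this cellular mechanism can be composed and trained so that system-level behavior emerges.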
Francesco Carlo Morabito
STEERING COMMITTEE SIREN
Università di Milano
Seconda Università di Napoli
Università di Genova
Francesco Carlo Morabito
Università Mediterranea di Reggio Calabria
Politecnico di Torino
Università di Salerno
Università di Roma
Università di Palermo
ADVISORY COMMITTEE SIREN
Piero P. Bonissone
Computing and Decision Sciences
Leon O. Chua
NTT Communication Science Laboratories
Harold H. Szu
Army Night Vision Electronic Sensing Directorate
John G. Taylor
Université Joseph Fourier
Fredric M. Ham
Florida Institute of Technology
This Workshop is sponsored by the
International Neural Network Society (www.inns.org)
SECOND CALL FOR PAPERS: WIRN 2011
The Italian Workshop on Neural Networks (WIRN) is the annual conference of the Italian Society of Neural Networks (SIREN) and has been organized continuously since its first edition. The 21st edition of the Italian Workshop on Neural Networks (WIRN 2011) will be held, as usual, in the beautiful town of Vietri sul Mare.
CALL FOR PAPERS AND SPECIAL SESSION PROPOSALS: Prospective authors are invited to contribute high-quality papers in the topic areas listed below, as well as proposals for special sessions. Special sessions aim to bring together researchers on specially focused topics. Each special session should include at least 3 contributing papers. A proposal for a special session should include a one-page summary statement describing the motivation and relevance of the proposed special session, together with the titles and author names of the papers that will be included in the track. The coordinator of the proposal will also be responsible for the reviewing procedure.
Contributions should be of high quality, original, and neither published elsewhere nor submitted for publication during the review period. Please visit the website for further details of the required paper format. Papers will be reviewed by the Program Committee and may be accepted for oral or poster presentation. All contributions will be published in a proceedings volume by IOS Press. Authors will be limited to one paper per registration.
The submission of the manuscripts should be done through the following website (page limit: 8 pages):
SPECIAL SESSIONS ALREADY FINALIZED: WIRN 2011 will feature the following three special sessions:
i) Models of Behaviours for Human-Machine Interaction (Chairs: A. Esposito, M. Maldonato, L. Trojano)
ii) Autonomous Machine Learning (Chairs: A. Roy, P. Arena), in cooperation with INNS SIG AML
iii) Neuromorphic Engineering (Chairs: E. Chicca, E. Pasero)
Contributions are also sought for the special sessions (see the website for additional information).
TOPIC AREAS: Suggested topics for the conference include the research and application areas indicated in the First Call for Papers (see website) and cover the main areas of interest of Computational Intelligence.
CALL FOR PROF. EDUARDO R. CAIANIELLO PH.D. THESIS PRIZE
During the workshop, the "Premio E.R. Caianiello" will be awarded to the best Italian Ph.D. thesis in the area of neural networks and related fields. The prize consists of a diploma and a €1,000.00 check. Interested applicants must send their CV and thesis in PDF format to the "Premio E.R. Caianiello" address.
PAPER SUBMISSION: Important Dates
Special session/workshops proposals: March 1, 2011
Paper Submission deadline: March 28, 2011
Notification of acceptance: April 30, 2011
Camera-ready copy: on site, June 3, 2011
Conference Dates: June 3-5, 2011
Papers may also be submitted by electronic mail to: firstname.lastname@example.org
More detailed instructions can be found on the WIRN 2011 home page.