Professor Ian
Apperly, School of Psychology, University of Birmingham
Topic: "Reasoning about
mental states"
Abstract:
Reasoning about beliefs, knowledge, desires and
intentions is central to humans’ ability to understand one
another, and would surely be necessary for artificial
agents to interact on equal terms with humans in a wide
range of circumstances. However, ascribing mental states
runs into classic difficulties that Artificial
Intelligence encounters with unbounded information
processing because, in any given situation, it is
difficult to specify clearly what information is relevant
for inferring what someone else believes, knows, or
intends. Psychological research suggests that humans
address this problem in two ways: by confronting it, with
only partial success; and by side-stepping it with
efficient but incomplete solutions.
Short biography:
Ian Apperly is an experimental psychologist, and
his main research interest is in “mindreading” – the
ability to take other people’s perspectives. He is the
author of a book entitled “Mindreaders: The cognitive
basis of theory of mind”, and over 80 papers on the
development of these abilities, and their cognitive and
neural basis. He is particularly interested in how
mindreading can be simultaneously flexible and efficient,
and with Stephen Butterfill he has proposed a “two
systems” account of these abilities.
Professor Gordon
Brown, Department of Psychology, University of
Warwick
Topic: "Human memory and
timing"
Abstract:
Human memory appears to be organised adaptively in
that, at a given point in time and in a given context, the
memories that are easiest for us to retrieve are the ones
that are most likely to be needed. How does it achieve
this? Human memories appear to be organised at least
partly in terms of their temporal distances (i.e., how far
in the past they occurred), and this organisation may be
adaptive. In the talk I will discuss the time-scale
invariant properties of human memory, the notion of
temporal distinctiveness (memories that are temporally
distinct are less confusable in memory), and related
"ratio-rule" models of memory. I will also introduce the
notion of contextual diversity as a principle underlying
human memory.
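By way of illustration only (this is not code from the talk), the ratio-rule idea behind temporal distinctiveness can be sketched in a few lines: if the confusability of two memories depends only on the ratio of their temporal distances from the present, then rescaling all distances leaves relative retrievability unchanged, which is one sense in which memory is time-scale invariant. The exponential similarity function and the constant c below are illustrative assumptions in the spirit of such models, not the specific model discussed in the talk.

    import math

    def confusability(t_i, t_j, c=1.0):
        # Similarity depends only on the ratio of temporal distances, so
        # multiplying every distance by the same factor changes nothing.
        return math.exp(-c * abs(math.log(t_i / t_j)))

    def distinctiveness(distances, c=1.0):
        # A memory is easy to retrieve to the extent that it is temporally
        # distinct, i.e. dissimilar from the other memories on this dimension.
        return [1.0 / sum(confusability(t_i, t_j, c) for t_j in distances)
                for t_i in distances]

    recent = [1, 2, 4, 8]        # temporal distances, e.g. seconds ago
    older = [10, 20, 40, 80]     # the same pattern of events, ten times older
    print(distinctiveness(recent))
    print(distinctiveness(older))   # identical scores: time-scale invariance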
Short biography:
Gordon
Brown leads the Behavioural Science Group in the
Department of Psychology at the University of Warwick,
where he was appointed in 1994. He has held posts at the
University of Wales, the University of Hong Kong, and the
University of Essex and has over 150 academic
publications. Much of his recent research has involved
computational models of human timing and memory and, at
the interface between economics and psychology, the
psychology of judgement and decision-making as applied to
consumer choice as well as agent-based models of political
polarisation.
Professor Alan Bundy, School of Informatics, University
of Edinburgh
Topic: "Representation change"
Abstract:
Human-like computing will entail the building of
hybrid teams of persistent, autonomous agents: robots,
softbots and humans. By 'persistent', we mean that they
will have to deal with changes to both their goals and
their environments, including changes to the agents with
which they must interact. Such persistent agents must have
internal representations of their environment, including
of other agents. Autonomy will entail that these
representations must themselves change automatically as
the agent's goals and environments evolve. The changes
must be to the language of the representation as well as
to the beliefs represented in this language. Despite its
importance, automatic language change is a neglected
research area. We will illustrate the need for such
automated representational change, and describe some early
experimental systems that implement it.
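As a toy illustration of the kind of change involved (invented for this summary, and much simpler than the experimental systems referred to above), an agent can repair a conflict between beliefs not by discarding one of them but by changing the representation language itself, here by adding an argument to a predicate so that the beliefs no longer clash:

    # Beliefs about a predicate: list of (arguments, value, source context).
    beliefs = {"weight": [(("ball",), 9.8, "on_earth"),
                          (("ball",), 1.6, "on_moon")]}

    def detect_conflict(facts):
        # Two beliefs conflict if the same arguments map to different values.
        seen = {}
        for args, value, context in facts:
            if args in seen and seen[args] != value:
                return args
            seen[args] = value
        return None

    def add_context_argument(facts):
        # Language change: extend the predicate's arity with the context,
        # rather than deleting either of the conflicting beliefs.
        return [(args + (context,), value) for args, value, context in facts]

    if detect_conflict(beliefs["weight"]):
        beliefs["weight"] = add_context_argument(beliefs["weight"])
    print(beliefs["weight"])
    # [(('ball', 'on_earth'), 9.8), (('ball', 'on_moon'), 1.6)]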
Short biography:
Alan Bundy is Professor of Automated Reasoning at
the University of Edinburgh. He is a fellow of the Royal
Society, the Royal Academy of Engineering and the
Association for Computing Machinery. He was awarded the
IJCAI Research Excellence Award (2007), the CADE Herbrand
Award (2007) and a CBE (2012). He was Edinburgh's Head of
Informatics (1998-2001) and a member of: the
Hewlett-Packard Research Board (1989-91); both the 2001
and 2008 Computer Science RAE panels (1999-2001,
2005-2008). He was the founding Convener of UKCRC
(2000-2005) and a Vice President of the BCS (2010-12). He
is the author of over 290 publications.
Professor Nick Chater, Department of
Psychology,
University of Warwick
Topic: "Virtual bargaining as a theory of social
interaction and communication"
Abstract:
Successful social interaction between agents
(whether human or artificial) involves coordinating
thoughts and behaviour. But how can such coordination be
achieved? If each agent attempts to second-guess the
thoughts and behaviour of the other, there is a danger of
an infinite regress. A tries to infer what B will do; and
knows that B will try to infer what A will do; so A needs
to figure out what B thinks that A will do; but what A
will do in turn depends on what B thinks A thinks that B
will do, and so on, forever. We introduce a different
approach: agents should reason jointly about what they
would agree to think or do, were they able to negotiate.
That is, they reason not about “What will you do?” and
“What should I do?”, but rather “What should we agree to
do?” Where it is “obvious” what the result of such
negotiation would be, no actual communication is required:
agents can coordinate their thoughts and actions through a
simulation of the bargaining process. Virtual bargaining
provides a new foundation for understanding the reasoning
that underpins social behaviour, including communication
itself.
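As a deliberately simplified illustration (the game, the payoff numbers and the "veto" criterion below are invented for this summary, not the formal theory presented in the talk), each player in a coordination game can simulate the bargain the pair would strike and then simply play their own part of it, with no actual communication:

    from itertools import product

    # A simple Hi-Lo coordination game: (row player payoff, column player payoff).
    payoffs = {("hi", "hi"): (10, 10), ("hi", "lo"): (0, 0),
               ("lo", "hi"): (0, 0),   ("lo", "lo"): (1, 1)}
    actions = ["hi", "lo"]

    def virtual_bargain(payoffs, actions):
        # Simulate the negotiation: consider every joint choice, discard any
        # that either player would veto (here, anything paying them 0), and
        # agree on the profile that is best for both.
        feasible = [p for p in product(actions, repeat=2) if min(payoffs[p]) > 0]
        return max(feasible, key=lambda p: payoffs[p])

    agreement = virtual_bargain(payoffs, actions)
    print(agreement)   # ('hi', 'hi'): each player just plays its part of the
                       # simulated agreement, avoiding the infinite regress.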
Short biography:
Nick Chater joined WBS in 2010, after holding
chairs in psychology at Warwick and UCL. He has over 200
publications, has won four national awards for
psychological research, and has served as Associate Editor
for the journals Cognitive Science, Psychological Review,
and Psychological Science. He was elected a Fellow of the
Cognitive Science Society in 2010 and a Fellow of the
British Academy in 2012. Nick is co-founder of the
research consultancy Decision Technology; and is on the
advisory board of the Cabinet Office's Behavioural
Insights Team (BIT), popularly known as the 'Nudge Unit'.
Professor Anthony Cohn, School of
Computing, University of Leeds
Topic: "Spatial reasoning"
Abstract:
Being able to represent space and time is
fundamental to an agent’s ability to operate effectively
in the world it inhabits, to process language, and to
recognise activities which it observes. In
this talk I will present approaches to represent, reason
about, and also to learn such spatio-temporal knowledge,
focussing on qualitative representations. These have a
number of advantages, especially in relation to human
level computing, since much of human spatial knowledge is
qualitative, certainly as it appears in language. I will
also discuss the issue of grounding spatio-temporal
language to the visual world.
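To give a flavour of what a qualitative spatial representation distinguishes (a toy, one-dimensional rendering invented for this summary, not the calculi presented in the talk), the sketch below classifies the RCC-8-style relation between two regions represented as closed intervals:

    def rcc8(x, y):
        # Qualitative relation between two closed 1-D regions, each given as a
        # (start, end) pair with start < end. A toy stand-in for a spatial calculus.
        (a1, b1), (a2, b2) = x, y
        if (a1, b1) == (a2, b2):
            return "EQ"       # identical regions
        if b1 < a2 or b2 < a1:
            return "DC"       # disconnected
        if b1 == a2 or b2 == a1:
            return "EC"       # externally connected (touching)
        if a2 < a1 and b1 < b2:
            return "NTPP"     # x strictly inside y
        if a1 < a2 and b2 < b1:
            return "NTPPi"    # y strictly inside x
        if (a1 == a2 and b1 < b2) or (a2 < a1 and b1 == b2):
            return "TPP"      # x inside y, sharing a boundary
        if (a1 == a2 and b2 < b1) or (a1 < a2 and b1 == b2):
            return "TPPi"     # y inside x, sharing a boundary
        return "PO"           # partial overlap

    print(rcc8((0, 2), (2, 5)))   # EC
    print(rcc8((1, 3), (0, 5)))   # NTPP
    print(rcc8((0, 4), (2, 6)))   # PO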
Short biography:
My PhD was in the area of many sorted logic, a way
of encoding taxonomic knowledge efficiently in a
computational logic. After that I got interested in naïve
physics and common sense knowledge and focussed in
particular on spatial representation and reasoning, which
is fundamental for any agent operating in a physical
world. My particular interest is qualitative spatial
representation and reasoning and I am known as one of the
founders of this field. My present focus is mostly on
*using* such calculi for activity modelling and as the
representation language for machine learning of activity
models, exploiting and developing a variety of machine
learning techniques. I have also become interested in
grounding language in vision, in particular relating to
unsupervised learning of the perceptual semantics of
spatial language and activity descriptions.
Professor Simon
Colton, Department of Computing, Goldsmiths
College, University of London
Topic: "Computational Creativity in Human Society "
Abstract:
Simulating human-like creative behaviours and
producing artefacts of real cultural value has long been a
prized goal in AI research, and has been studied
intensively in the sub-field of Computational Creativity.
As this area begins to draw in researchers from broader AI
fields, and begins to make an impact outside of
academic research, it is worth reflecting on some lessons
learned with respect to the way creative software can
become a part of human cultures. In many respects, a
celebration of creativity is actually a celebration of
humanity itself, and some art forms, for instance poetry,
serve - at least in part - to help people make connections
to other people, over and above the value of the art
itself. This raises questions about the role of creative
software in such a context, and in the talk, I’ll
highlight and try to address some stakeholder issues
we have faced with projects such as The Painting
Fool (thepaintingfool.com), The WhatIf Machine
(whim-project.eu) and our latest offering, Gamika
Technologies (metamakersinstitute.com).
Short biography:
Simon Colton is a Professor of Digital Game
Technologies at Falmouth University, and part-time
Professor of Computational Creativity in the Department of
Computing at Goldsmiths College, University of London. He
holds an ERA Chair and an EPSRC leadership fellowship, and
was previously a Reader in Computational Creativity in the
Department of Computing at Imperial College, London. He is
an Artificial Intelligence researcher, specialising
in questions of Computational Creativity by developing and
investigating novel AI techniques and applying them
to creative tasks in domains such as pure mathematics,
graphic design, video game design, creative language and
the visual arts. By taking an overview of creativity in
such domains, he has added to the philosophical discussion
of creativity, by addressing issues raised by the idea of
autonomously creative software. This has helped drive
forward various formalisms aimed at bringing
more rigour to the assessment of creativity in software.
Prof. Colton has also advanced public engagement around
issues of Computational Creativity through the development
and public-deployment of creative software, which has led
to the study of stakeholder issues in the field, and
offers prospects for commercialisation projects.
Professor Ulrike
Hahn, Department of Psychological Sciences, Birkbeck,
University of London
Topic: "Lessons for Human-Like Computing from Cognitive
Modelling"
Abstract:
The talk gives a brief overview of approaches to
computational modelling within Cognitive Psychology and
Cognitive Science, seeking to highlight relevant criteria
of 'success'. The implications of this for human-like
computing are then discussed.
Short biography:
Ulrike Hahn, Professor of Psychology in the Department of
Psychological Sciences, has been awarded the Alexander von
Humboldt Foundation Anneliese Maier Research Award. This
award is presented to world class researchers in the
humanities and social sciences with the aim of encouraging
collaboration between international researchers in
Germany. Winners work on research projects funded for up
to five years. Professor Hahn’s research investigates
aspects of human cognition including argumentation,
decision-making, concept acquisition, and language
learning. Her work involves both experimentation and
modelling. She is Director of the Centre for Cognition,
Computation and Modelling which was launched earlier in
2013.
Dr Caroline Jay, School of
Computer Science, University of Manchester
Topic: "Human-like software engineering"
Abstract:
Engineering software is a challenging endeavour.
Development processes are incrementally improving,
allowing us to construct increasingly complex artefacts,
yet software continues to contain errors, or behave in
unforeseen ways. This is partly due to the 'unknown
unknowns' introduced by a changing external environment,
but it is also because algorithms often fail to work as
expected: the formal representations underlying machine
computation are frequently at odds with the heuristics
used by the human brain. Observation of the programming
process has resulted in huge technological advances. A
notable example of this is locality of reference, a
principle uncovered when trying to ascertain how to page
data in and out of memory, which has gone on to touch
virtually every aspect of modern systems. What we
understand about human-machine interaction in software
engineering remains limited, however, and progress in
development has occurred primarily through craft-based
iteration, rather than rigorous empirical study. Advances
in hardware, such as parallel processing, have yet to
achieve their full potential, as we struggle to translate
serial human-written programs onto distributed
architectures. Automated programming offers a means to
reduce and repair errors, but even with machine-written
programs, human input to a system means a bottleneck will
always remain. As we move into the era of quantum
computing and beyond, a true understanding of how our
minds map themselves onto the machines we create is a
vital component of achieving a step change in the creation
and performance of software.
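For readers unfamiliar with the paging example, the sketch below (illustrative only, not from the talk) shows how a least-recently-used replacement policy exploits the locality of reference exhibited by human-written programs: a reference string with strong temporal locality incurs far fewer page faults than one that keeps revisiting a wider set of pages.

    from collections import OrderedDict

    def lru_faults(reference_string, frames):
        # Count page faults under LRU replacement. LRU works well precisely
        # because programs show temporal locality: a page referenced recently
        # is likely to be referenced again soon.
        memory = OrderedDict()   # page -> None, ordered by recency of use
        faults = 0
        for page in reference_string:
            if page in memory:
                memory.move_to_end(page)        # refresh recency
            else:
                faults += 1
                if len(memory) == frames:
                    memory.popitem(last=False)  # evict least recently used
                memory[page] = None
        return faults

    local = [1, 1, 2, 2, 1, 2, 3, 3, 3, 2]        # strong locality
    scattered = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]    # cycles over more pages
    print(lru_faults(local, frames=2), lru_faults(scattered, frames=2))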
Short biography:
Caroline Jay is a Senior Lecturer in Empirically Sound
Software Engineering in the School of Computer Science at
the University of Manchester. She is qualified as both a
Psychologist (BA, CPsychol) and Computer Scientist (MSc,
PhD), and undertakes research crossing these domains. She
is a Fellow of the Software Sustainability Institute, and
an advocate for open and reproducible science. She is
currently leading the 'Data Science Meets Creative Media’
project between the University of Manchester and BBC
Research and Development.
Professor
Pat Langley, Institute for the Study of Learning
and Expertise (ISLE), Palo Alto, California
Topic: "Intelligent Behavior in Humans and Machines"
Abstract:
In this talk, I review the role of cognitive
psychology in the origins
of artificial intelligence and in our pursuit of AI's
initial objectives.
I examine how many key ideas about representation,
performance, and learning had their inception in
computational models of human cognition, and I argue that
this approach to developing intelligent systems, although
no longer common, has an important place in the field. Not
only will research in this paradigm help us better
understand human mental abilities, but findings from
psychology can serve as useful heuristics to guide our
search for intelligent artifacts. I also claim that
another psychological notion - cognitive architecture - is
especially relevant to developing unified theories of the
mind and integrated intelligent systems.
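As a minimal illustration of what "cognitive architecture" refers to (a toy recognise-act cycle with invented rules and facts, not any particular architecture discussed in the talk), the fixed machinery below repeatedly matches production rules against working memory and adds their conclusions until no rule can fire:

    def run_production_system(memory, rules, max_cycles=20):
        # Each rule is (preconditions, conclusion): if every precondition is in
        # working memory and the conclusion is not yet there, the rule can fire.
        for _ in range(max_cycles):
            for pre, add in rules:
                if pre <= memory and add not in memory:
                    memory.add(add)       # conflict resolution: first match wins
                    break
            else:
                return memory             # quiescence: no rule fired this cycle
        return memory

    rules = [
        ({"parent(tom, bob)"}, "ancestor(tom, bob)"),
        ({"parent(bob, ann)"}, "ancestor(bob, ann)"),
        ({"ancestor(tom, bob)", "ancestor(bob, ann)"}, "ancestor(tom, ann)"),
    ]
    memory = {"parent(tom, bob)", "parent(bob, ann)"}
    print(run_production_system(memory, rules))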
Short biography:
Dr. Pat Langley serves as Director of the Institute
for the Study of Learning and Expertise and as Honorary
Professor of Computer Science at the University of
Auckland. He has contributed actively to artificial
intelligence and cognitive science for over 35 years; he
was founding Executive Editor of Machine Learning, and he
is currently Editor for Advances in Cognitive Systems. His
ongoing research focuses on induction of explanatory
scientific models and on architectures for intelligent
agents.
Professor Denis Mareschal, Centre
for Brain and Cognitive Development, School of
Psychology, Birkbeck College
Topic: "Constraints on Children’s Learning Across
Development"
Abstract:
Since the seminal work of Piaget we have understood
that children differ in the way they approach learning and
problem solving, depending on their age. Debates have
focussed on whether learning was qualitatively different
across ages, or rather, whether the same basic mechanisms
operated at different ages, but with ever-greater world
knowledge with increasing age. With this context in mind,
I will discuss two major factors impacting on the very
impressive early human learning: (1) one-shot learning (or
fast mapping), whereby children appear to learn words or
concepts robustly from a single exposure, and (2) socially
guided learning, whereby children learn best from
trustworthy conspecific agents. In each case, I will
illustrate the impressive power of these two inductive
constraints, but also give examples of where they fall
down and of the impressive limitations of children’s
learning. I will argue that these two forms of inductive
biases arise from general learning and orienting
mechanisms rather than specialised, human-specific,
modules.
Short biography:
Denis Mareschal is Professor of Psychology and
Director of the Centre for Brain and Cognitive
Development at Birkbeck, University of London. His first
degree was in Natural Sciences (Physics and Theoretical
Physics) from Cambridge University, after which he
obtained an MA in psychology from McGill University,
followed by a DPhil in Psychology from Oxford
University. His research centers on developing
mechanistic models of perceptual and cognitive development
in infancy and childhood. His work combines computational
modelling, behavioural experiments and neuroimaging to
elucidate the mechanisms underlying human learning as it
unfolds across child development. He has published over 80
refereed journal articles and 4 monographs, including most
recently Educational Neuroscience published by OUP. He has
received the Marr prize from the Cognitive Science Society
(USA), the Young Investigator Award from the International
Society on Infant Studies (USA), and the Margaret
Donaldson Prize from the British Psychological Society, as
well as the Queen's Anniversary Prize for Higher and
Further Education, and a Royal Society-Wolfson research
merit award. He is a fellow of the British Psychological
Society and the Association for Psychological Science,
and served for 8 years as Editor-in-Chief of
Developmental Science, the leading journal of scientific
developmental psychology.
Professor
Stephen Muggleton, Department of Computing, Imperial
College London
Topic: "Human-machine learning"
Abstract:
Traditionally Machine Learning has been seen as an
area in which computer programs are used to automatically
devise a prediction function on the basis of large
quantities of data. In this talk we will argue that the
properties of such computational systems differ radically
from those of human learning, which, unlike Machine
Learning, progresses incrementally over a lifetime and
involves building structured modules which allow
multi-modal integration of sensors, motor actions and
high-level plans. Consequently there has been little
research to date on the topic of how to effectively
integrate human and machine learning for tasks which
involve effective collaboration between human and
machine agents which learn symmetrically from each other.
In this talk we will explore the requirements in this case
for what we will call Human-Machine Learning, and some of
the ongoing research relevant to this topic.
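As a small, invented illustration of the symbolic and data-efficient style of learning that Inductive Logic Programming brings to such a collaboration (this is Plotkin-style least general generalisation, not Progol or Metagol), two ground facts can be generalised into a single relational pattern:

    def lgg_atoms(a, b):
        # Least general generalisation of two atoms of the same predicate,
        # written as tuples such as ("grandparent", "tom", "ann"). Differing
        # constants are replaced by a shared variable, one per pair of terms.
        assert a[0] == b[0] and len(a) == len(b), "need same predicate and arity"
        variables = {}
        general = [a[0]]
        for s, t in zip(a[1:], b[1:]):
            if s == t:
                general.append(s)
            else:
                general.append(variables.setdefault((s, t), "X%d" % len(variables)))
        return tuple(general)

    example_1 = ("grandparent", "tom", "ann")
    example_2 = ("grandparent", "sue", "joe")
    print(lgg_atoms(example_1, example_2))   # ('grandparent', 'X0', 'X1')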
Short biography:
Stephen Muggleton is Professor of Machine Learning
in the Department of Computing at Imperial College London
and is internationally recognised as the founder of the
field of Inductive Logic Programming. SM’s career has
concentrated on the development of theory, implementations
and applications of Machine Learning, particularly in the
field of Inductive Logic Programming (ILP) and
Probabilistic ILP (PILP). Over the last decade he has
collaborated with biological colleagues, such as Prof Mike
Sternberg, on applications of Machine Learning to
Biological prediction tasks. SM’s group is situated within
the Department of Computing and specialises in the
development of novel general-purpose machine learning
algorithms, and their application to biological prediction
tasks. Widely applied software developed by the group
includes the ILP system Progol (whose original publication has over 1600
citations on Google Scholar) as well as a family of
related systems including ASE-Progol (used in the Robot
Scientist project), Metagol and Golem.
Professor Stephen
Payne, Department of Computer Science, University of
Bath
Topic: "Sensemaking"
Abstract:
Sensemaking refers to the behavioural and
cognitive processes required to find, collect and
understand wide-ranging information about a complex
multi-faceted topic. In library and information studies,
the term Sensemaking has been used to broaden the
conception of human knowledge, so as to incorporate
dynamic and collaborative processes (Dervin, 1998). In
cognitive science and human-computer interaction,
Sensemaking has been used to label a process of
schema-formation that is distributed in time and across
people and devices (Russell et al., 1993). In this talk I
will draw closer links between Sensemaking and the
psychology of comprehension, and review some pertinent
laboratory studies, so as to ask finer-grained questions
about the cognitive capabilities that allow sensemaking in
a world where there is typically more information
available than can be read and understood.
Short biography:
Professor Stephen Payne is an academic cognitive
scientist with a particular interest in human-computer
interaction, currently Professor of Human-Centric Systems
in the Department of Computer Science at the University of
Bath. His research interests span cognitive science and
human-computer interaction. He is currently researching how
individuals allocate time and effort across multiple
tasks; how technology supports and shapes collaborative
problem solving and the formation and maintenance of
friendships; emotional and motivational constraints on the
exploration of novel interactive services.
Alex
Polozov, Computer Science & Engineering
Department, University of Washington, Seattle
Topic: "Automated Program Synthesis"
Abstract:
Program synthesis is the task of automatically
finding a program in the underlying programming language
that accomplishes the user's intent, given in the form of
some specification. Despite being a golden dream of
software engineering for decades, it gained significant
traction only in the last 15 years, when novel search
algorithms, achievements in SAT solving, and Moore's law
made many non-trivial synthesis problems tractable. Since
then, program synthesis has been successfully applied to
numerous domains, including tutoring systems, data
cleaning, task automation, robotics, and even discovering
biological phenomena. Methods of program synthesis are
traditionally categorized by (a) the form of intent
specification that they accept, and (b) the underlying
search algorithm. The challenge of the former lies in
ambiguity: the specification often communicates only
partial intent, and synthesizers need intuitive user
interaction models to arrive at the correct program. The
challenge of the latter lies in navigating an enormous
space of possible candidate programs in the language. In
this talk, I will give a high-level overview of the field
of program synthesis, its most prominent challenges, the
most popular techniques, and some influential
applications.
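A minimal sketch of the programming-by-example setting may help fix ideas; the tiny DSL and the brute-force enumeration below are invented for this summary and bear no relation to the actual algorithms in PROSE or other synthesizers:

    from itertools import product

    # A tiny, invented DSL of string transformations.
    PRIMITIVES = {"lower": str.lower,
                  "upper": str.upper,
                  "strip": str.strip,
                  "first_word": lambda s: s.split()[0] if s.split() else s}

    def synthesize(examples, max_length=3):
        # Enumerate compositions of primitives, shortest first, and return the
        # first program consistent with every input-output example.
        for length in range(1, max_length + 1):
            for names in product(PRIMITIVES, repeat=length):
                def run(s, chain=names):
                    for name in chain:
                        s = PRIMITIVES[name](s)
                    return s
                if all(run(inp) == out for inp, out in examples):
                    return names
        return None

    examples = [("  Alice Smith ", "alice"), ("BOB Jones", "bob")]
    print(synthesize(examples))     # e.g. ('lower', 'first_word')

Even in this toy setting several different compositions fit the examples, which is the specification ambiguity mentioned above.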
Short biography:
Alex Polozov is a graduate student at University of
Washington in Seattle, USA, and a founding member of the
Microsoft Program Synthesis by Examples (PROSE) group. His
work includes inductive program synthesis, its
applications to data wrangling, intelligent tutoring
systems, and software engineering, as well as
investigating human-computer interaction in the context of
automatic learning systems. He is interested in combining
symbolic and stochastic approaches to fundamental AI, and
integrating domain-specific reasoning into machine
learning algorithms. Together with the PROSE group, Alex
is building an algorithmic framework that powers numerous
programming-by-example features in Microsoft products,
including Excel, Cortana, and Azure services.
Professor Yvonne
Rogers, Department of Computer Science,
University College London
Topic: "Human-Centred Data: Beyond AI "
Abstract:
Artificial intelligence (AI) is back in the ascendancy.
Without question it is an exciting time to be working in
AI. Deep learning is very much at the heart of this
renaissance; enabling core AI research areas, such as
natural language processing and computer vision, to make
significant strides, developing more accurate
classification and recognition techniques. A diversity of
areas, including advertising, search, security, media
filtering, social media profiling, logistics, and content
curation are benefiting from the application of the new
generation of algorithms. So far, much of the focus in AI
has been on the artificial - making machines smarter –
with some adverse publicity arising as a result. For
example, modeling interaction with users in order to
optimize presentation of content has treated the users as
passive subjects that should be persuaded to click through
to the proposed pages. Humans are more often left out of
the loop in the push for ever more optimization and
efficiency. But the HCI community argues the
opposite: humans should be viewed as central to tech
development. A core concern is how best to optimize
synergy in our interactions, not efficiency. In my talk, I
will introduce the research we have been doing on data:
shifting from an automated data science perspective to a
human-centered one.
Short biography:
Yvonne Rogers is a Professor of Interaction Design,
the director of UCLIC and a deputy head of the Computer
Science department at UCL. Her research interests are in
the areas of ubiquitous computing, interaction design and
human-computer interaction. A central theme is how to
design interactive technologies that can enhance life by
augmenting and extending everyday, learning and work
activities. This involves informing, building and
evaluating novel user experiences through creating and
assembling a diversity of pervasive technologies. Yvonne
is the PI at UCL for the Intel Collaborative Research
Institute on Sustainable Connected Cities which was
launched in October 2012 as a joint collaboration with
Imperial College. She was awarded a prestigious EPSRC
dream fellowship, rethinking the relationship between
ageing, computing and creativity. Food for Thought:
Thought for Food is the result of a workshop arising from
it, comprising a number of resources, including a short
documentary and the participants' reflections on dining,
design and novel technology. She is a visiting professor
in the Psychology Department at Sussex University and an
honorary professor in the Computer Science department at
the University of Cape Town.
Professor Claude
Sammut, Computer Science and Engineering at the
University of New South Wales
Topic: "Logic-based robotics"
Abstract:
Robot software architectures are often
characterised as hierarchical systems where the lower
layers handle motor control and feature extraction from
sensors, and the higher layers deal with problem solving
and planning. The lower layers usually deal with
continuous, noisy data at short time scales, whereas the
upper layers work on longer time scales and treat the
world as being more discrete and predictable. Early
attempts at building integrated robot systems focussed
more on the higher levels but often failed because they
were unable to handle the uncertainty inherent in
the physical world. Recent progress in robotics owes much
to the development of probabilistic and behaviour based
methods that overcome some of the shortcomings of the
early approaches. However, high level symbolic
reasoning and learning still have important roles to play.
We describe our work on hierarchical robot software
architectures that combine symbolic and sub-symbolic
methods for learning complex behaviours. Relational
learning is used to acquire an abstract model of robot
actions that is then used to constrain sub-symbolic
learning for low-level control. Models can be variously
expressed in the classical STRIPS representation, as
qualitative models or as teleo-reactive programs. The talk
will give examples of each in the context of the
RoboCup Rescue and the RoboCup Standard Platform
competitions.
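By way of illustration of one of the model forms mentioned above (a toy example invented for this summary), a teleo-reactive program is an ordered list of condition-action rules that is re-evaluated continuously, with rules nearer the top corresponding to states closer to the goal:

    def teleo_reactive_step(state, program):
        # Return the action of the first rule whose condition holds in the
        # current state; re-running this every step makes behaviour reactive.
        for condition, action in program:
            if condition(state):
                return action
        return "idle"

    # Toy domain: a robot that must be at the ball before it can grasp it.
    program = [
        (lambda s: s["holding"], "task_complete"),
        (lambda s: s["at_ball"], "grasp"),
        (lambda s: True,         "move_to_ball"),
    ]

    state = {"at_ball": False, "holding": False}
    print(teleo_reactive_step(state, program))   # move_to_ball
    state["at_ball"] = True
    print(teleo_reactive_step(state, program))   # grasp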
Short biography:
Claude Sammut is a Professor of Computer Science
and Engineering at the University of New South Wales. His
early work on relational learning helped to lay the
foundations for the field of Inductive Logic Programming
(ILP). With Donald Michie, he also did pioneering work in
Behavioural Cloning. His current work is focussed on
learning in robotics. He is a mentor for the UNSW teams
that have been RoboCup champions five times in the
Standard Platform league and the teams that won the award
for best autonomous robot at RoboCup Rescue three
times. In 2012, he was elected to the board of trustees of
the RoboCup Federation and is the general chair of RoboCup
2019, to be held in Sydney. He is also
co-editor-in-chief of Springer's Encyclopaedia of Machine
Learning and Data Mining.
Dr Amanda Seed,
School of Psychology and Neuroscience, University of St
Andrews
Topic: "What cognitive mechanisms underpin social and
physical problem-solving in non-verbal creatures?
Searching for the conceptual middle-ground"
Abstract:
Recent work in comparative psychology has revealed
problem-solving abilities in some large-brained species of
animal such as apes, monkeys, corvids and elephants that
seem to defy explanation from traditional models of
associative learning. I will provide some examples
from studies of theory of mind and physical
problem-solving. The difficulty in interpreting
these findings lies with the fact that often subjects fall
short of the kinds of solutions adult humans would be
expected to find. Applying labels from human
cognitive psychology to explain the performance of animals
(such as causal reasoning, or intention understanding) has
therefore met with reasonable resistance.
Explanations of the third kind (Call & Tomasello,
2005) try to find a middle ground between these extremes,
but lack theoretical models that specify the cognitive
mechanisms involved. I will describe two lines of research
aimed at addressing this problem: one an AHRC-funded
research project on ‘re-thinking mind and meaning’ that is
trying to grapple with conceptual issues such as the
distinction between implicit vs. explicit thinking; and
another an ERC-funded project trying to apply a Bayesian
modelling approach to move beyond null-hypothesis testing
in comparative psychology.
Short biography:
Amanda Seed is a comparative and developmental
psychologist studying the evolution of cognition, in
particular causal reasoning, episodic thinking and
executive function in primates and children. She was
recently awarded an ERC Starting Grant to explore the
relationship between some of these different cognitive
skills and how they combine to affect performance on
problem-solving tasks. The motivation for this
research is to shed light on the evolutionary changes in
representational, mnemonic and executive processes that
marked the origins of uniquely human thinking. Amanda is a
Senior Lecturer at the School of Psychology and
Neuroscience at the University of St Andrews where she is
a member of the Centre for Social Learning and Cognitive
Evolution, and the Scottish Primate Research Group.
She is the Director of the ‘Living Links to Human
Evolution’ Centre at Edinburgh Zoo, where capuchin and
squirrel monkeys take part in cognitive experiments in
full view of the visiting public, with accompanying
displays for public engagement with science.
Professor Mark Steedman, School
of Informatics, University of Edinburgh
Topic: "Computational linguistics and artificial
intelligence"
Abstract:
There is a long tradition associating language and
other serial cognitive behavior with an underlying motor
planning mechanism (Piaget 1936, Lashley 1951, Miller et
al. 1960, passim). The evidence is evolutionary,
neurophysiological, and developmental. It suggests that
language is much more closely related to embodied
cognition than current linguistic theories of grammar
suggest. The talk argues that practically every aspect of
language reflects this connection transparently. Building
on planning formalisms developed in Robotics and AI, with
some attention to applicable machine learning techniques,
two basic operations corresponding to seriation and
affordance will be shown to provide the basis for both
plan-composition in animals, and long-range dependency in
human language, of the kind found in constructions like
relative clauses and coordination. A connection this
direct raises a further obvious question: If language is
so closely related to animal planning, why don't any other
animals have language? The talk will further argue that
the specific requirements of human collaborative planning,
involving actions like helping and promising that depend
on an understanding of other minds that has been found to
be lacking in other animals, provide a distinctively
semantic precursor for recursive aspects distinguishing
human language from animal communication. It will show
that the automaton that is minimally necessary to conduct
search for collaborative plans, which is of only slightly
greater generality than the push-down automaton, is
exactly the automaton that also appears to characterize
the parsing problem for natural languages.
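For readers unfamiliar with Combinatory Categorial Grammar (see the biography below), the toy sketch here shows two of its combinators; the encoding, lexicon and category names are invented for illustration and are not the formalism as presented in the talk:

    def forward_apply(left, right):
        # X/Y applied to Y yields X (functional application).
        if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
            return left[0]
        return None

    def backward_apply(left, right):
        # Y combined with X\Y yields X.
        if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
            return right[0]
        return None

    def forward_compose(left, right):
        # X/Y composed with Y/Z yields X/Z: the combinator that lets incomplete
        # constituents (or partial plans) chain together, supporting the
        # long-range dependencies mentioned above.
        if (isinstance(left, tuple) and left[1] == "/" and
                isinstance(right, tuple) and right[1] == "/" and
                left[2] == right[0]):
            return (left[0], "/", right[2])
        return None

    S, NP = "S", "NP"
    lexicon = {"Anna": NP,
               "likes": ((S, "\\", NP), "/", NP),   # a transitive verb
               "music": NP}
    vp = forward_apply(lexicon["likes"], lexicon["music"])   # (S, '\\', NP)
    print(backward_apply(lexicon["Anna"], vp))               # S
    print(forward_compose((S, "/", "VP"), ("VP", "/", NP)))  # (S, '/', NP)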
Short biography:
Mark Steedman is a computational linguist and cognitive scientist.
Professor Steedman graduated from the University of Sussex
in 1968, with a B.Sc. in Experimental Psychology, and from
the University of Edinburgh in 1973, with a Ph.D. in
Artificial Intelligence.
He has held posts as Lecturer in Psychology, University of
Warwick (1977–83); Lecturer and Reader in Computational
Linguistics, University of Edinburgh (1983-8); Associate
and full Professor in Computer and Information Sciences,
University of Pennsylvania (1988–98). He has held visiting
positions at the University of Texas at Austin, the Max
Planck Institute for Psycholinguistics, Nijmegen, and the
University of Pennsylvania, Philadelphia. Professor
Steedman currently holds the Chair of Cognitive Science in
the School of Informatics at the University of Edinburgh
(1998- ). He works in computational linguistics,
artificial intelligence, and cognitive science, on
Generation of Meaningful Intonation for Speech by
Artificial Agents, Animated Conversation, The
Communicative Use of Gesture, Tense and Aspect, and
Combinatory Categorial Grammar (CCG). He is also
interested in Computational Musical Analysis and
Combinatory Logic.
Professor Josh Tenenbaum,
Department of Brain and Cognitive Sciences,
Massachusetts Institute of Technology
Topic: "Building machines that see, learn and think
like people: Probabilistic programs and program
induction"
Abstract:
Many recent successes in computer vision, machine
learning and other areas of artificial intelligence have
been driven by methods for sophisticated pattern
recognition, such as deep neural networks. But human
intelligence is more than just pattern recognition.
In particular, it depends on a suite of cognitive
capacities for modeling the world: for explaining
and understanding what we see, imagining things we could
see but haven’t yet, solving problems and planning actions
to make these things real, and building new models as we
learn more about the world. I will talk about how we are
beginning to capture these distinctively human capacities
in computational models using the tools of probabilistic
programs and program induction, embedded in a Bayesian
framework for inference from data. These models help to
explain how humans can perceive rich three-dimensional
structure in visual scenes and objects, perceive and
predict objects' motion based on their intrinsic physical
characteristics, and learn new visual object concepts from
just one or a few examples.
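As a toy illustration of Bayesian concept learning from a handful of examples (the hypothesis space and the "size principle" likelihood below are illustrative assumptions made for this summary, not the probabilistic programs described in the talk):

    # Candidate number concepts over 1..100, with uniform priors.
    hypotheses = {
        "even numbers":     set(range(2, 101, 2)),
        "multiples of ten": set(range(10, 101, 10)),
        "powers of two":    {2, 4, 8, 16, 32, 64},
    }

    def posterior(examples, hypotheses):
        # Size principle: each example is assumed drawn uniformly from the
        # concept, so smaller consistent hypotheses receive higher likelihood.
        scores = {}
        for name, h in hypotheses.items():
            consistent = all(x in h for x in examples)
            scores[name] = (1.0 / len(h)) ** len(examples) if consistent else 0.0
        total = sum(scores.values())
        return {name: s / total for name, s in scores.items()}

    print(posterior([16], hypotheses))            # some uncertainty remains
    print(posterior([16, 8, 2, 64], hypotheses))  # "powers of two" dominates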
Short biography:
Professor Josh Tenenbaum studies learning,
reasoning and perception in humans and machines, with the
twin goals of understanding human intelligence in
computational terms and bringing computers closer to human
capacities. His current work focuses on building
probabilistic models to explain how people come to be able
to learn new concepts from very sparse data, how we learn
to learn, and the nature and origins of people's intuitive
theories about the physical and social worlds. He is
Professor of Computational Cognitive Science in the Department of
Brain and Cognitive Sciences at the Massachusetts
Institute of Technology and is a member of the Computer
Science and Artificial Intelligence Laboratory (CSAIL). He
received his Ph.D. from MIT in 1999, and was a member of
the Stanford University faculty in Psychology and (by
courtesy) Computer Science from 1999 to 2002. His papers
have received awards at the IEEE Computer Vision and
Pattern Recognition (CVPR), NIPS, IJCAI and Cognitive
Science Society conferences. He is the recipient of early
career awards from the Society for Mathematical Psychology
(2005), the Society of Experimental Psychologists, and the
American Psychological Association (2008), and the Troland
Research Award from the National Academy of Sciences
(2011).
Professor Manos
Tsakiris, Department of Psychology,
Royal Holloway University of London
Topic: "The Multisensory Basis of the Self"
Abstract:
By grounding the self in the body, experimental
psychology has taken the body as the starting point for a
science of the self. One fundamental dimension of the
bodily self is the sense of body ownership that refers to
the special perceptual status of one's own body, the
feeling that "my body" belongs to me. The primary aim of
this talk is to highlight recent advances in the study of
body ownership and our understanding of the underlying
neurocognitive processes in three ways. I first consider
how the sense of body ownership has been investigated and
elucidated in the context of multisensory integration.
Beyond exteroception, recent studies have considered how
this exteroceptively driven sense of body ownership can be
linked to the other side of embodiment, that of the
unobservable, yet felt, interoceptive body, suggesting
that these two sides of embodiment interact to provide a
unifying bodily self. Lastly, the multisensorial
understanding of the self has been shown to have
implications for our understanding of social
relationships, especially in the context of self-other
boundaries. Taken together, these three research strands
motivate a unified model of the self inspired by current
predictive coding models.
Short biography:
Manos Tsakiris studied psychology and philosophy
before completing his PhD in psychology and cognitive
neurosciences at the Institute of Cognitive
Neuroscience, UCL. He is currently Professor of Psychology
at the Department of Psychology, Royal Holloway,
University of London where he investigates the
neurocognitive mechanisms that shape the experience of
embodiment and self-identity. He is the recipient of
the 2014 Young Mind and Brain Prize and of
the 22nd Experimental Psychology Society Prize.