|
Professor Ivan Bratko, University of
Ljubljana
Topic: "Predicting difficulty of problems for humans"
Abstract:
A question related to Explainable AI is: How can we
automatically predict the difficulty of a given problem
for humans? The difficulty for a human also depends on how
the human would go about solving the problem. The need for
predicting difficulty arises in intelligent interaction
with a user, including intelligent tutoring systems. In
this talk we discuss one approach to the automatic
prediction of the difficulty, for humans, of problems that
are solved through informed search. We present an
experimental study
in predicting the difficulty of tactical chess problems,
and investigate human problem solving in this domain. In
solving tactical problems, humans use pattern-based
knowledge to guide their search extremely effectively.
Problem solving consists of detecting chess patterns
(motifs) and calculating concrete variations that try to
exploit these motifs to the player’s advantage. Our
analysis includes players’ comments on how they tackled
individual problems, and the recordings of their eye
movements during problem solving. The conclusions are
indicative of the importance of calculation of variations
relative to chess pattern knowledge. Also, the findings
suggest how automated assessment of difficulty could be
implemented based on the amount of search needed to solve
the problem. When the amount of search is estimated by a
computer, it is important that the search algorithm takes
into account the chess motifs used by a human, which may
drastically affect the search complexity.
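To make the proposed measure concrete, here is a minimal
Python sketch (an editorial illustration on a toy numeric
puzzle, not the study's chess implementation): difficulty
is proxied by the number of nodes a depth-limited solver
expands, and motif-style pruning of the candidate moves
can shrink that number drastically.

```python
# Editorial sketch: difficulty as the amount of search needed to
# solve a problem. The toy domain and the "motif" are invented.

def nodes_to_solve(state, goal, moves, max_depth):
    """Depth-limited DFS; returns the number of nodes expanded."""
    count = 0
    def search(s, depth):
        nonlocal count
        count += 1
        if s == goal or depth == max_depth:
            return s == goal
        return any(search(m(s), depth + 1) for m in moves(s))
    search(state, 0)
    return count

# Toy puzzle: reach 24 from 1 using the moves +1 and *2.
def all_moves(s):
    return [lambda x: x + 1, lambda x: x * 2]

def motif_moves(s):
    # "Motif": make the state divisible by 3, then keep doubling.
    return [lambda x: x * 2] if s % 3 == 0 else [lambda x: x + 1]

print(nodes_to_solve(1, 24, all_moves, 6))    # blind search: dozens
print(nodes_to_solve(1, 24, motif_moves, 6))  # motif-guided: 6 nodes
```

A node count of this kind, produced by a search algorithm
that uses human-style motifs, is the quantity proposed as
the difficulty predictor.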
Short biography:
Professor Ivan Bratko is head of the Artificial
Intelligence Laboratory at the Faculty of Computer and
Information Science, University of Ljubljana. He has conducted
research in machine learning, knowledge-based systems,
qualitative modelling, intelligent robotics, heuristic
programming and computer chess. He has published over 200
scientific papers and a number of books, including Prolog
Programming for Artificial Intelligence
(Addison-Wesley/Pearson Education, third edition, 2001),
KARDIO: a Study in Deep and Qualitative Knowledge for
Expert Systems (MIT Press, 1989; co-authored by I. Mozetič
and N. Lavrač), and Machine Learning and Data Mining:
Methods and Applications (Wiley, 1998; co-edited by R.S.
Michalski and M. Kubat). He has been a member of the
editorial boards of a number of scientific journals,
including Artificial Intelligence, Machine Learning,
Journal of AI Research, Journal of ML Research, and KAIS
(Journal of Knowledge and Information Systems). He was one
of the founders and the first chairman of SLAIS (Slovenian
AI Society) and chairman of ISSEK, International School
for the Synthesis of Expert Knowledge, based in Udine,
Italy. He is a member of SAZU (the Slovenian Academy of
Sciences and Arts) and a Fellow of ECCAI. He has been a visiting
professor or visiting scientist at various universities,
including Edinburgh University, Strathclyde University,
Sydney University, University of New South Wales,
Polytechnic University of Madrid, University of
Klagenfurt, and Delft University of Technology.
|
|
Professor Alan Bundy, School of Informatics, University
of Edinburgh
Topic: "Modelling Repairs to Virtual Bargaining with
Reformation"
Abstract:
Research by Nick Chater and his team at Warwick has
identified Virtual Bargaining, a technique that
collaborating humans have been found to use when they need
to coordinate under severe communicative constraints. Each
participant imagines themselves into the shoes of the
other participant(s): senders decide how receivers would
interpret each of the limited range of signals that they
can be sent; and receivers use similar reasoning to
interpret these signals and, thereby, decide what actions
to take. Given the limited range of possible signals, a
novel situation sometimes requires an old signal to be
reinterpreted in a new way - even to the extent of
inverting its meaning. Virtual bargaining isn't perfect.
Sometimes receivers misinterpret signals and take the
wrong actions. Then, either senders, receivers or both
need to learn from these failures and generalise their
strategies. We discuss how Reformation, an algorithm
for repairing the concepts in a formal representation, can
be used computationally to model the conceptual changes
involved in this learning process.
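As a toy illustration of the kind of conceptual change
being modelled, consider the following schematic Python
sketch (our invention, not the Reformation algorithm
itself; the signals and actions are made up):

```python
# Toy sketch: a shared signal -> action convention is repaired when
# the receiver's action leads to a coordination failure.

class SignallingConvention:
    def __init__(self, meaning):
        self.meaning = dict(meaning)  # signal -> intended action

    def interpret(self, signal):
        return self.meaning[signal]

    def repair(self, signal, alternatives):
        """After a failure, remap the failed signal to a different
        available action (possibly inverting its old meaning)."""
        old = self.meaning[signal]
        for action in alternatives:
            if action != old:
                self.meaning[signal] = action
                return action
        return old

convention = SignallingConvention({"flash": "advance"})
# The receiver advanced and the joint task failed, so both parties
# repair the convention, here ending up with the inverted meaning.
convention.repair("flash", alternatives=["advance", "retreat"])
print(convention.interpret("flash"))  # -> "retreat"
```

Reformation works on formal representations rather than
lookup tables, but the repair step it models has this
shape: a failure localises a faulty concept, which is then
minimally changed.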
Short biography:
Alan Bundy is Professor of Automated Reasoning in
the School of Informatics at the University of
Edinburgh. His research interests include: the
automation of mathematical reasoning, with applications to
reasoning about the correctness of computer software and
hardware; and the automatic construction, analysis and
evolution of representations of knowledge. His research
combines artificial intelligence with theoretical computer
science and applies this to practical problems in the
development and maintenance of computing systems. He
is the author of over 300 publications and has held over
60 research grants. He is a fellow of several academic
societies, including the Royal Society, the Royal Society
of Edinburgh, the Royal Academy of Engineering and the
Association for Computing Machinery. His awards include
the IJCAI Research Excellence Award (2007), the CADE
Herbrand Award (2007) and a CBE (2012). He was:
Edinburgh's founding Head of Informatics (1998-2001);
founding Convener of UKCRC (2000-05); and a Vice President
and Trustee of the British Computer Society with special
responsibility for the Academy of Computing (2010-12). He
was also a member of: the Hewlett-Packard Research Board
(1989-91); the ITEC Foresight Panel (1994-96); both the
2001 and 2008 Computer Science RAE panels (1999-2001,
2005-8); and the Scottish Science Advisory Council
(2008-12).
|
|
Professor Nick Chater, Warwick Business
School,
University of Warwick
Title: "Virtual bargaining - A microfoundation for the
theory of social interaction"
Abstract:
How can people coordinate their actions or make joint
decisions? One possibility is that each person attempts to
predict the actions of the other(s), and best-responds
accordingly. But this can lead to bad outcomes, and
sometimes even vicious circularity. An alternative view is
that each person attempts to work out what the two or more
players would agree to do, if they were to bargain
explicitly. If the result of such a "virtual" bargain is
"obvious," then the players can simply play their
respective roles in that bargain. I suggest that virtual
bargaining is essential to genuinely social interaction
(rather than viewing other people as instruments), and may
even be uniquely human. This approach aims to respect
methodological individualism, a key principle in many
areas of social science, while explaining how human groups
can, in a very real sense, be "greater" than the sum of
their individual members.
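A worked micro-example (our illustration, not taken from
the talk) is the Hi-Lo coordination game, which shows both
the circularity of best-response reasoning and the
virtual-bargaining alternative:

```python
# Hi-Lo: both choose A -> (2, 2); both choose B -> (1, 1);
# a mismatch -> (0, 0).
payoff = {("A", "A"): (2, 2), ("B", "B"): (1, 1),
          ("A", "B"): (0, 0), ("B", "A"): (0, 0)}

# Best-response reasoning is circular: A is best only if the other
# plays A, and B is best only if the other plays B -- both are
# equilibria, so prediction alone cannot break the tie.
# Virtual bargaining instead asks what the players would agree to
# if they could bargain explicitly; here, the profile that
# maximises their joint payoff.
virtual_bargain = max(payoff, key=lambda profile: sum(payoff[profile]))
print(virtual_bargain)  # -> ('A', 'A'), the "obvious" agreement
```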
Short biography:
Nick Chater is Professor of Behavioural Science at Warwick
Business School. He works on the cognitive and social
foundations of rationality and language. He has published
more than 250 papers, co-authored or edited more than a
dozen books, has won four national awards for
psychological research, and has served as Associate Editor
for the journals Cognitive Science, Psychological Review,
and Psychological Science. He was elected a Fellow of the
Cognitive Science Society in 2010 and a Fellow of the
British Academy in 2012. Nick is co-founder of the
research consultancy Decision Technology and is a member
of the UK’s Committee on Climate Change. He is the author
of The Mind is Flat (2018).
|
|
Professor Luc De Raedt,
Department of Computer Science, Katholieke
Universiteit Leuven
Topic: "Inductive Modeling for the Automation of
Data Science"
Abstract:
A primary goal of artificial intelligence is to
develop machines that carry out and automate tasks that
require intelligence. This paper focusses on the
automation and democratization of data science. Data
science, and the related fields of machine learning and
data mining, are causing a revolution in both science and
society today. But it requires a lot of effort and labor
to carry out such data science processes as one needs to
select the right subsets of the data, put those data in
the right form, determine what the learning tasks will be,
select the right algorithms, evaluate the results, ask the
experts, etc. The question tackled in the ERC AdG project
SYNTH (Synthesising Inductive Data Models) is how humans
can be supported in the data science process by a number
of tools and techniques that (partly) automate several
steps in this process. We introduce the SYNTH
framework from both the perspective of data scientists and
end-users. From an end-user point of view, SYNTH performs
the task of autocompletion: given a set of spreadsheets
that the user is filling out, it tries to automatically
predict or complete the next value the user will enter,
wherever possible. The front-end of SYNTH
extends traditional spreadsheet software with facilities
for realizing this. These are based on automatically
analyzing, wrangling and transforming the data into a
format that is amenable to data analysis, the learning of
constraints that hold in the data as well as predictive
and probabilistic models, and using probabilistic
reasoning for automatically computing the most likely
target value. The back-end of SYNTH is the SynthLog
language for learning and reasoning, which extends the
probabilistic logic programming language ProbLog with
inductive database principles, and as such treats learned
"inductive" models as first class citizens. In this
way, SyntLog provides support for inductive, deductive and
probabilistic reasoning, for constraint solving, as well
as for machine learning. For more information about the
SYNTH project, and a list of contributors, we refer to
synth.cs.kuleuven.be
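As a schematic sketch of the autocompletion task, consider
the following Python simplification (ours, not the
SynthLog implementation: an empirical lookup stands in for
the learned constraints and probabilistic models):

```python
# Predict a missing cell from rows that are already filled out.
from collections import Counter

def autocomplete(rows, target_col, context_cols, partial_row):
    """Return the most likely value of the target cell given the
    values the user has already entered in the context columns."""
    key = tuple(partial_row[c] for c in context_cols)
    matches = [r[target_col] for r in rows
               if tuple(r[c] for c in context_cols) == key]
    if not matches:
        return None  # no confident completion; leave it to the user
    return Counter(matches).most_common(1)[0][0]

rows = [{"item": "stamp", "unit": 0.8, "qty": 3, "total": 2.4},
        {"item": "stamp", "unit": 0.8, "qty": 5, "total": 4.0}]
print(autocomplete(rows, "unit", ["item"], {"item": "stamp"}))  # 0.8
```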
Short biography:
Professor Luc De Raedt is a full professor and head
of the lab for Declarative Languages and Artificial
Intelligence at KU Leuven. He is a former chair of
Computer Science of the University of Freiburg in Germany.
He received an ERC AdG on automated data science, is a
fellow of EurAI and AAAI, and has chaired several
conferences in artificial intelligence and machine
learning (such as ICML 05 and ECAI 12). Luc's research
interests are in Artificial Intelligence, Machine Learning
and Data Mining, as well as their applications. He is well
known for his contributions in the areas of learning
and reasoning, in particular, for his contributions to
statistical relational learning and inductive programming.
|
|
Professor Mike
Frank, Stanford University
Topic: "Variability and Consistency in Early Language
Learning: The Wordbank Project"
Abstract:
Every typically developing child learns to talk,
but children vary tremendously in how and when they do so.
What predicts this variability? And which aspects of early
language learning are consistent across the world’s
languages and cultures? We use data from tens of thousands
of children learning dozens of different languages to
create a data-driven picture of universals and variation
in early language learning.
Short biography:
Michael C. Frank is the David and Lucile Packard
Professor of Human Biology at Stanford University. He
received his PhD from MIT in Brain and Cognitive Sciences
in 2010. He studies language use and language learning,
and how these interact with social cognition, focusing
especially on early childhood. He is the organizer of the
ManyBabies Consortium, a collaborative replication network
for infancy research, and has led open-data projects
including Wordbank and MetaLab. He has been recognized as
a "rising star" by the Association for Psychological
Science. His dissertation received the Glushko Prize from
the Cognitive Science Society, and he is a recipient of
the FABBS Early Career Impact Award and a Jacobs Advanced
Research Fellowship. He has served as Associate Editor for
the journal Cognition, member and chair of the Governing
Board of the Cognitive Science Society, and was a founding
Executive Committee member of the Society for the
Improvement of Psychological Science.
|
|
Professor Ulrike
Hahn, Department of Psychological Sciences, Birkbeck,
University of London
Topic: "Explanation
for AI systems"
Abstract:
The talk will use recent work seeking to generate
natural language explanations for Bayesian Belief Networks
(BBN) to motivate a more thorough inquiry into what “good
explanations” are in this context. It will draw on
analysis of algorithm performance, a case study in
human-generated explanations of BBN inference, and data from
human behavioural experiments. Implications for
explanation of machine reasoning and decision-making in
general will be discussed.
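As a toy example of the problem being studied (our sketch,
not the talk's system): even for a two-node network, a
verbal explanation must convey both the direction and the
grounds of a belief update.

```python
# Toy sketch: verbalising one step of inference in a two-node
# Bayesian network A -> B, after observing B.
def explain_update(p_a, p_b_given_a, p_b_given_not_a):
    """Posterior of A given B (Bayes' rule), plus a verbal gloss."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    posterior = p_b_given_a * p_a / p_b
    direction = "raises" if posterior > p_a else "lowers"
    strength = "more" if p_b_given_a > p_b_given_not_a else "less"
    return posterior, (
        f"Observing B {direction} the probability of A from {p_a:.2f} "
        f"to {posterior:.2f}, because B is {strength} likely when A "
        f"holds ({p_b_given_a:.2f}) than when it does not "
        f"({p_b_given_not_a:.2f}).")

posterior, text = explain_update(0.3, 0.8, 0.2)
print(text)  # "Observing B raises the probability of A from 0.30..."
```

The hard questions the talk addresses begin where this toy
ends: which of the many true statements about a large
network make a good explanation for a human.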
Short biography:
Ulrike Hahn, Professor of Psychology in the Department of
Psychological Sciences, has been awarded the Alexander von
Humboldt Foundation’s Anneliese Maier Research Award. This
award is presented to world-class researchers in the
humanities and social sciences with the aim of encouraging
collaboration between international researchers in
Germany. Winners work on research projects funded for up
to five years. Professor Hahn’s research investigates
aspects of human cognition including argumentation,
decision-making, concept acquisition, and language
learning. Her work involves both experimentation and
modelling. She is Director of the Centre for Cognition,
Computation and Modelling, which was launched in 2013.
|
|
Prof Patrick Healey, School of
Electronic Engineering and Computer Science at Queen
Mary, University of London
Topic: "Social Health: Mapping the quality of social
interactions in the wild"
Abstract:
Social engagement is an exceptionally strong
predictor of long term physical and mental health.
Socially isolated people have a 2–4 times increased
all-cause mortality after adjusting for biomedical risk
factors (House et al., 1988; Fratiglioni et al., 2004;
Holt-Lunstad, 2010). Our ability to understand and
take advantage of these effects is limited by our ability
to measure social engagement. Current research relies on
coarse-grained, indirect measures such as marital status,
group membership, frequency of contact, frequency of media
use and retrospective self-report (see e.g., Cohen, 2004;
Holt-Lunstad, 2010 for reviews). These measures are
unable to capture the quality or ecology of daily social
interactions: stressful, playful, engaging, hostile, and
so on. Capturing these qualities is an important step toward
uncovering opportunities for both health and policy
interventions. Wearables provide an obvious opportunity to
fill this gap; however, the primary focus of most health
research to date has been on tracking physical activity,
e.g. walking, sleeping, or cycling. We present data
from wrist-mounted triaxial accelerometers which show how
social activity can be reliably sensed from people's hand
movements alone. We discuss the potential of this
approach for unobtrusive tracking of individual and group
social health, the challenges for privacy and sharing, and
the potential applications beyond health.
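A hypothetical sketch of such a sensing pipeline follows
(the features, window length, and sampling rate are
invented placeholders, not the study's actual choices):

```python
# Windowed hand-movement features from a wrist-worn triaxial
# accelerometer, as input to a social/non-social classifier.
import numpy as np

def window_features(acc, fs=50, window_s=5):
    """acc: (n, 3) array of x/y/z acceleration samples at fs Hz.
    Returns one feature row per non-overlapping window."""
    step = fs * window_s
    feats = []
    for start in range(0, len(acc) - step + 1, step):
        mag = np.linalg.norm(acc[start:start + step], axis=1)
        feats.append([mag.mean(),                    # overall activity
                      mag.std(),                     # variability
                      np.abs(np.diff(mag)).mean()])  # jerkiness
    return np.array(feats)

acc = np.random.default_rng(0).normal(size=(500, 3))  # 10 s fake data
print(window_features(acc).shape)  # -> (2, 3): two 5-second windows
# Each row would be labelled "in conversation" / "not in
# conversation" and fed to any standard classifier.
```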
Short biography:
Pat Healey is Professor of Human Interaction and leads the
Cognitive Science Research Group in the School of
Electronic Engineering and Computer Science at Queen Mary,
University of London. He also holds a Turing
Fellowship. His research focuses on the mechanisms
that underpin human-human interaction, especially the ways
in which people detect and recover from misunderstandings.
|
|
Dr Mateja Jamnik, Department of
Computer Science and Technology, University of Cambridge
Topic: "How to Re(represent) It?"
Abstract:
To achieve efficient human computer collaboration,
computers need to be able to represent information in ways
that humans can understand. To select representations
appropriately, AI systems need to have some underlying
theory of the formal and cognitive properties of
representations. In this interdisciplinary project,
we are developing the foundations for the analysis of
representations for reasoning. Ultimately, the goal is to
build AI systems that select representations
intelligently, taking users’ preferences and abilities
into account.
Short biography:
Dr Mateja Jamnik is a Reader in Artificial Intelligence at
the Department of Computer Science and Technology of the
University of Cambridge, UK. She is developing AI
techniques for human-like computing - she computationally
models how people solve problems to enable machines to
reason in a similar way to humans. She is essentially
trying to humanise computer thinking. She applies this AI
technology to medical data to advance personalised cancer
medicine, and to education to personalise tutoring
systems. Mateja is passionate about bringing science
closer to the public and engages frequently with the media
and public science events. Her active support of women
scientists was recognised by the Royal Society, which
awarded her the Athena Prize. Mateja has been advising the
UK government on policy direction in relation to the
impact of AI on society.
|
|
Dr Caroline Jay, School of Computer Science, University of
Manchester
Topic: "Using human vision to automate the
interpretation of complex signal data"
Abstract:
Electrocardiograms (ECGs), which capture the
electrical activity of the human heart, are widely used in
clinical practice, and notoriously difficult to interpret.
Whilst there have been attempts to automate their
interpretation for several decades, human reading of the
data presented visually remains the ‘gold standard’. We
demonstrate how a visualisation technique that
significantly improves human interpretation of ECG data
can be used as a basis for an automated interpretation
algorithm that is more accurate than current signal
processing techniques, and has the benefit of the human
and machine sharing the same representation of the data.
We discuss the benefits and limitations of this approach,
and compare it with machine learning approaches that are
commonly used with medical data, in terms of its accuracy,
efficiency, and acceptability in clinical practice.
Short biography:
Caroline Jay is a Chartered Psychologist and Computer
Scientist at the University of Manchester. Her research
examines how people interact with machines, from using
apps, to designing algorithms. Caroline is the Research
Director of the UKRI Software Sustainability Institute,
and a Fellow of the Alan Turing Institute, where she
leads the project ‘Understanding the relationship between
human health and the environment.' She leads the
University of Manchester Arm of the BBC Data Science
Partnership, and the Software Engineering Learning
Analytics stream at the Institute of Coding.
|
|
Dr Max Kleiman-Weiner, Harvard University
Topic: "Reverse Engineering Human Cooperation"
Abstract:
Human cooperation is distinctly powerful. We collaborate
with others to accomplish together what none of us could
do on our own; we share the benefits of collaboration
fairly and trust others to do the same. Even young
children cooperate with a scale and sophistication
unparalleled in other animal species. I seek to understand
these everyday feats of social intelligence in
computational terms. What are the cognitive
representations and processes that underlie these
abilities and what are their origins? How can we apply
these cognitive principles to build machines that have the
capacity to understand, learn from, and cooperate with
people? I will present a formal framework based on the
integration of individually rational, hierarchical
Bayesian models of learning, together with socially
rational multi-agent and game-theoretic models of
cooperation. First, I investigate the evolutionary origins
of the cognitive structures that enable cooperation and
support social learning. I then describe how these
structures are used to learn social and moral knowledge
rapidly during development. Finally, I show how this
knowledge is generalized in the moment, across an
infinitude of possible situations: inferring the
intentions and reputations of others, distinguishing who
is friend or foe, and learning a new moral value all from
just a few observations of behavior.
Short biography:
Dr. Max Kleiman-Weiner is a fellow of the Data
Science Institute and Center for Research on Computation
and Society (CRCS) within the computer science and
psychology departments at Harvard. He did his PhD in
Computational Cognitive Science at MIT, advised by Josh
Tenenbaum, where he was an NSF and Hertz Foundation Fellow.
His thesis won the 2019 Robert J. Glushko Prize for
Outstanding Doctoral Dissertation in Cognitive Science. He
also won best paper at RLDM 2017 for models of human
cooperation and the William James Award at SPP for
computational work on moral learning. Max serves as Chief
Scientist of Diffeo, a startup building collaborative
machine intelligence. Previously, he was a Fulbright
Fellow in Beijing, earned an MSc in Statistics as a
Marshall Scholar at Oxford, and did his undergraduate work
at Stanford as a Goldwater Scholar.
|
|
Dr Kenneth Kwok, Agency for Science,
Technology and Research (A*STAR), Singapore
Topic: "Cognitive Human-like Empathetic and Explainable
Machine Learning (CHEEM): A human-centric AI
research programme"
Abstract:
AI has made spectacular progress in recent years,
achieving and, in some cases, surpassing human-level
performance. Long-standing problems in computer
vision and speech have been solved, and AI programs have
beaten the best human players in games such as Go,
Jeopardy, and even versions of Poker. Yet,
impressive as these feats are, AI still does not
understand much of what it does, and certainly does not
understand humans and the complexities of the world in
which we operate. For AI to be useful and usable by
humans, much more needs to be done to endow AI with
abilities that are more human-like. I will discuss
current work at A*STAR that aims to address this gap,
towards realising AI that understands humans, and that
humans can understand.
Short biography:
Dr Kenneth Kwok is Principal Scientist at the
Institute of High Performance Computing (IHPC) at the
Agency for Science, Technology and Research (A*STAR) and
Programme Manager of the A*STAR Artificial Intelligence
Programme (A*AI). He heads the Cognitive Systems group
within the Social and Cognitive Computing department in
IHPC, and is the PI of the Human-Centric AI (CHEEM)
Programme under A*AI. He also co-leads the
Collaborative AI for Advanced Manufacturing and
Engineering Programme involving more than 50 scientists
and researchers from A*STAR, the National University of
Singapore, Nanyang Technological University and the
Singapore University of Technology and Design. Prior
to joining IHPC, Kenneth was Programme Director for
Information Exploitation, and later, Programme Director
for Combat Protection and Performance at Singapore’s DSO
National Laboratories. Kenneth’s research interests lie at
the intersection of Cognitive Psychology and Computing.
|
|
Professor Denis Mareschal, Centre
for Brain and Cognitive Development, School of
Psychology, Birkbeck College
Topic: "Fast and slow learning across development"
Abstract:
Children are often notoriously slow at learning new
skills. Yet there is also evidence that in some
circumstances they generalise their knowledge to new
exemplars after very few learning trials. In this talk I
will review evidence of when rapid learning and slow
learning occur, to identify the conditions under which one
or the other mode operates. These conditions will be
illustrated through the use of neural network modelling.
Short biography:
Denis Mareschal obtained his first degree in
Physics and Theoretical Physics from Cambridge University.
He then completed a Masters at McGill University in
psychology and AI, before moving on to complete a PhD in
psychology at Oxford University. He has
received the Marr prize from the Cognitive Science Society
(USA), the Young Investigator Award from the International
Society on Infant Studies (USA), the Margaret Donaldson
Prize from the British Psychological Society, and a
Wolfson-Royal Society research merit award from the Royal
Society. His research centers on developing mechanistic
models of perceptual and cognitive development in infancy
and childhood. He is currently Professor and director of
the Centre for Brain and Cognitive Development at Birkbeck
University of London. Recent books include
Neuroconstructivism (2007), The Making of Human Concepts
(2010), and Educational Neuroscience (2013).
|
|
Professor Peter
Millican, Oxford University
Topic: "Turing and Human-Like Intelligence"
Abstract:
The concept of Human-Like Computing became central to
visions of Artificial Intelligence through the work of
Alan Turing, whose model of computation (1936) was based
on the potential operations of a human "computer", and
whose famous test for intelligent machinery (1950) focused
on indistinguishability from human behaviour. That
test has recently been reconceived by various scholars,
and my first aim will be to settle various interpretative
controversies as conclusively as possible. Then I
shall go on to consider the force of Turing's arguments
for his test and any enduring lessons that can be drawn
from his discussion. My overall conclusion is that
his own position is somewhat confused, giving a criterion
based on superficial similarity to human performance but
at the same time apparently drawing implications about
internal causation (notably in his solipsistic response to
the objection from consciousness). Thus Turing
overemphasises human-likeness - both externally and
internally - even though his overall intention seems to be
to provide a criterion for machine intelligence that is
objective and blind to internal mechanisms. The main
weaknesses of his test follow from this: first, its focus on
superficial indistinguishability from a human renders it
inapplicable to the vast range of possible un-humanlike
intelligences, while imposing an irrelevant (and
potentially very heavy) demand on any tested system.
Secondly, the test involves a human judge, and thus
encourages a focus on methods that can fool such a judge
(as in ELIZA-style chatbots) rather than on exhibiting
sophisticated information processing. Both of these
objections can be (and often have been) overcome by moving
to a general perspective - and, if desired, a style of
test - that compares the achievements of human and machine
“intelligences” in particular information-processing
tasks, focusing more on the quality of their results than
on the similarity of their behaviour to our own.
|
|
Professor
Stephen Muggleton, Department of Computing, Imperial
College London
Topic: "Human-Machine Vision"
Abstract:
Statistical machine learning is widely used in image
classification. However, most techniques 1) require many
images to achieve high accuracy and 2) do not provide
support for reasoning below the level of classification,
and so are unable to support secondary reasoning, such as
reasoning about the existence and position of light
sources and other
objects outside the image. In recent work an Inductive
Logic Programming approach called Logical Vision (LV) was
shown to overcome some of these limitations. LV uses
Meta-Interpretive Learning combined with low-level
extraction of high-contrast points sampled from the image
to learn recursive logic programs describing the image. In
published work, LV was demonstrated to be capable of
high-accuracy prediction of classes such as regular
polygons from a small number of example images, where the
statistical learning algorithms it was compared against
gave near-random predictions even when given hundreds of
instances. LV has so far only
been applied to noise-free, artificially generated images.
This paper extends LV by using a) richer background
knowledge such as light reflection that can itself be
learned and used for resolving visual ambiguities, which
cannot be easily modeled using statistical approaches, b)
a wider class of background models representing classical
2D shapes such as circles and ellipses, and c)
primitive-level statistical estimators to handle noise in
real images. Our results indicate that in real images the
new noise-robust version of LV using a single example
(i.e. one-shot LV) converges to an accuracy at least
comparable to that of a thirty-shot statistical machine
learner on the prediction
of hidden light sources. Moreover, we show that the
learned theory can be used to identify ambiguities in the
convexity/concavity of objects such as craters.
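To give a flavour of LV's low-level layer, here is a
simplified sketch (our illustration; the symbolic step, in
which Meta-Interpretive Learning induces logic programs
over such primitives, is not shown):

```python
# Sample high-contrast points along scan lines, then fit a
# geometric primitive for the symbolic layer to reason over.
import numpy as np

def high_contrast_points(img, threshold=0.3, n_lines=100):
    """Scan random horizontal lines of a grayscale image and keep
    pixel positions where brightness jumps sharply (edges)."""
    rng = np.random.default_rng(0)
    pts = []
    for y in rng.integers(0, img.shape[0], n_lines):
        jumps = np.abs(np.diff(img[y].astype(float)))
        pts.extend((x, y) for x in np.nonzero(jumps > threshold)[0])
    return np.array(pts, dtype=float)

def fit_circle(pts):
    """Least-squares circle fit via x^2 + y^2 + D*x + E*y + F = 0."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    (D, E, F), *_ = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)
    cx, cy = -D / 2, -E / 2
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 30) ** 2 + (yy - 34) ** 2 < 15 ** 2).astype(float)
print(fit_circle(high_contrast_points(img)))  # ~ ((30, 34), 15)
```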
Short biography:
Professor Stephen Muggleton FREng FAAAI is
Professor of Machine Learning in the Department of
Computing at Imperial College London, Director of the UK's
Human-Like Computing Network, and is internationally
recognised as
the founder of the field of Inductive Logic Programming.
SM’s career has concentrated on the development of theory,
implementations and applications of Machine Learning,
particularly in the field of Inductive Logic Programming
(ILP) and Probabilistic ILP (PILP). Over the last decade
he has collaborated with biological colleagues, such as
Prof Mike Sternberg, on applications of Machine Learning
to Biological prediction tasks. SM’s group is situated
within the Department of Computing and specialises in the
development of novel general-purpose machine learning
algorithms, and their application to biological prediction
tasks. Widely applied software developed by the group
includes the ILP system Progol (whose publication has over 1700
citations on Google Scholar) as well as a family of
related systems including ASE-Progol (used in the Robot
Scientist project), Metagol and Golem.
|
|
Professor Martin Pickering,
Department of Psychology, University of Edinburgh
Topic: "Understanding dialogue: Language use and
social interaction"
Abstract:
We present a theory of dialogue as a form of
cooperative joint activity. Dialogue is treated as a
system involving two interlocutors and a shared workspace
that contains their contributions and relevant
non-linguistic context. The interlocutors construct
shared plans and use them to “post” contributions to the
workspace, to comprehend joint contributions, and to
distribute control of the dialogue between them. A
fundamental part of this process is to simulate their
partner’s contributions and to use these simulations to
predict the upcoming state of the shared workspace. As a
consequence, they align their linguistic representations
and their representations of the situation and of the
“games” underlying successful communication. The
shared workspace is a highly limited resource, and the
interlocutors use their aligned representations to say
just enough and to speak in good time. We end by
applying the account beyond the “minimal dyad” to
augmented dialogue, multi-party dialogue, and monologue.
This talk is based on my forthcoming book of the same
title, co-authored with Simon Garrod.
Short biography:
Martin Pickering is Professor of the Psychology of
Language and Communication at the University of
Edinburgh. His research focuses on the
representation and processing of language, and in
particular on the interrelation between language
production and comprehension in dialogue and
monologue. He has published around 200 papers on
topics such as language comprehension during reading,
turn-taking in dialogue, the representation of grammatical
knowledge, the extent to which bilinguals integrate their
languages, and the use of prediction to facilitate
comprehension. He has served as the editor of the
Journal of Memory and Language, was recipient of the
Experimental Psychology Society mid-career award, and is a
Fellow of the Royal Society of Edinburgh.
|
|
Professor Stuart Russell,
Department of Computer Science, University of California,
Berkeley
Topic: "Beneficial Artificial Intelligence"
Abstract:
It is reasonable to expect that AI capabilities
will eventually exceed those of humans across a range of
real-world decision-making scenarios. Should this be a
cause for concern, as Elon Musk, Stephen Hawking, and
others have suggested? While some in the mainstream
AI community dismiss the issue, I will argue instead that
a fundamental reorientation of the field is required.
Instead of building systems that optimize arbitrary
objectives, we need to learn how to build systems that
will, in fact, be beneficial for us. I will show
that it is useful to imbue systems with explicit
uncertainty concerning the true objectives of the humans
they are designed to help. This uncertainty causes
machine and human behavior to be inextricably (and
game-theoretically) linked, while opening up many new
avenues for research.
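A toy numeric illustration of the final point (our sketch,
not from the talk): a machine that is uncertain about the
human's utility for an action can strictly prefer
deferring to the human over acting unilaterally.

```python
import numpy as np

rng = np.random.default_rng(1)
# The machine's belief over the human's true utility u of an action:
u = rng.normal(loc=0.5, scale=2.0, size=100_000)

act = u.mean()                   # act now: expected utility E[u]
defer = np.maximum(u, 0).mean()  # ask first: human vetoes if u < 0
print(f"act: {act:.2f}  defer: {defer:.2f}")  # ~0.50 vs ~1.07
# With a certain objective the two coincide and nothing is gained by
# asking; the incentive to keep the human in the loop comes from the
# machine's uncertainty about what the human actually wants.
```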
Short biography:
Stuart Russell received his B.A. with first-class
honours in physics from Oxford University in 1982 and his
Ph.D. in computer science from Stanford in 1986. He then
joined the faculty of the University of California at
Berkeley, where he is Professor (and formerly Chair) of
Electrical Engineering and Computer Sciences, holder of
the Smith-Zadeh Chair in Engineering, and Director of the
Center for Human-Compatible AI. He has served as an
Adjunct Professor of Neurological Surgery at UC San
Francisco and as Vice-Chair of the World Economic Forum's
Council on AI and Robotics. He is a recipient of the
Presidential Young Investigator Award of the National
Science Foundation, the IJCAI Computers and Thought Award,
the World Technology Award (Policy category), the Mitchell
Prize of the American Statistical Association, the
Feigenbaum Prize of the Association for the Advancement of
Artificial Intelligence, and Outstanding Educator Awards
from both ACM and AAAI. From 2012 to 2014 he held the
Chaire Blaise Pascal in Paris. He is an Honorary Fellow of
Wadham College, Oxford, and Fellow of the American
Association for Artificial Intelligence, the Association
for Computing Machinery, and the American Association for
the Advancement of Science. His book "Artificial
Intelligence: A Modern Approach" (with Peter Norvig) is
the standard text in AI; it has been translated into 14
languages and is used in over 1400 universities in 128
countries. His research covers a wide range of topics in
artificial intelligence including machine learning,
probabilistic reasoning, knowledge representation,
planning, real-time decision making, multitarget tracking,
computer vision, computational physiology, and
philosophical foundations. He also works for the United
Nations, developing a new global seismic monitoring system
for the nuclear-test-ban treaty. His current concerns
include the threat of autonomous weapons and the long-term
future of artificial intelligence and its relation to
humanity.
|
|
Dr Katya
Tentori, Center for Mind/Brain Sciences (CIMeC),
University of Trento, Italy.
Title: "What can the conjunction fallacy tell us about
human reasoning?"
Abstract:
In my talk, I will summarize and discuss the main results
obtained from more than three decades of studies on the
conjunction fallacy. More specifically, I will argue that
this striking and widely debated reasoning error is a
robust phenomenon, which is not caused by the limitation
of cognitive resources but can nonetheless systematically
affect lay people’s as much as experts’ probabilistic
inferences, with potentially relevant real-life
consequences. I will then introduce what is, in my view,
the best explanation of the conjunction fallacy and
indicate how it allows the reconciliation of some classic
probabilistic reasoning errors with the outstanding
reasoning performances that humans have been shown capable
of. Finally, I will tackle the open issue of the greater
accuracy and reliability of evidential impact assessments
over those of posterior probability, and outline how
further research on this topic might also contribute to
the development of effective human-like computing.
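For reference, the probability law that the fallacy
violates, with the classic "Linda" instance noted in the
comments:

```latex
% Conjunction rule: a conjunction is never more probable than either
% conjunct, since $A \cap B \subseteq A$.
\[
  P(A \wedge B) \;\le\; \min\bigl(P(A),\, P(B)\bigr)
\]
% In the Linda problem, many respondents nonetheless rank
% $P(\text{bank teller} \wedge \text{feminist})$ above
% $P(\text{bank teller})$.
```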
Short biography:
I’m a cognitive psychologist working at the Center for
Mind/Brain Sciences (CIMeC), University of Trento,
Italy. My research interests are primarily in the
fields of inductive reasoning, forecasting, decision
biases, and causal cognition, but also extend to various
applied problems in medical decision making and legal
evidence assessment. In my studies, I combine empirical
methods of experimental psychology with theoretical
modelling from formal epistemology.
|
|
Professor Francesca
Toni, Department of Computing, Imperial College London
Title: "Dialectic Explanations"
Abstract:
The lack of transparency of AI techniques, e.g. machine
learning algorithms or recommender systems, is one of the
most pressing issues in the field, especially given the
ever-increasing integration of AI into everyday systems
used by experts and non-experts alike, and the need to
explain how and/or why these systems compute outputs, for
any or for specific inputs. The need for explainability
arises for a number of reasons: an expert may require more
transparency to justify outputs of an AI system,
especially in safety-critical situations, while a
non-expert may place more trust in an AI system providing
basic (rather than no) explanations, regarding, for
example, films suggested by a recommender system.
Explainability is also needed to fulfil the requirements
of the forthcoming General Data Protection Regulation
(GDPR), effective from May 25th, 2018. Furthermore,
explainability is crucial to guarantee comprehensibility
in Human-Like Computing, to support collaboration and
communication between machines and human beings. In this
talk I will overview recent efforts to use argumentative
abstractions for data-centric methods in AI as a basis for
generating dialectic explanations. These abstractions are
formulated in the spirit of argumentation in AI, amounting
to a (family of) symbolic formalism(s) where arguments are
seen as nodes in a graph with relations between arguments,
e.g. attack and support, as edges. Argumentation allows
for conflicts to be managed effectively, an important
capability in any AI system tasked with decision-making.
It also allows for reasoning to be represented in a
human-like manner, and can serve as a basis for a
principled theory of explanation supporting human-machine
dialectical exchanges.
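A minimal computational sketch of the argumentation
machinery mentioned above (our illustration; support
relations and the explanation layer are omitted): the
grounded extension of an attack graph, computed by
iterating the characteristic function.

```python
def grounded_extension(arguments, attacks):
    """arguments: iterable of names; attacks: set of
    (attacker, target) pairs. Returns the grounded extension."""
    attackers = {a: {s for (s, t) in attacks if t == a}
                 for a in arguments}
    extension = set()
    while True:
        # Keep each argument all of whose attackers are themselves
        # attacked by the current extension (i.e. it is defended).
        defended = {a for a in arguments
                    if all(attackers[b] & extension
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

# b attacks a, and c attacks b: c is unattacked and defends a.
print(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")}))
# -> {'a', 'c'}
```

A dialectic explanation can then be read off the graph:
"a" is acceptable because its only attacker "b" is
defeated by "c".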
Short biography:
Francesca Toni is Professor in Computational Logic in the
Department of Computing, Imperial College London, UK, and
the founder and leader of the CLArg (Computational
Logic and Argumentation) research group. Her research
interests lie within the broad area of Knowledge
Representation and Reasoning in Artificial Intelligence,
and in particular include Argumentation, Logic-Based
Multi-Agent Systems, Logic Programming for Knowledge
Representation and Reasoning, and Non-monotonic and
Default Reasoning. She graduated, summa cum laude, in
Computing at the University of Pisa, Italy, in 1990, and
received her PhD in Computing in 1995 from Imperial
College London. She has coordinated two EU projects,
received funding from the EPSRC and the EU, and was
awarded a
Senior Research Fellowship from The Royal Academy of
Engineering and the Leverhulme Trust. She is currently
Technical Director of the ROAD2H EPSRC-funded
project. She has co-chaired ICLP2015 (the 31st
International Conference on Logic Programming) and KR 2018
(the 16th Conference on Principles of Knowledge
Representation and Reasoning). She is a member of the
steering committee of AT (Agreement Technologies) and KR
Inc (Principles of Knowledge Representation and Reasoning,
Incorporated), corner editor on Argumentation for the
Journal of Logic and Computation, and on the editorial
boards of the Argument and Computation journal and the AI
journal.
|
|
Professor Adam
Sanborn, University of Warwick
Title: "Bayesian brains without
probabilities"
Abstract:
Over the past two decades, a wave of Bayesian explanations
has swept through cognitive science, explaining behavior
in domains from intuitive physics and causal learning, to
perception, motor control and language. Yet people produce
stunningly incorrect answers in response to even the
simplest questions about probabilities. How can a
supposedly Bayesian brain paradoxically reason so poorly
with probabilities? Perhaps Bayesian brains do not
represent or calculate probabilities at all and are,
indeed, poorly adapted to do so. Instead the brain could
be approximating Bayesian inference through sampling:
drawing samples from its distribution of likely hypotheses
over time. Only with infinite samples does a Bayesian
sampler conform to the laws of probability, and in this
talk I show how reasoning with a finite number of samples
systematically generates classic probabilistic reasoning
errors in individuals, upending the longstanding consensus
on these effects. I then present work testing whether
people sample when producing numeric estimates, and
discuss what kind of sampling algorithm the brain might be
using.
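A minimal simulation of the core idea (our sketch of the
general sampling account, not the specific model in the
talk; the two judgments are assumed to use independent
sample sets):

```python
import numpy as np

rng = np.random.default_rng(0)
p_A, p_AB = 0.4, 0.25            # true probabilities: P(A&B) < P(A)
n_samples, n_judges = 8, 10_000  # few mental samples per judgment

est_A = rng.binomial(n_samples, p_A, n_judges) / n_samples
est_AB = rng.binomial(n_samples, p_AB, n_judges) / n_samples
print(f"conjunction errors: {np.mean(est_AB > est_A):.0%}")
# A sizeable fraction of these simulated judges rank the conjunction
# above its conjunct, even though the underlying distribution obeys
# the probability axioms; the error rate shrinks as the number of
# samples grows.
```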
Short biography:
Adam Sanborn is a cognitive psychologist interested in how
rational people's behaviour is: whether the biases that
people show correspond to normative statistical models or
to approximations of those models. He has studied these
ideas in various areas of cognition, including
categorization, perception, decision making, learning,
reasoning, and intuitive physics. He received his PhD in
Cognitive Science and Psychological and Brain Sciences
from Indiana University and worked as a postdoc at the
Gatsby Computational Neuroscience Unit of University
College London. He is currently an Associate Professor at
the University of Warwick. In 2019, he was awarded an ERC
Consolidator grant to study how sampling algorithms can
explain human cognition.
|
|
Professor Ute
Schmid, Cognitive System Group, University of Bamberg
Title: "Learning to Delete - Interactive
Learning with Mutual Explanations to Get Rid of Digital
Clutter"
Abstract:
With the ongoing digitalisation, an increasing amount of
digital data is being stored on personal and company
devices. While the digital storage of data can be used for
efficient information retrieval, data analytics and
machine learning, we also encounter a growing amount of
digital clutter, which is unnecessarily occupying storage
space and making it difficult to keep track of relevant
files and other digital entities. The interactive
companion system Dare2Del is designed as a cognitive
companion to support employees in administration and
industry by identifying irrelevant digital objects which
can be deleted or archived. The application addresses some
challenges for human-like computation: Whether a digital
object is irrelevant or not is partially dependent on
fixed rules and regulations and partially on personal
preferences which cannot be predicted and might even
change over time. While fixed rules can easily be
handcrafted for the system, there is no ground truth
available to derive a user’s personal preferences from.
These have to be approximated during an interactive
process: The user is presented with a small selection of
digital objects that are classified as irrelevant
according to the system’s current classification
procedure. He or she can either confirm or reject the
system’s proposal, and this feedback will be used for
incremental learning. Dare2Del is realized with an
inductive logic programming approach. The classification
of digital objects as relevant or irrelevant is based on a
theory represented as Prolog rules. Fixed rules and
regulations can be pre-defined and are combined with rules
induced from interactive learning. Which digital objects
are presented to the user is determined by the current
context. Generating and exploiting explanations is a
crucial factor for making Dare2Del a trustworthy
companion. For each object assumed to be irrelevant, the
user can request an explanation justifying the system’s
decision. Verbal explanations are generated from the
learned and predefined Prolog rules which have been
instantiated with the current object. The explanations are
integrated into the file manager, where the relevant
features of the analysed digital objects are highlighted.
The user is given the opportunity to reject a certain
classification and explicitly state which parts of the
explanations are incorrect. This feedback is integrated
into the process of learning a revised model in the form
of constraints. Besides using verbal explanations, we
explore how near-miss examples can support the
transparency of Dare2Del.
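A schematic sketch of the interactive loop follows (our
Python simplification; Dare2Del itself represents its
theory as Prolog rules learned by inductive logic
programming, and the rules and file attributes below are
invented):

```python
def irrelevant(f, rules, constraints):
    """Propose deletion if some rule fires and no user-given
    constraint forbids it."""
    return (any(r(f) for r in rules)
            and not any(c(f) for c in constraints))

rules = [lambda f: f["age_days"] > 365 and f["opened"] == 0]
constraints = []  # built up from rejected explanations

f = {"name": "report_v1.docx", "age_days": 400, "opened": 0}
print(irrelevant(f, rules, constraints))  # True: old, never opened
# The user rejects this classification for reports, so the rejection
# is folded back in as a constraint, revising the model:
constraints.append(lambda f: f["name"].startswith("report"))
print(irrelevant(f, rules, constraints))  # False
```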
Short biography:
Ute Schmid holds a diploma in psychology and a diploma in
computer science, both from Technical University Berlin
(TUB), Germany. She received her doctoral degree (Dr.
rer.nat.) in computer science from TUB in 1994 and her
habilitation in computer science in 2002. From 1994 to
2001 she was assistant professor (wissenschaftliche
Assistentin) at the AI/Machine Learning group, Department
of Computer Science, TUB. Afterwards she worked as
lecturer (akademische Rätin) for Intelligent Systems at
the Department of Mathematics and Computer Science at
the University of Osnabrück. Since 2004 she has held a
professorship
of Applied Computer Science/Cognitive Systems at the
University of Bamberg. Research interests of Ute Schmid
are mainly in the domain of comprehensible machine
learning, explainable AI, and high-level learning on
relational data, especially inductive programming,
knowledge level learning from planning, learning
structural prototypes, analogical problem solving and
learning. Further research is on various applications of
machine learning (e.g., classifier learning from medical
data and for facial expressions) and empirical and
experimental work on high-level cognitive processes. Ute
Schmid dedicates a significant amount of her time to
measures supporting women in computer science and to
promoting computer science as a topic in elementary,
primary, and secondary education.
|
|
Professor
Gabriella Vigliocco, Department of Experimental
Psychology, University College London
Title: "There is more than Linguistic Information to
Language"
Abstract:
Most often, our psychological and/or linguistic theories
present a picture of language in which multifaceted
phenomena (such as processing a sentence, or processing
the meaning of a word) tend to be reduced to linguistic
processes without much consideration of the physical and
social context in which language is used. However,
language is most often used in face-to-face interactions,
where ‘linguistic’ information is inexorably intertwined
with ‘non-linguistic’ information, relevant to the content
of communication (e.g., co-speech gestures or mouth
movements). In a similar vein - with few exceptions -
models in CL and NLP also do not take ‘non-linguistic’
information into account. I will present results from
behavioural and electro-physiological studies that show
how the ‘non-linguistic’ information is used online by
humans, thus calling for human and machine models that
take the physical and social context seriously.
Short biography:
Gabriella Vigliocco is Professor of the Psychology of
Language in the Department of Experimental Psychology at
University College London. She was awarded her PhD from
the University of Trieste in 1995 and completed her
postdoctoral studies at the University of Arizona before
serving as a visiting scientist at the Max Planck
Institute for Psycholinguistics between 1999 and 2000.
Vigliocco leads a multi-disciplinary team comprising
psychologists, linguists, computer scientists and
cognitive neuroscientists who share a vision that the
integration of multiple levels of analysis and the use of
different methodological approaches can lead to a better
understanding of language and cognition. They seek to
understand the relationship between language and other
aspects of cognitive function and to use this knowledge to
impact education and improve the lives of people with
language disorders.
|