Ethical specifications for autonomous systems: sources, conflicts, priorities

Guglielmo Tamburrini
Department of Electronic Engineering and Information Technology (University of Naples “Federico II” – ITALY)

Ethical specifications are needed to design and implement autonomous AI and robotic systems facing morally significant choices. In several application domains, the identification of consistent collections of ethical specifications is hindered by conflicts that arise between ethical principles found in normative ethics or between their domain-specific interpretations. These conflicts are illustrated here by reference to autonomous vehicles and their behaviour in unavoidable collisions, shared-control issues for increasingly autonomous surgical robots, and meaningful human control of autonomy in weapons systems. Moreover, priority rules for resolving ethical conflicts are selectively analysed and compared in terms of their implications for moral obligation, permission and preference settings. Finally, increasing levels of moral competence for autonomous systems are discussed in the light of current challenges raised by both ethical theorizing and legal regulations.

September 16, 2019


Learning and Reasoning with Logic Tensor Networks: the framework and an application
Luciano Serafini
Data and Knowledge Management Research Unit (Fondazione Bruno Kessler, Trento – ITALY)

Logic Tensor Networks (LTN) is a theoretical framework and an experimental platform that integrates learning based on tensor neural networks with reasoning in first-order many-valued/fuzzy logic. LTN supports a wide range of reasoning and learning tasks with logical knowledge and data, allowing rich symbolic knowledge representation in first-order logic (FOL) to be combined with efficient data-driven machine learning based on the manipulation of real-valued vectors. In practice, FOL reasoning including function symbols is approximated through the usual iterative deepening of clause depth. Given data available in the form of real-valued vectors, logical soft and hard constraints and relations which apply to certain subsets of the vectors can be specified compactly in FOL. All the different tasks can be represented in LTN as a form of approximate satisfiability, reasoning can help improve learning, and learning from new data may revise the constraints, thus modifying reasoning. We apply LTNs to Semantic Image Interpretation (SII) in order to solve the following tasks: (i) the classification of an image’s bounding boxes and (ii) the detection of the relevant part-of relations between objects. The results show that the use of background knowledge improves the performance of purely data-driven machine learning methods.
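As a rough illustration of the idea of grounding a first-order formula in many-valued logic over real-valued vectors, the following sketch (hypothetical predicates and connectives chosen for the example, not the actual LTN implementation) computes the satisfiability degree of ∀x. A(x) → B(x), with the Reichenbach fuzzy implication and the universal quantifier approximated by an average:

```python
# Predicates map vectors to truth degrees in [0, 1]; the quantifier is
# approximated by the mean over the available data.

def smaller_than_half(x):      # hypothetical predicate A(x)
    return max(0.0, min(1.0, 1.0 - 2.0 * x[0]))

def near_origin(x):            # hypothetical predicate B(x)
    return max(0.0, min(1.0, 1.0 - abs(x[0]) - abs(x[1])))

def implies(a, b):             # Reichenbach fuzzy implication
    return 1.0 - a + a * b

def forall(domain, formula):   # universal quantifier as an average
    degrees = [formula(x) for x in domain]
    return sum(degrees) / len(degrees)

data = [(0.1, 0.2), (0.4, 0.1), (0.9, 0.3)]   # "data as real-valued vectors"
sat = forall(data, lambda x: implies(smaller_than_half(x), near_origin(x)))
print(round(sat, 3))   # satisfiability degree of  forall x. A(x) -> B(x)
```

Learning in this style amounts to adjusting the parameters of the predicates so as to maximize such a satisfiability degree.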

Factoring Time into Distributed Representations of Words and Entities learned from Texts
Matteo Palmonari
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

Vector representations of words, also known as word embeddings, support reasoning about similarity, e.g., in analogical tasks, and are now used in a variety of downstream applications related to natural language processing and knowledge representation. These representations are usually learned from text corpora and account for word meaning based on distributional semantics, according to which similar words appear in similar contexts. The very same principles can also be applied to learn representations of entities and ontology types that capture their intuitive meaning using a data-driven and sub-symbolic approach. Time is a crucial factor when dealing with distributed representations of language and knowledge. For example, tracking word meaning shift and entity evolution can have several applications, and time may sneak into similarity as computed with these models in ways that may be difficult to control. In this lesson we will briefly recap distributional semantics and distributed models and discuss how to generate distributed models of entities from text. Then we will discuss approaches for the implicit and explicit modeling of time, the former addressing time-dependent representations of words and entities (e.g., amazon_1975 vs. amazon_2012), and the latter addressing the representation of temporal references (e.g., years, days, etc.). The discussion will often be grounded in analogical reasoning as a test bed for distributed representations of this kind, and we will discuss solutions for ambiguous and temporal analogies.
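The analogical test bed mentioned above is typically solved by vector arithmetic over the embedding space. The toy sketch below (hand-made vectors for illustration, not learned from a corpus) solves a : b = c : ? by nearest-cosine search, with time-suffixed entity vectors living in the same space:

```python
import math

emb = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.8, 0.1, 0.6],
    "man":   [0.3, 0.9, 0.1],
    "woman": [0.3, 0.2, 0.8],
    # time-suffixed entity vectors, as in the lecture's example, would
    # simply be additional points in the same space:
    "amazon_1975": [0.1, 0.1, 0.9],
    "amazon_2012": [0.9, 0.2, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c):
    """Solve  a : b = c : ?   via   argmax_w cosine(b - a + c, w)."""
    target = [bb - aa + cc for aa, bb, cc in zip(emb[a], emb[b], emb[c])]
    scores = {w: cosine(target, v) for w, v in emb.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("man", "king", "woman"))   # → queen
```

With time-dependent vectors, the same mechanism supports temporal analogies (e.g., querying which entity in 2012 plays the role an entity played in 1975), which is where ambiguity becomes an issue.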

Deep learning and Computer Vision
Simone Bianco and Raimondo Schettini
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

The automatic recognition and description of visual data is a challenging problem in both multimedia and computer vision, with a huge variety of applications. In this lecture we will discuss the use of deep neural networks to extract rich scene annotations and descriptions from visual data. We will cover both image and video recognition, including image classification and annotation, object recognition and image search. Several examples, mainly related to image and video recognition in large multimedia archives and to the automatic computation of their attributes, will be discussed.

September 17, 2019


Introduction to Probabilistic Graphical Models
Luigi Portinale
University of Eastern Piedmont “Amedeo Avogadro” (Alessandria – ITALY)

In this lecture we will address the main features of one of the most important classes of formalisms for uncertain knowledge representation and reasoning: Probabilistic Graphical Models (PGMs). We will describe the differences between directed models (i.e., Bayesian Networks) and undirected models (i.e., Markov Random Fields). The notion of conditional independence is then introduced, pointing out the differences between directed and undirected models and the consequences from the knowledge representation point of view. Properties of soundness and completeness will be discussed, relating graph-theoretic aspects to their probabilistic interpretation. Finally, the main approaches to inference on PGMs will be outlined and briefly discussed.
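To make the directed case concrete, here is a minimal sketch (the classic rain/sprinkler/wet-grass example, not material from the lecture) of a Bayesian network and inference by enumeration; the factorization P(R, S, W) = P(R) P(S | R) P(W | R, S) encodes the conditional independences readable from the graph:

```python
# CPTs for Rain (R), Sprinkler (S | R), WetGrass (W | R, S).
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: {True: 0.01, False: 0.99},
               False: {True: 0.4, False: 0.6}}
P_W_given_RS = {(True, True): 0.99, (True, False): 0.8,
                (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    pw = P_W_given_RS[(r, s)]
    return P_R[r] * P_S_given_R[r][s] * (pw if w else 1.0 - pw)

# Query P(R = true | W = true) by summing out the sprinkler S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))   # ≈ 0.358
```

Exact enumeration like this is exponential in the number of variables, which is why the lecture's survey of inference approaches (variable elimination, message passing, approximate methods) matters in practice.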

Why Causality Matters
Fabio Stella
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

In a world populated by an exploding amount of data and information, machine learning and artificial intelligence aim to address and solve challenging problems such as the development of new drugs, the control of economic policies, education, robotics and global warming, to mention just a few. In the last five years, deep learning has achieved amazing results in image analysis and natural language processing, while reinforcement learning has attained super-human skill at complex games like Go. However, as has already happened in the past, data enthusiasm could quickly slip into disappointment due to failures and irrational expectations about what can be achieved by a fully data-driven approach. Indeed, when using observational data, i.e. the vast majority of available data, few are aware that not all research questions can be answered. This talk gives the basics of structural causal models with the aim of increasing awareness, in both the research community and among practitioners, that learning from data alone, without clarifying the causal mechanism, could lead to painful failures.
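The gap between observational and interventional questions can be shown with a small simulation of a structural causal model (the model and its parameters below are invented for the illustration). With a hidden confounder Z, the observational quantity P(Y | X = 1) differs from the interventional P(Y | do(X = 1)), which is what a purely data-driven approach would miss:

```python
import random

random.seed(0)

def sample(do_x=None):
    # Structural equations:  Z := noise,  X := f(Z, noise),  Y := g(X, Z, noise)
    z = random.random() < 0.5                                        # confounder
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < (0.7 if x else 0.2) + (0.2 if z else 0.0)
    return z, x, y

N = 100_000
obs = [sample() for _ in range(N)]
p_obs = sum(y for _, x, y in obs if x) / sum(1 for _, x, y in obs if x)
p_do = sum(y for _, _, y in (sample(do_x=True) for _ in range(N))) / N
print(round(p_obs, 2), round(p_do, 2))   # conditioning overestimates the effect
```

Here conditioning on X = 1 selects samples where Z is more likely true, inflating the apparent effect (≈ 0.86) relative to the true interventional effect (≈ 0.80).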

Uncertainty Theories Beyond Probability
Davide Ciucci
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

“The glass is half full” and “I believe that the glass is full” are two different examples of imprecision in language that evidently do not fit a probabilistic interpretation, i.e., they differ from the piece of knowledge “the glass is full with probability 1/2”. Indeed, there are several forms of uncertainty, and not all of them can be handled (in the best way) by probability theory. We will give a bird’s-eye view of some of these forms and of some tools to manage them, including possibility theory, belief theory and rough sets.
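As a small taste of one of these tools, the following sketch (hypothetical distribution, invented for the example) implements the two dual measures of possibility theory: Possibility(A) is the maximum of the possibility distribution over A, and Necessity(A) = 1 − Possibility(not A):

```python
# A possibility distribution assigns each state a degree in [0, 1];
# unlike a probability distribution, the degrees need not sum to 1.
pi = {"empty": 0.2, "half_full": 1.0, "full": 1.0}
states = set(pi)

def possibility(event):
    return max(pi[s] for s in event)

def necessity(event):
    return 1.0 - possibility(states - set(event))

# "The glass is at least half full": fully possible, but not certain.
event = {"half_full", "full"}
print(possibility(event), necessity(event))
```

The gap between the two values (here 1.0 vs 0.8) expresses partial ignorance, something a single probability value cannot represent.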

Fuzzy Logics for Knowledge Representation
Rafael Penaloza
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

One of the challenges in knowledge representation is to handle vague and imprecise notions that we encounter in our daily lives. For example, a medical application should be able to handle concepts like “obese” or “high cholesterol” which cannot be defined with full precision. We will study fuzzy logics, natural extensions of classical logic that allow for more fine-grained membership degrees with the aim of handling imprecision. We will motivate their design choices, study their properties, and show how they differ from other syntactically similar logics. In particular, we will highlight the difference between fuzzy membership degrees and probabilities.
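The contrast between membership degrees and probabilities can be made concrete with a toy example (the membership function below is invented for illustration): a borderline patient is "obese" to degree 0.5, and conjoining that statement with itself behaves very differently under a t-norm than under the product rule for independent probabilistic events:

```python
def obese(bmi):
    """Hypothetical piecewise-linear membership degree for 'obese'."""
    if bmi <= 25:
        return 0.0
    if bmi >= 35:
        return 1.0
    return (bmi - 25) / 10.0

def t_norm_godel(a, b):          # Gödel conjunction: minimum
    return min(a, b)

def t_norm_lukasiewicz(a, b):    # Łukasiewicz conjunction
    return max(0.0, a + b - 1.0)

a = b = obese(30)                # the same borderline patient, twice
# "obese AND obese" keeps degree 0.5 under the Gödel t-norm, while
# reading 0.5 as a probability of independent events would give 0.25.
print(t_norm_godel(a, b), t_norm_lukasiewicz(a, b), a * b)
```

The idempotence of the Gödel t-norm (min(a, a) = a) is exactly the kind of design choice that separates fuzzy logics from syntactically similar probabilistic readings.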

September 18, 2019


Towards intelligent crowd management for Olympics 2020
Katsuhiro Nishinari
RCAST – Research center for Advanced Science and Technology (The University of Tokyo – JAPAN)


Smart Environments as Mirror Worlds
Alessandro Ricci
Department of Computer Science and Engineering (University of Bologna, ITALY)

In this talk we introduce and discuss a vision of Mirror Worlds (MW) as Agent-Based Digital Twins, integrating both Internet of Things and Mixed Reality technologies for the design and development of future smart environments. MWs provide a bi-directional coupling between the digital and the physical world. On the one hand, similarly to Digital Twins, a MW hosts the execution of computational entities which are designed to be a digital representation and functional extension of some physical environment (including physical assets, objects, people). On the other hand, the physical environment is augmented with mixed reality holograms that are designed to be a physical representation and extension of computational entities running in the mirror. Software agents, especially cognitive agents, are the citizens of MWs, with the capability of creating, observing, manipulating and reasoning about MW computational entities as part of their environment.

Real and virtual crowds
Giuseppe Vizzari
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

Complex phenomena related to crowding behaviours, both in the real world (e.g. the built environment, where people can physically move) and in synthetic environments (e.g. social media or virtual reality), are increasingly studied by different disciplines, with different methods and goals. This lecture will discuss AI approaches employed in the analysis of human behaviour, both in the real world and in social media (in particular with concrete case studies adopting adaptive and iterative density-based clustering). The lecture will also show how the acquired data and insights can support the development of models for the simulation of crowds in synthetic environments through agent-based models.
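As a reference point for the density-based clustering mentioned above, here is a toy sketch of a plain DBSCAN-style procedure on hand-made 2D points (the adaptive and iterative variants used in the case studies are not reproduced): points in dense regions form clusters, sparse points are labelled as noise.

```python
def region(points, i, eps):
    """Indices of points within distance eps of points[i] (including i)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

def dbscan(points, eps=0.5, min_pts=3):
    labels = [None] * len(points)      # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(points, i, eps)
        if len(seeds) < min_pts:
            labels[i] = -1             # noise (may be reclaimed as border)
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster    # border point, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = region(points, j, eps)
            if len(more) >= min_pts:   # core point: expand the cluster
                queue.extend(more)
        cluster += 1
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (9, 9)]
print(dbscan(pts))   # two dense clusters plus one noise point
```

In crowd-analysis settings the same idea is applied to trajectories or geo-tagged posts, with eps and min_pts adapted to local density rather than fixed as here.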

From Kintaro to Pepper. Robots in Japanese elder-care, myths and reality
Florian Coulmas
Institute of East Asian Studies (University of Duisburg-Essen – GERMANY)

Until recently, Japan has been the country with the highest number of robots per capita. Are there any cultural reasons for this? Is it because Japan is a techno-friendly culture? Doubts are in order; for Japan is also the country where social ageing is most advanced and that, nevertheless, holds on to a very rigid immigration policy. Robots are in demand, in industrial production, in households, and in the care sector. Also, it should not be forgotten that the art of marketing is highly developed in Japan. Robots want to be bought. This paper tries to explain what the use of robots in care facilities has to do with social ageing, immigration, marketing, and culture.

The Issue of Information Credibility in the Social Web
Marco Viviani, Gabriella Pasi
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

Nowadays, the Social Web supports and fosters social interactions among people through Web 2.0 technologies. In this context, information in the form of User-Generated Content (UGC) spreads across social media platforms in the absence of traditional trusted third parties that can verify the reliability of the sources and the believability of the content generated. For this reason, the issue of assessing the credibility of UGC is receiving increasing attention from researchers, in different contexts. In recent years, several approaches have been proposed to automatically assess the credibility of information in social media. Most of them are based on data-driven models, i.e., they employ machine learning techniques to identify misinformation, but recently model-driven approaches have also been emerging, as well as graph-based approaches focusing both on knowledge bases and on credibility propagation. Three of the main contexts in which this issue has been tackled and that will be discussed in this lecture concern: the detection of opinion spam in review sites, the detection of fake news and spam in microblogging, and the credibility assessment of online health information.

Deep Learning meets NLP: Sentiment Analysis in Microblogs
Elisabetta Fersini, Enza Messina
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

In this talk we address the main challenges of sentiment analysis of microblogs using machine learning techniques. We show how combining post contents with network structure information may lead to significant improvements in the polarity classification of sentiment, both at the post and at the user level. We also discuss the potential of deep learning for enhancing classification performance through high-level feature representations.
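The benefit of combining content with network structure can be sketched in a few lines (all scores and the graph below are invented for the illustration): a content-only polarity score is smoothed with the average score of a user's neighbours, so that an ambiguous user is pulled toward the class of their social context.

```python
content_score = {"a": 0.9, "b": 0.8, "c": 0.2, "d": 0.45}   # text-only polarity
follows = {"a": ["b"], "b": ["a"], "c": ["d"], "d": ["c"]}  # hypothetical graph

def combined(user, alpha=0.6):
    """Mix the user's own content score with the neighbourhood average."""
    neigh = follows[user]
    social = sum(content_score[n] for n in neigh) / len(neigh)
    return alpha * content_score[user] + (1 - alpha) * social

# User "d" is ambiguous from text alone (0.45), but the neighbourhood
# pulls the combined score toward the negative class.
print(round(combined("d"), 2))
```

Real approaches learn the mixing jointly (e.g., via graph-aware features or relational classifiers) rather than fixing alpha by hand, but the intuition is the same.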

September 19, 2019


Learning From Constraints
Marco Gori
Department of Engineering of Information and Mathematical Sciences (University of Siena – ITALY)

Learning and inference are traditionally regarded as the two opposite, yet complementary and puzzling, components of intelligence. In this talk we point out that a constraint-based modeling of agent-environment interactions makes it possible to unify learning and inference within the same mathematical framework. The unification is based on the abstract notion of constraint, which provides a representation of knowledge granules gained from the interaction with the environment. The agents are based on a deep neural network architecture, and their learning and inferential processes are driven by different schemes for enforcing the environmental constraints. Logic constraints are also included thanks to their translation into real-valued functions, which arises from the adoption of appropriate t-norms. Computational models like graph neural networks can be incorporated in the proposed framework thanks to the expression of structured domains by constraints. The basic ideas are presented through simple case studies ranging from learning and inference in social nets, missing data and the checking of logic constraints to pattern generation. The theory offers a natural bridge between the formalization of knowledge and the inductive acquisition of concepts from data.
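A minimal sketch of this unification, with all functions and data invented for the example: a supervised constraint (fit the labelled points) and a logic constraint (translated into a real-valued penalty via a Łukasiewicz connective) are summed into one objective, and a one-parameter model is trained on it by crude numerical gradient descent.

```python
import math

def f(w, x):                         # tiny one-parameter "network"
    return 1.0 / (1.0 + math.exp(-w * x))

data = [(1.0, 1.0), (-1.0, 0.0)]     # supervised examples (x, target)

def loss(w):
    sup = sum((f(w, x) - y) ** 2 for x, y in data)
    # Logic constraint  f(1) -> f(2), translated with the Łukasiewicz
    # implication min(1, 1 - a + b) and penalised as 1 - truth degree:
    logic = 1.0 - min(1.0, 1.0 - f(w, 1.0) + f(w, 2.0))
    return sup + logic

w, lr, eps = 0.0, 0.5, 1e-5
for _ in range(200):                 # numerical gradient descent
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    w -= lr * grad
print(w > 0, loss(w) < loss(0.0))
```

Both kinds of knowledge granule thus act through the same mechanism: each constraint contributes a real-valued penalty, and learning is the joint enforcement of all of them.
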

Human-robot emotional interaction
Sarah Cosentino
Global Center for Science and Engineering (Waseda University, Tokyo – JAPAN)

With the advent of assistive robots, i.e. robots designed to assist users in daily activities, emotional interaction is gaining particular importance in the field of “natural” interfaces and communication methods. But why is the ability to understand the emotional state of the human partner vital for a robot? Human communication takes place in parallel on two channels, the cognitive and the emotional. While exchanges on the cognitive level are direct, they are not necessarily unequivocally clear. Emotional communication complements and completes these exchanges to give a clearer picture of the situation in its context. Furthermore, knowing how to arouse positive emotions in others is a crucial factor in the establishment of a good interpersonal or person-robot relationship. The ability to understand and cope correctly with different emotional states will give robots an extra gear for becoming part of the human social sphere, making communications more natural and spontaneous, and possibly clarifying situations of ambiguity.

Learning from Physiological Data, from Affective Computing to BCI: Applications, Limits and Future Perspectives
Francesca Gasparini
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

Physiological data such as Galvanic Skin Response, Heart Rate, Temperature and the Electroencephalogram are involuntary signals that can reveal a person’s emotional state and do not lie. However, their message is not always easy to interpret. Starting from concrete applications in the fields of Affective Computing and Brain-Computer Interfaces, the successes achieved, the limits and future perspectives will be discussed.

A framework for collecting, unifying, and distributing labelled inertial data for Deep Learning
Daniela Micucci, Paolo Napoletano
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

Automatic recognition of activities of daily living (ADLs) from inertial signals is crucial for many application domains, such as healthcare, sport, and entertainment. Smartwatches and smartphones can be used both for collecting data and for recognizing actions by exploiting machine learning techniques. The use of machine learning techniques, and especially deep learning ones, for ADL recognition is growing rapidly. In this context, the databases used to train the classifiers are of great importance in order to evaluate the robustness of the techniques to data variability, which is mostly related to the physical characteristics and lifestyle of human subjects and to the position and brand of devices. The lecture aims at practically comparing traditional personalization-based machine learning techniques and deep learning ones in order to emphasize both the results and the pros and cons in the ADL recognition domain. Moreover, the lecture will introduce a framework for the collection, integration and distribution of homogenized signals in order to promote research on deep learning techniques.
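One preprocessing step common to most ADL pipelines, regardless of the classifier, is segmenting the raw inertial stream into fixed-length, overlapping windows. The sketch below (names and sampling rate invented for the illustration, not the lecture's framework) shows 50%-overlapping windowing of a fake one-axis accelerometer trace:

```python
def sliding_windows(signal, width, overlap=0.5):
    """Split a 1-D signal into fixed-width windows with the given overlap."""
    step = max(1, int(width * (1 - overlap)))
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]

# Fake accelerometer trace: 200 samples, e.g. 4 seconds at 50 Hz.
trace = [0.1 * (i % 7) for i in range(200)]
windows = sliding_windows(trace, width=50)   # 1-second windows
print(len(windows), len(windows[0]))         # → 7 50
```

Each window then becomes one training example, either as hand-crafted features for traditional classifiers or as a raw segment fed to a deep network; homogenizing window width and sampling rate across datasets is precisely what makes them comparable.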

Platinum Society: Convergence of social participation, big data and AI
Tomoo Matsuda
Mitsubishi Research Institute (Tokyo – JAPAN)

What will big data and AI contribute to the super-aging society? With an aging rate of 28% in Japan and 23% in Italy, both countries are facing a super-aging society. The isolation of the elderly is also becoming a serious problem for both countries. In order to solve social isolation problems and to extend the healthy lifespan, social participation and interaction between the generations are very important. Big data and AI are also promising, for instance for analyzing the health data and behavioural patterns of the elderly. The convergence of social participation, big data, AI and cutting-edge technology could provide creative solutions for the aging society. Even though the super-aging society is a big problem, if we change our point of view the threat could become a chance and an opportunity. “Silver society” has been used as a negative image of the aging society, evoking silver hair and getting old. Now, we need a more positive philosophy. “Platinum Society” is a new positive concept symbolizing an active aging society that, unlike silver, will never get rusty. Some best practices of social participation by the elderly involving big data, AI and technology will be introduced in this presentation.

September 20, 2019


The Journey from AI Research to Product
Rafah Hosn
Microsoft Research (New York – USA)

As end users in this Artificial Intelligence era, we are the beneficiaries of a plethora of ML-based products that are now, more than ever, changing the way we think, live and interpret the world around us. Often, these products are the fruit of the work of researchers and scientists on a new underlying ML theory or an extension of an existing algorithm. How does one go from research paper to a full-fledged product that end users can benefit from? Who is involved in this journey, and what skills and processes are needed to get there? In this lecture, I will describe the process by which AI research gets transformed into products and the role product management plays in championing this journey. The lecture will ground the learning in a recent product that came out of Microsoft Research called Personalizer, a Reinforcement Learning-based product that offers personalized experiences across a multitude of scenarios.

From psychology to cybernetics: the projective consciousness model
David Rudrauf
Laboratory of Multimodal Modelling of Emotion & Feeling (University of Geneva – SWITZERLAND)

The Projective Consciousness Model (PCM) (Rudrauf et al., 2017; Williford et al., 2018) is an attempt to unify psychology and cybernetics, with the broadest possible explanatory power over a multiplicity of phenomena and behaviours, from perception, imagination, appraisal, emotion, social cognition and motivation to action. The PCM advances previous formulations of active inference by featuring an explicit psychological model of the form, structure and dynamics of conscious experience. It emphasises the formal concept of the Field of Consciousness, integrating 3D projective geometry and the Free Energy principle (Friston, 2010) for the global optimisation of action outcomes. The PCM offers a computable and integrative basis for quantitatively deriving and testing hypotheses about normal and pathological psychological mechanisms. The principles of the model will be explained and illustrated with applications to: perception, focusing on perceptual illusions; the imagination and its role in motivation and resilience; social cognition, in normal and pathological contexts; and affective dynamics and expressions. We will demonstrate the concepts using artificial agent simulations in virtual reality and implementations in collective robots. We will discuss perspectives for strong AI and autonomous robotics, around designing interpretable social and affective artificial agents.

Complex analytical systems: those that interest our customers the most
Marco Breda
Engineering Ingegneria Informatica S.p.A. (Rome, ITALY)

The time for widespread application of artificial intelligence in real work contexts has arrived. The availability of data, algorithms and libraries has made it possible. In this context, consulting and system integration companies have organized themselves with business lines and centers of competence to cover the entire life cycle of data science projects. Some real applications will be briefly illustrated among those that most interest our customers: complex, massive, hierarchical, real-time predictive systems; anomaly detection and predictive maintenance systems; systems for specific text mining problems; and systems for intelligent signal disaggregation.

AI and Virtual Reality at work
Giuseppe Vizzari, Fabio Luca Bonali
Department of Informatics, Systems and Communication (University of Milano-Bicocca – ITALY)

The talk will describe an ongoing research collaboration between computer scientists and AI researchers and geoscientists on a comprehensive project comprising different phases and activities: a) the acquisition of data for the creation of a virtual replica of an actual environment of interest; b) the creation of instruments within this kind of Virtual Environment supporting geoscience researchers in taking measurements, planning further actions and additional observations to be carried out in the field, discussing with colleagues, and using this form of highly interactive experience for the sake of teaching. The talk will present the idea, current state and planned developments of this project, taking more specifically the perspective of AI researchers to describe the challenges posed by this line of work and what AI can bring to this kind of innovative scenario.