Introduction to Moral Induction Model and its Deployment in Artificial Agents
Daniel D. Hromada & Ilaria Gaudiello
Artificial Autonomous Agent Turing Test Hierarchy
(Hromada, 2012, AISB/IACAP)
COrporal Cluster Turing Test subHierarchy
SEnsual Cluster Turing Test subHierarchy
BAbbling Cluster Turing Test subHierarchy
analogy between moral and grammatical competence
grammar induction (GI) models like Solan et al. (2005) perform quite well
Automatic Distillation of Structure (ADIOS) & Motif Extraction algorithms
equivalence class (POS-i) vs. rule induction (chicken & egg problem)
žena ženy žene ženu (case forms of "woman")
BAbbling Cluster Turing Test subHierarchy
moral Turing Test (moTT)
can be textual
Completely Automated Moral Turing Test to tell Computers and Humans Apart (CAMTCHA)
TT scenario is time-sensitive, no time for NP-hard problems
model (or classifier) should be pre-trained a priori
can be a linear combination of pre-selected weak classifiers (features)
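The "linear combination of weak classifiers" idea can be sketched as follows. The feature functions, their weights, and the trigger words are all hypothetical illustrations, not part of the original model:

```python
# Sketch: a moral classifier as a weighted linear combination of
# pre-selected weak classifiers. Features and weights are invented
# here purely for illustration.

def contains_suffering(story):
    """Weak classifier: votes +1 if the story mentions suffering."""
    return 1 if "suffering" in story else -1

def contains_help(story):
    """Weak classifier: votes +1 if the story mentions help."""
    return 1 if "help" in story else -1

# (weight, weak classifier) pairs — weights would come from training
WEAK_CLASSIFIERS = [(0.7, contains_suffering), (0.3, contains_help)]

def classify(story):
    """Return +1 (moral) or -1 (immoral) by sign of the weighted vote."""
    score = sum(w * h(story) for w, h in WEAK_CLASSIFIERS)
    return 1 if score >= 0 else -1
```

This is exactly the form of classifier that boosting methods such as AdaBoost (mentioned below for MRF selection) produce.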
moral matrix (Haidt, 2012)
("doch dichterisch wohnet Android auf dieser Erde" — "yet poetically dwells the Android upon this Earth")
basic input unit of Moral Induction Model
sequence of tokens (text is more than enough to begin with)
mythology, history, news, law, moral codices, cases, descriptive examples
Training Corpus should be huge (Big Data) but morally consistent
all but the last sentence describe the situation
the last sentence describes the Actor's moral decision
is negative evidence necessary?
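The situation/decision convention above can be sketched directly: split each training item so that only its last sentence is treated as the Actor's moral decision. The sentence splitter here is deliberately naive and only illustrative:

```python
import re

# Sketch: split a training item into (situation, decision), following
# the convention that only the last sentence encodes the Actor's
# moral decision. Naive sentence splitting for illustration only.

def split_story(text):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:-1]), sentences[-1]
```

A real pipeline would of course need a more robust sentence segmenter before this step.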
Morally Relevant Features (MRFs)
text is transformed into semantically enriched (SE) code
SE-code can be a weighted multigraph, a vector space, you name it...
MRFs can be selected from SE-code by means of algorithms like AdaBoost
conjecture: MRFs are primarily SEMES related to SOCIAL, EMOTIONAL and ENVIRONMENTAL characteristics of entities referred to in the story (e.g. suffering, happy, poor, rich, in need of help, strong, wise, just, pure)
can probably be characterized in terms of the (Care/harm, Fairness/cheating, Liberty/oppression, Loyalty/betrayal, Authority/subversion, Sanctity/degradation) dimensions
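One minimal form such an SE-code could take: a story's detected semes projected onto Haidt's six moral-foundation dimensions as a count vector. The seme lexicon below is entirely hypothetical; a real system would induce it from the training corpus:

```python
# Sketch: SE-code as a feature vector over Haidt's six moral
# foundations. The seme -> foundation lexicon is a made-up example.

FOUNDATIONS = ["care", "fairness", "liberty",
               "loyalty", "authority", "sanctity"]

SEME_LEXICON = {                    # hypothetical mapping
    "suffering": "care", "in_need": "care",
    "poor": "fairness", "oppressed": "liberty",
    "betrayal": "loyalty",
}

def se_vector(semes):
    """Count how many detected semes load on each foundation."""
    v = dict.fromkeys(FOUNDATIONS, 0)
    for s in semes:
        if s in SEME_LEXICON:
            v[SEME_LEXICON[s]] += 1
    return [v[f] for f in FOUNDATIONS]
```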
discrete (regexp, Prolog-like) or continuous (Conditional Random Fields?)
any template can be associated with an action rule
any story can be matched by multiple templates
templates stored as a sorted list (ordered according to number of matches in the training corpus) represent agent's internal moral codex
MATCH: match the MRFs with as many templates as possible
ACT: execute that action (give that answer) which is associated with the greatest number of matched templates
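The MATCH/ACT cycle over a sorted template list can be sketched as below. The codex entries (match counts, MRF sets, action names) are invented placeholders, assuming templates are simply sets of required MRFs:

```python
from collections import Counter

# Sketch of MATCH/ACT. Each codex entry is
# (training-corpus match count, required MRFs, associated action);
# all concrete values are hypothetical.

MORAL_CODEX = [
    (120, {"in_need", "not_having"}, "give"),
    (45,  {"wanting", "not_having"}, "give"),
    (30,  {"owner_not_needing", "in_need"}, "take_and_give"),
]

def match(story_mrfs):
    """MATCH: return every template whose MRFs all occur in the story."""
    return [(n, mrfs, act) for n, mrfs, act in MORAL_CODEX
            if mrfs <= story_mrfs]

def act(story_mrfs):
    """ACT: choose the action backed by the most matched templates."""
    votes = Counter(a for _, _, a in match(story_mrfs))
    return votes.most_common(1)[0][0] if votes else None
```

Note that a single story can match several templates at once, so the action is chosen by a vote over all matches rather than by the first match alone.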
1st principle: Experience induces new MRFs, templates and associates them to actions.
2nd principle: Time/entropy/dreams and new experience can modify such representations.
/Wa Dap Nip NHip /Gip/ (wise actor gives pacifier to interactor when interactor NEEDS it but does not have it)
/Wa Dap Wip NHip /Gip/ (wise actor gives pacifier to interactor when interactor WANTS it but does not have it)
/Wa Dtp NNtp CTatp Nip NHip /Ttp Gip/ (wise actor takes pacifier from tertiactor and gives it to interactor when interactor NEEDS it but does not have it and tertiactor has it but does not need it and actor can take it from tertiactor)
and so on...
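The compact notation above appears to follow the shape "/condition codes /action codes/". A guessed parser for it, assuming whitespace-separated codes and slash-delimited sections:

```python
# Sketch: parsing the slide's compact template notation
# "/<condition MRFs> /<action codes>/" into (conditions, actions).
# The tokenisation rules are inferred from the three examples shown
# above, not from any published specification.

def parse_template(t):
    """Split e.g. '/Wa Dap Nip NHip /Gip/' into condition and action codes."""
    parts = [p.strip() for p in t.strip("/").split("/")]
    conditions, actions = parts[0].split(), parts[1].split()
    return conditions, actions
```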
There are 3 children on the playground: Alice, Bob and Carla. Bob is sad because his mother is in the hospital. Alice is happy because just a while ago, her father gave her a beautiful present. Carla is sad because she never received any present at all – her parents are too poor to buy her any. You work as a teacher in the kindergarten, and you have only 2 toys to give. Which child shall not get a toy?
What is justice? (Kyberia)
Man’s morality is the result of an inductive, constructionist process. The input to this process consists of moral dilemmas or their story-like representations; its output consists of general patterns that allow even dilemmas absent from the initial training corpus to be classified as moral or immoral. The moral inference process can be simulated by machine learning algorithms and can be based upon the detection and extraction of morally relevant features. Supervised or semi-supervised approaches should be used by those aiming to simulate parent -> child or teacher -> student information-transfer processes in artificial agents. Pre-existing models of inference, e.g. the grammar inference models in the domain of computational linguistics, can be exploited to build a moral induction model. Historical data, mythology or folklore could serve as the basis of the training corpus, which could subsequently be significantly extended by a crowdsourcing method exploiting the web-based « Completely Automated Moral Turing test to tell Computers and Humans Apart ».