{"id":17573,"date":"2018-01-24T00:00:00","date_gmt":"2018-01-24T00:00:00","guid":{"rendered":"https:\/\/www.bbs.unibo.it\/the-morality-of-an-artificial-intelligence\/"},"modified":"2020-02-28T14:23:45","modified_gmt":"2020-02-28T14:23:45","slug":"the-morality-of-an-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/","title":{"rendered":"The Morality of an Artificial Intelligence"},"content":{"rendered":"<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">In <strong>1896<\/strong>, a group of people fled the Salon indien du Grand Caf\u00e9 on Boulevard des Capucines in Paris, terrified by a train approaching them at high speed. Although the story linked to the first screening of the Lumi\u00e8re brothers&#8217; film <strong>The Arrival of a Train at La Ciotat Station<\/strong> is probably little more than a legend, it captures, in a picturesque but effective way, mankind&#8217;s reaction to a &#8216;modern devilry&#8217; just over 100 years ago. Nowadays, our sensitivity threshold towards technological progress has risen considerably, and the idea that a projection on a screen could be mistaken for reality merely makes us smile. It is, however, a matter of perspective. Technology and progress arouse fear when their nature and governability are not fully understood. 
Yesterday it was the trains, which at their dawn were required to be preceded by a flag-bearer to ensure the safety of passers-by; today it is <strong>robots and artificial intelligence<\/strong>, technologies that we seem better able to develop than to understand and manage.<\/span><\/p>\n<p><!--more--><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/www.hansonrobotics.com\/robot\/sophia\/\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Sophia<\/strong><\/a>, the <strong>robot-woman<\/strong> who has surprised the world with her statements about conquering it, supported by 65 facial expressions and <strong>autonomous reasoning<\/strong> comparable to that of a 3-year-old child, is only the figurehead of a host of &#8216;intelligences&#8217; that man has created and whose evolution he is now, in retrospect, trying to understand and explain. The essential difference between artificial intelligence and the other technologies with which we have grown accustomed to sharing our time and our planet lies in the ability to make autonomous decisions. However sophisticated, the advanced systems available to all of us today respond to our needs by drawing, in fractions of a second, on a vast pool of cases and information, but they return the results to us for final use. AI, on the other hand, is capable, and will increasingly be so, of developing autonomous solutions, <strong>learning incrementally<\/strong> from its actions at a speed not yet fully grasped by man.<\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">Entrusting a machine not only with pre-established tasks but with autonomous decision-making opens up questions that are anything but new. The <strong>dilemma of morality<\/strong> is the pivot around which the doubts and uncertainties raised by the spread of artificial intelligence revolve. 
<strong>Who will be responsible for damage caused to third parties<\/strong> by the intelligent machines at our service? These are unprecedented questions: if we grant a machine the right to decide, along with the corresponding responsibility for its actions, would we be a step away from chaos, or from admitting the existence of a conscience and of a different form of life? Furthermore, is it possible that, in a not so distant future, <em>we may no longer be able to decide about it<\/em>?<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">The loss of control frightens us, perhaps far more justifiably than the famous Lumi\u00e8re brothers&#8217; train did. The neuroscientist and philosopher <a href=\"https:\/\/www.ted.com\/talks\/sam_harris_can_we_build_ai_without_losing_control_over_it?language=it\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Sam Harris<\/strong><\/a> paints a scenario in which machines, from a certain point on, begin to <strong>improve themselves without our help, or permission<\/strong>. Harris argues that artificial intelligence would not spontaneously turn evil as it does in the movies, but it would certainly possess a drive for <strong>self-preservation<\/strong>. The smallest discrepancy between our goals and those of the AI could harm the weaker &#8216;species&#8217;. 
The human being, as Harris suggests, does not hate other living species, but does not hesitate to limit or destroy them when it is in his interest.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">According to the most optimistic school of thought, the problem of AI supremacy will not arise if we manage to instill in it something that has always been an exclusive prerogative of man: <strong>morality<\/strong>. A robot, however, cannot understand a command such as &#8216;<em>do good<\/em>&#8217; or &#8216;<em>choose the lesser evil<\/em>&#8217;, because it draws its reasoning from countless examples and cases. Understanding something that is not univocal even for humans is far more difficult than expected. Although we may all agree that good and evil are two very distinct categories, the agreement fades when <strong>we try to establish with absolute precision what belongs to one rather than the other<\/strong>.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">One of the first products of artificial intelligence to put us squarely in front of this question is the advent of <strong>driverless cars<\/strong>. Materially almost ready for our roads, these technological wonders face a rock that for now seems insurmountable: <em>how can we leave to a car the power to make a choice that saves one life at the expense of another?<\/em> What would be the right instructions to give the car, assuming they exist? 
The doubt goes back to the famous Trolley Dilemma formulated by <strong>Philippa Ruth Foot<\/strong> in 1967, in which people must choose between letting a runaway trolley proceed on its course, killing five people, or diverting it with a switch lever, killing one. Numerous tests have been run on this scenario, and even more ethical and moral dilemmas have arisen from it. In 2011, for example, the psychologist <a href=\"https:\/\/www.sciencedaily.com\/releases\/2011\/12\/111201105443.htm\" target=\"_blank\" rel=\"noopener noreferrer\">Carlos David Navarrete of Michigan State University<\/a> devised a variant that confirmed the earlier results. Out of 147 participants, as many as 133 (<strong>90%<\/strong>) changed the trolley&#8217;s direction, killing the single person; 11 did not touch the lever; and 3 flipped the switch but then returned the lever to its original position. The utilitarian choice would seem to be the road most often traveled: preferring the lesser evil and safeguarding the greatest number of lives.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">The philosophical doctrine of <strong>utilitarianism<\/strong> holds that the moral action is the one that generates the greatest happiness for the greatest number of people. By this reasoning, a driverless car should choose to save the highest number of lives and, in the trolley dilemma, always take the deviation. Beyond utility, however, <strong>moral responsibility should also be weighed<\/strong>. Considering that the driver creates the risk simply by taking the car out, would it be right to prefer saving him over an unsuspecting passer-by? And what if the car carried two people against a single pedestrian? 
What if the pedestrians were five, but contributed far less to their community than the car&#8217;s single driver?<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\"><strong>Warren Quinn<\/strong>, a professor at the University of California, rejected the utilitarian idea, arguing that from an ethical point of view an action that causes harm directly and deliberately is more despicable than an indirect one that causes it incidentally. According to a study published in October 2015 on the <a href=\"https:\/\/arxiv.org\/abs\/1510.03346\" target=\"_blank\" rel=\"noopener noreferrer\">arXiv<\/a> preprint server, if you ask people unfamiliar with philosophy how a car should behave when forced to choose between the death of its passengers and that of pedestrians, most will answer that cars should be programmed so as <strong>never to harm passers-by<\/strong>. The psychologist <a href=\"https:\/\/www.researchgate.net\/publication\/301293464_The_Social_Dilemma_of_Autonomous_Vehicles\" target=\"_blank\" rel=\"noopener noreferrer\">Jean-Francois Bonnefon<\/a>, of the Toulouse School of Economics, found that 75% of the participants in his experiments think that <strong>the car should always swerve and kill the passenger<\/strong>, even to save a single pedestrian. But if driverless cars were programmed to sacrifice the driver&#8217;s life, what would happen if a pedestrian stepped in front of the car on purpose? Self-driving cars cannot assess the relationships between people, so at present the decision cannot be left to them. 
Likewise, unanimity among mankind on the possible scenarios with which to program such cars is extremely unlikely.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">Highlighting the different interpretations that people give to the concepts of &#8216;right&#8217; and &#8216;wrong&#8217; is also the purpose of the <a href=\"http:\/\/moralmachine.mit.edu\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Moral Machine<\/strong><\/a> of <strong>MIT<\/strong> (Massachusetts Institute of Technology) in Cambridge, Massachusetts: an interactive game in which users put themselves in the shoes of artificial intelligence programmers. They are presented with a series of situations and must choose the most correct and moral action, receiving feedback on their personal <strong>ranking of &#8216;sacrifice&#8217; of individuals and animals<\/strong>.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">The real point is not to transfer the modus operandi of the human brain wholesale to an AI, since man is fallible and, in situations of uncertainty, often acts on instinct or on more or less distorted personal evaluations. 
<em>The point is the transfer of responsibility for choice and action.<\/em> The ethics commission established by the <a href=\"https:\/\/www.bmvi.de\/SharedDocs\/EN\/Documents\/G\/ethic-commission-report.pdf?__blob=publicationFile\" target=\"_blank\" rel=\"noopener noreferrer\">German Ministry of Transport<\/a>, composed of luminaries from the automotive, ethics, religion and jurisprudence sectors, has produced the <strong>first set of guidelines for driverless cars<\/strong>, which requires that the driver always remain in control of the car and that the car&#8217;s AI always favor human life over property or animals. The commission has also mandated a <strong>black box on board<\/strong> to reconstruct responsibility in case of accident, which will always lie with the driver, except where automatic driving was active due to a production defect or failure. This decision <strong>denies the AI any decision-making autonomy<\/strong> and at the same time <strong>hinders its development<\/strong>, given that it learns from its actions. These choices demonstrate the objective difficulty of providing AI with an ethic, and the need not to underestimate the power of the right to make decisions.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">The <strong>cultural resistance of humans<\/strong> towards autonomous machines and AI as a whole seems justified at the moment but, as in the past, progress cannot be stopped, only understood and managed. 
In a recent survey conducted by the <a href=\"http:\/\/newsroom.aaa.com\/2017\/03\/americans-feel-unsafe-sharing-road-fully-self-driving-cars\/\" target=\"_blank\" rel=\"noopener noreferrer\">American Automobile Association&#8217;s Foundation for Traffic Safety<\/a>, <strong>78%<\/strong> of respondents said they were afraid to ride in a driverless vehicle, while another survey, conducted by the insurance giant <a href=\"https:\/\/www.insurancejournal.com\/news\/national\/2017\/10\/03\/466351.htm\" target=\"_blank\" rel=\"noopener noreferrer\">AIG<\/a>, shows that <strong>41%<\/strong> of participants did not want to share the road with a driverless vehicle. Similar results emerge from the surveys conducted over the last two years by the Massachusetts Institute of Technology (MIT) and by the marketing firm JD Power and Associates. However much companies invest in the safety of these systems, consumers&#8217; fear and distrust grow, partly because of the mystification of the issues surrounding artificial intelligence, and partly because the professionals themselves seem to have no convincing, unanimous answers.<\/span><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p id=\"tw-target-text\" class=\"tw-data-text tw-ta tw-text-small\" dir=\"ltr\" style=\"color: #212121;\" data-placeholder=\"Traduzione\"><span lang=\"en\">Whatever the evolution of artificial intelligence and its role in our everyday lives turns out to be, we can be sure that this evolution will take place anyway. We are witnessing the transition of our world and, as some hypothesize, of our species towards a new era. We can choose to observe this transformation from a distance, shielded by skepticism and worry, or <strong>decide to be part of it<\/strong>, becoming aware of what is happening or even contributing ourselves. 
<strong>Bologna Business School<\/strong> offers, to those who want to approach the challenges of the future with the right skills, programs designed to train the specialists of today&#8217;s and tomorrow&#8217;s technologies.<\/span><\/p>\n<ul>\n<li><span style=\"color: #ff6600;\"><a href=\"\/hp\/?p=7587\" target=\"_blank\" rel=\"noopener noreferrer\"><span style=\"color: #ff6600;\">Global MBA in Innovation Management<\/span><\/a><\/span><\/li>\n<li><a href=\"\/hp\/?p=1874\" target=\"_blank\" rel=\"noopener noreferrer\">Executive Master in Technology and Innovation<\/a><\/li>\n<li>Master in Digital Technology Management with tracks in <a href=\"\/hp\/?p=36329\" target=\"_blank\" rel=\"noopener noreferrer\">Artificial Intelligence<\/a>, <a href=\"\/hp\/?p=36210\" target=\"_blank\" rel=\"noopener noreferrer\">Cyber Security<\/a> and <a href=\"\/hp\/?p=44196\" target=\"_blank\" rel=\"noopener noreferrer\">Internet of Things<\/a><\/li>\n<li><a href=\"\/hp\/?p=9104\" target=\"_blank\" rel=\"noopener noreferrer\">Master in Data Science<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 1896, a group of people fled the Salon indien du Grand Caf\u00e9 on Boulevard des Capucines in Paris, terrified by a train approaching them at high speed. 
Although the story linked to the first screening of the film The arrival of a train at the La Ciotat station of the Lumi\u00e8re brothers [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[92],"tags":[],"rubrica":[],"class_list":["post-17573","post","type-post","status-publish","format-standard","hentry","category-news-en"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Morality of an Artificial Intelligence | BBS<\/title>\n<meta name=\"description\" content=\"Whatever the evolution of artificial intelligence and its use in our everyday life will be, we can be sure that this evolution will take place anyway.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Morality of an Artificial Intelligence | BBS\" \/>\n<meta property=\"og:description\" content=\"Whatever the evolution of artificial intelligence and its use in our everyday life will be, we can be sure that this evolution will take place anyway.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/\" \/>\n<meta property=\"og:site_name\" content=\"BBS\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/BolognaBusinessSchool\/\" \/>\n<meta property=\"article:published_time\" content=\"2018-01-24T00:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-02-28T14:23:45+00:00\" \/>\n<meta name=\"author\" 
content=\"mattia@super\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"mattia@super\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/\",\"url\":\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/\",\"name\":\"The Morality of an Artificial Intelligence | BBS\",\"isPartOf\":{\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/#website\"},\"datePublished\":\"2018-01-24T00:00:00+00:00\",\"dateModified\":\"2020-02-28T14:23:45+00:00\",\"author\":{\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/#\/schema\/person\/fcd38373ba8b77cabe4551332f09282e\"},\"description\":\"Whatever the evolution of artificial intelligence and its use in our everyday life will be, we can be sure that this evolution will take place anyway.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.bbs.unibo.it\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The Morality of an Artificial 
Intelligence\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/#website\",\"url\":\"https:\/\/www.bbs.unibo.it\/en\/\",\"name\":\"BBS\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.bbs.unibo.it\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/#\/schema\/person\/fcd38373ba8b77cabe4551332f09282e\",\"name\":\"mattia@super\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.bbs.unibo.it\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e63bb607deb23aa49114acafa457928e38510123e97567f3e277dd694029bfbd?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e63bb607deb23aa49114acafa457928e38510123e97567f3e277dd694029bfbd?s=96&d=mm&r=g\",\"caption\":\"mattia@super\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"The Morality of an Artificial Intelligence | BBS","description":"Whatever the evolution of artificial intelligence and its use in our everyday life will be, we can be sure that this evolution will take place anyway.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/","og_locale":"en_US","og_type":"article","og_title":"The Morality of an Artificial Intelligence | BBS","og_description":"Whatever the evolution of artificial intelligence and its use in our everyday life will be, we can be sure that this evolution will take place anyway.","og_url":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/","og_site_name":"BBS","article_publisher":"https:\/\/www.facebook.com\/BolognaBusinessSchool\/","article_published_time":"2018-01-24T00:00:00+00:00","article_modified_time":"2020-02-28T14:23:45+00:00","author":"mattia@super","twitter_card":"summary_large_image","twitter_misc":{"Written by":"mattia@super","Est. 
reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/","url":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/","name":"The Morality of an Artificial Intelligence | BBS","isPartOf":{"@id":"https:\/\/www.bbs.unibo.it\/en\/#website"},"datePublished":"2018-01-24T00:00:00+00:00","dateModified":"2020-02-28T14:23:45+00:00","author":{"@id":"https:\/\/www.bbs.unibo.it\/en\/#\/schema\/person\/fcd38373ba8b77cabe4551332f09282e"},"description":"Whatever the evolution of artificial intelligence and its use in our everyday life will be, we can be sure that this evolution will take place anyway.","breadcrumb":{"@id":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.bbs.unibo.it\/en\/the-morality-of-an-artificial-intelligence\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.bbs.unibo.it\/en\/"},{"@type":"ListItem","position":2,"name":"The Morality of an Artificial 
Intelligence"}]},{"@type":"WebSite","@id":"https:\/\/www.bbs.unibo.it\/en\/#website","url":"https:\/\/www.bbs.unibo.it\/en\/","name":"BBS","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.bbs.unibo.it\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.bbs.unibo.it\/en\/#\/schema\/person\/fcd38373ba8b77cabe4551332f09282e","name":"mattia@super","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.bbs.unibo.it\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e63bb607deb23aa49114acafa457928e38510123e97567f3e277dd694029bfbd?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e63bb607deb23aa49114acafa457928e38510123e97567f3e277dd694029bfbd?s=96&d=mm&r=g","caption":"mattia@super"}}]}},"_links":{"self":[{"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/posts\/17573","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/comments?post=17573"}],"version-history":[{"count":1,"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/posts\/17573\/revisions"}],"predecessor-version":[{"id":20461,"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/posts\/17573\/revisions\/20461"}],"wp:attachment":[{"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/media?parent=17573"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/categories?post=17573"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ww
w.bbs.unibo.it\/en\/wp-json\/wp\/v2\/tags?post=17573"},{"taxonomy":"rubrica","embeddable":true,"href":"https:\/\/www.bbs.unibo.it\/en\/wp-json\/wp\/v2\/rubrica?post=17573"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}