<!-- Appendix A - Normative Ethics -->
<style type="text/css">
table.tableLayout{
margin: auto;
border: 1px solid;
border-collapse: collapse;
border-spacing: 1px
}
table.tableLayout tr{
border: 1px solid;
border-collapse: collapse;
padding: 5px;
}
table.tableLayout th{
border: 1px solid;
border-collapse: collapse;
padding: 3px;
}
table.tableLayout td{
border: 1px solid;
padding: 5px;
}
</style>
<h1 id="introduction">A.1 Introduction</h1>
<p>Ethics is the branch of philosophy concerned with questions of right
and wrong, good and bad, and how we ought to live our lives. We make
ethical choices every day. When we decide whether to tell the truth or
lie, help someone in need or ignore them, treat others with respect or
act in a discriminatory manner, we are making moral decisions that
reflect our values, beliefs, and moral principles. Philosophical ethics
seeks to provide a systematic framework for making these
decisions.<p>
In this chapter, we will explore some of the key concepts and theories
in philosophical ethics. This branch of research is also commonly called
<em>moral philosophy</em>. We use the terms <em>ethics</em> and
<em>morality</em> interchangeably. The subfield of ethics dedicated to
developing moral theories is called normative ethics. Normative ethics
is the consideration of questions like “Which actions are right and
wrong?”<p>
This chapter outlines some of the main reasons why it’s important for AI
researchers to learn about ethics. We then turn to the basic building
blocks of moral theories, examining various moral considerations like
intrinsic goods, constraints, and special obligations. Then we will
explore some of the most prominent ethical theories, like
utilitarianism, deontology, and virtue ethics, evaluating their
strengths and weaknesses. Throughout, our
key focus is on the ethical concepts that are most relevant to the
development, implementation, and governance of AI.</p>
<h1 id="why-learn-about-ethics">A.2 Why Learn About Ethics?</h1>
<p>This chapter will help you understand ethics—both in the context of
this book and in public discourse about AI safety. Here, we cover the
most prominent theories in the history of ethical discourse. After
reading this chapter, you should have a solid foundation for
understanding ethics in AI discussions.<p>
Ethics is relevant to the field of AI for three key reasons. First, AI
systems are increasingly being integrated into various aspects of human
life, such as healthcare, education, finance, and transportation, and
they have the potential to significantly impact our lives and wellbeing.
As AI systems become increasingly intelligent and powerful, it is
crucial to ensure that they are designed, developed, and deployed in
ways that promote widely shared values and do not amplify existing
social biases or cause needless harms. Unfortunately, there are already
numerous examples of AI systems being designed in ways that failed to
adequately consider such risks, such as racially biased facial
recognition systems. In order to wisely manage the growing power of AI
systems, developers and users of AI systems need to understand the
ethical challenges that AI systems introduce or exacerbate.<p>
Second, AI systems raise a range of new ethical questions that are
unique to their technological nature and capabilities. For instance, AI
systems can generate, process, and analyze vast amounts of data—much
more than was previously possible. In what ways does this new technology
challenge traditional notions of privacy, consent, intellectual
property, and transparency? Another important set of questions relates
to the moral status of AI systems. This is likely to become more
pressing if AI systems become increasingly autonomous and able to
interact with human beings in ways that convince their users that they
have their own preferences and feelings. What should we do if AI systems
appear to meet some of the potential criteria for sentience or other
morally relevant features?<p>
Third, as further explored in the Single Agent Safety and Machine Ethics chapters, it is challenging to
specify objectives or goals for highly powerful AI systems in ways that
do not lead in a predictable way to highly undesirable consequences. In
order to grasp why it is so challenging to specify these objectives, it
is helpful to understand the ethical theories that have been proposed.
Questions of what it means to act rightly or to live a good life have
been debated by many thinkers over several millennia, with strong
arguments advanced for a number of competing positions. These debates
can provide us with greater insight into the challenges that AI
developers will need to overcome in order to build increasingly powerful
AI systems in a beneficial way. Rather than attempting to bypass or
ignore such controversies, AI developers should accept that their design
decisions may raise difficult ethical questions that need to be
considered carefully.</p>
<h2 id="is-ethics-relative">A.2.1 Is Ethics “Relative?”</h2>
<p><strong>Even after millennia of deliberation, we do not agree on all
of morality.</strong> Philosophers have been thinking about and debating
moral principles for millennia, yet they have not achieved consensus on
many moral issues. Widespread disagreements remain in both philosophical
and public discourse, including about important topics like abortion,
assisted suicide, capital punishment, animal rights, and the effects of
human activity on natural ecosystems. One troubling idea is that these
disagreements are irresolvable because no moral principles or judgments
are absolutely or universally correct. In the case of AI, this may lead
AI developers to believe that they have no role to play in shaping how
AI systems behave.</p>
<p><strong>Cultural relativism claims there is no objective, culturally
independent standard of morality.</strong> Consider the principle that
consensual relationships between adults are acceptable regardless of
whether they are heterosexual or homosexual. A moral relativist would
suggest this principle is correct for people who belong to some cultures
where homosexuality is accepted, but incorrect for people who belong to
other cultures where homosexuality is criminalized or socially
stigmatized. These differences are systemic: many cultures have moral
standards that seem incompatible with others’ ideals, such as different
views on marriage, divorce, gender roles, freedom of speech, or
religious tolerance. These differences form the basis for arguments for
cultural relativism.</p>
<p><strong>Normative moral relativism vs. descriptive moral relativism
<span class="citation" data-cites="gowans2021moral">[1]</span>.</strong>
Moral relativism has various forms, but here we discuss two: descriptive
moral relativism and normative moral relativism. Descriptive moral
relativism is straightforward: it means that different societies around
the world have different sets of rules about what’s right and wrong,
much like they have unique cuisines, customs, and traditions.
Descriptive moral relativism makes no claims about which, if any, of
these rules is right or wrong. Normative moral relativism suggests that
one cannot say that something is right or wrong in general, but only
relative to a particular culture or set of norms. Normative moral
relativists conclude that morality itself is not something universal or
absolute. Strictly speaking, descriptive moral relativism and normative
moral relativism are independent of each other, although in practice
descriptive moral relativism is often treated as if it provides evidence
for normative moral relativism.</p>
<h3 id="objections-to-moral-relativism">Objections to Moral
Relativism</h3>
<p>A number of arguments can be advanced against descriptive and
normative moral relativism <span class="citation"
data-cites="gowans2021moral">[1]</span>, which we explore in this
subsection. We will explore the argument that cultural differences might
be overstated, which makes descriptive moral relativism harder to
uphold. Another argument is that proponents of normative moral
relativism often face challenges when confronted with instances of
extreme harm. For instance, while many would unequivocally agree that
torturing a child for entertainment is morally wrong, a normative moral
relativist might be required to argue that its morality is contingent
upon the cultural context. Extreme examples such as this suggest few
people are willing to be thoroughgoing moral relativists.</p>
<p><strong>Human moral systems appear to share some common
features.</strong> Some have argued that most or all societies share
some norms. For example, prohibitions against lying, stealing, or
killing human beings are common across cultures. Many cultures have some
form of reciprocity, which is the idea that people have a moral
obligation to repay the kindness or generosity they have received from
others or that people should treat others the way they wish to be
treated <span class="citation"
data-cites="curry2019cooperate">[2]</span>. This can be seen in the
widespread practice of exchanging gifts and in moral codes that
emphasize fairness and justice. Additionally, human cultures typically
have some concept of parenthood, which often involves a moral
obligation to care for one’s children, as well as broader obligations to
one’s family and group. These common features suggest that there are at
least a few universal aspects of morality that transcend cultural
boundaries.</p>
<p><strong>Moral relativism conflicts with common-sense morality <span
class="citation" data-cites="gowans2021moral">[1]</span>.</strong>
Consider controversial practices still prevalent in some cultures, such
as honor killings in parts of the Middle East. The honor of a family
depends on the “purity” of its women. If a woman is raped or is deemed
to have compromised her chastity in some way, the profound shame brought
upon her family may lead them to kill her in response. According to the
normative moral relativist, if such a practice is in line with the moral
standards of the society where it takes place, there is nothing wrong
with it. Even more disturbingly, on some versions of relativism, men in
these societies may even be considered morally in the wrong if they fail
to kill their wives, daughters or sisters for having worn the wrong
clothing, having premarital sex or being raped. Similarly, normative
moral relativism would require us to believe that the morality of owning
slaves was entirely dependent on the societal context. Moral
iconoclasts, such as early anti-slavery campaigners, would by definition
always be morally wrong. In practice, many moral relativists recoil when
required to accept that moral standards endorsing honor killings or
slavery are not wrong in any general sense.</p>
<p><strong>Cultural moral relativism denies the possibility of
meaningful moral debate or moral progress <span class="citation"
data-cites="gowans2021moral">[1]</span>.</strong> Moral relativism seems
to require us to accept contradictory claims. For example, moral
relativists might say that a supporter of gay marriage is correct in
saying that homosexuality is morally acceptable, while someone from a
different culture might be correct in saying that homosexuality is
morally wrong, provided that both claims are in line with the moral
standards of the cultures they respectively belong to. If moral
relativism requires us to simultaneously assert and deny that
homosexuality is morally acceptable, and any theory that generates
contradictions should be rejected, this would appear to mean that we
should reject moral relativism. In order to resist this, moral
relativists typically reinterpret the way we use moral language in a
way that can save it from contradiction. The relativist would say that
when we say “homosexuality is wrong”, what we really mean is
“Homosexuality is not approved by my society’s norms”. This means that
relativists have to deny the possibility of moral disagreement and claim
that anyone who engages in such debates does not understand the meaning
of what they are saying.</p>
<p><strong>Moral relativism does not necessarily promote tolerance <span
class="citation" data-cites="gowans2021moral">[1]</span>.</strong> Some
have argued that one of the attractions of moral relativism is that it
promotes tolerance. By recognizing cultural differences (descriptive
moral relativism), they may assert that everyone ought to do what their
culture says is right (normative moral relativism). However, in a
society that is deeply intolerant, cultural moral relativism cannot
support tolerance, as it cannot claim that this has any universal or
objective value. Moral relativism only recommends tolerance to cultures
where it is already accepted. Indeed, to be tolerant, one need not be a
normative moral relativist. There are alternative views which can
accommodate tolerance and multiple perspectives, such as
cosmopolitanism, liberal principles, and value pluralism.</p>
<p><strong>In practice, moral relativism can shut down ethics
discussions <span class="citation"
data-cites="gowans2021moral">[1]</span>.</strong> It is important to
note that different cultures have different moral standards. However, AI
developers sometimes invoke this observation and side with normative
moral relativism to avoid considering the ethics of their AI design
choices. Moreover, if AI developers do not analyze the ethical
implications of their choices and avoid ethical discussions by noting
the lack of cross-cultural consensus, the default is for
AI development to be driven by amoral forces, such as self-interest or
what makes the most sense in a competitive market. Decisions driven by
such forces, including commercial incentives, will not necessarily be
aligned with the broader interests of society. Moral relativism can be
unattractive from a pragmatic point of view, as it limits our ability to
engage in discussions that may sometimes lead to convergence on shared
principles. This quietist stance de-emphasizes moral arguments to the
benefit of economic incentives and self-interest.<p>
Why are these debates about moral relativism relevant to AI? People
commonly observe that different cultures have different beliefs when
discussing how to ensure that AIs promote human values. It is essential
not to conflate this observation with normative moral relativism and
conclude that AI developers have no ethical responsibilities. Instead,
they are responsible for ensuring that the values embodied in their AI
systems are beneficial. Rather than a barrier, cultural variation means
that making AIs ethical requires a broad, globally representative
approach.</p>
<h2 id="is-ethics-determined-by-religion">A.2.2 Is Ethics Determined by
Religion?</h2>
<p>Moral relativists may believe that studying ethics is futile because
ethical questions are irresolvable. On the other hand, some people
believe that studying ethics is futile because moral questions are
already solved. This position is most common among those who say that
religion is the source of morality.</p>
<h3 id="divine-command-theory">Divine Command Theory</h3>
<p><strong>Many believe morality depends on God’s will and
commands.</strong> The view called <em>divine command theory</em> says
whether an action is moral is determined solely by God’s commands rather
than any qualities of the action or its consequences. (We use the term
“God” inclusively to refer to the god or gods of any religion.) This
theory suggests that God has the power to create moral obligations and
can change them at will.<p>
While this book does not argue for or against any particular religion,
we do suggest that there are severe problems with equating religion and
morality. One problem is that it creates a problematic understanding of
God.<p>
If you believe there is a god, you likely believe he is more than just
an arbitrary authority figure. Many religious traditions view God as
inherently good. It is precisely because God is good that religion
compels us to follow God’s word. However, if you believe that we should
follow God’s word because God is good, then there must be some moral
qualities (like goodness) that exist independently of God’s rules—thus,
divine command theory is false <span class="citation"
data-cites="plato2004euthyphro">[3]</span>.<p>
To be clear, this is not an argument against believing in God or
religion. It is an argument against equating God or faith with morality.
Both religious people and irreligious people can behave morally or
immorally. That’s why everyone needs to understand the factors that
might make our actions right or wrong.</p>
<h1 id="moral-considerations">A.3 Moral Considerations</h1>
<p>How can we determine whether an action is right or wrong? What are
the kinds of principles and values that should guide our moral
decisions? There are many factors to consider. Here, we’ll focus on a
few that very commonly enter into moral decision making: goodness,
constraints, special obligations, and options.</p>
<h2 id="the-goodness-of-actions-and-their-consequences">A.3.1 The “Goodness”
of Actions and Their Consequences</h2>
<p>Moral decision making often involves considering the values, or
“goods,” that are at stake. These may be intrinsic goods or instrumental
goods.</p>
<p><strong>Intrinsic goods are things that are valuable for their own
sake.</strong> Philosophers disagree about what, if anything, is
intrinsically good, but many argue for the intrinsic value of things
like happiness, love, and knowledge. We value such things simply because
they are valuable—not because they necessarily lead to anything
else.</p>
<p><strong>Instrumental goods are things that are valuable because of
the benefits they provide or the outcomes they achieve.</strong> We
pursue instrumental goods as a means to an end, but not for their own
sake. Money, power, and education are examples of instrumental goods. We
value them because they can lead to other things we value, like
security, influence, career opportunities, or intrinsic goods.</p>
<p><strong>Intrinsically good things are not necessarily instrumentally
good.</strong> Sometimes, intrinsically bad things can be instrumentally
good and intrinsic goods can be instrumentally bad. For instance, many
people believe that honesty is intrinsically good. However, it’s easy to
imagine cases in which honesty can lead to bad outcomes, like hurt
feelings. Suppose a friend has confided in you that they are staying at
a shelter to hide from an abusive partner. If that abusive partner asks
you for your friend’s location, you may still think that honesty is
intrinsically good. However, revealing your friend’s location would be
instrumentally bad, as it may lead to further violence and perhaps even
a risk to your friend’s life. On the other hand, consider medical
treatments like chemotherapy. Chemotherapy is instrumentally good
because it can prolong cancer patients’ lives. Yet, as it requires the
administration of highly toxic drugs into a patient’s body, it could be
seen as harmful, or intrinsically bad. For many people, exercise is
painful, and pain is intrinsically bad, but exercise can be
instrumentally good.</p>
<p><strong>There is no consensus about what is intrinsically
good.</strong> Value pluralists believe that there are many intrinsic
goods. These values may include justice, rights, autonomy, and virtues
such as courage. Other philosophers believe there is only one
fundamental value. Among these, one common view is that the only
intrinsic good is wellbeing, and everything else is valuable only
insofar as it promotes wellbeing.</p>
<h3 id="sec:wellbeing">Wellbeing</h3>
<p><strong>Wellbeing is how well a person’s life is going for
them.</strong> It is commonly considered to be intrinsically good,
though there are different accounts of precisely what wellbeing is and
how we can evaluate it. Generally, a person’s wellbeing seems to depend
on the extent to which that person is happy, healthy, and fulfilled.
Three common accounts of wellbeing characterize it as 1) net pleasure
over pain, 2) preference satisfaction, or 3) a collection of objective
goods. Each account is elaborated below.</p>
<p>Some philosophers, known as <em>hedonists</em>, argue that wellbeing
is the achievement of the greatest balance of pleasure and happiness
over pain and suffering. (For simplicity we do not distinguish, in this
chapter, between “pleasure” and “happiness” or between “pain” and
“suffering,” though neither pair is interchangeable.) All else equal,
individuals who experience more pleasure have higher wellbeing. All else
equal, individuals who experience more pain have lower wellbeing.</p>
<p><strong>According to hedonism, pleasure is the only intrinsic
good.</strong> Goods like health, knowledge, and love are instrumentally
valuable. That is, they are only good insofar as they lead to pleasure.
It may feel as though other activities are intrinsically valuable. For
instance, someone who loves literature may feel that studying classic
works is valuable for its own sake. Yet, if the literature lover were
confronted with proof that reading the classics makes them less happy
than they otherwise would be, they might no longer value studying
literature. Hedonists believe that when we think we value certain
activities, we actually value the pleasure they bring us, not the
activities themselves.<p>
Hedonism is a relatively clear and intuitive account of wellbeing. It
seems to apply equally to everyone. That is, while we all may have
different preferences and desires, pleasure seems to be universally
valued. However, some philosophers argue that hedonism is an incomplete
account of wellbeing. They argue there may be other factors that
influence wellbeing, such as the pursuit of knowledge.</p>
<p>Some philosophers claim that what really matters for wellbeing is
that our preferences are satisfied, even if satisfying preferences does
not always lead to pleasurable experiences.<p>
One difficulty for preference-based theories is that there are different
kinds of preferences, and it’s unclear which ones matter. Preferences
can be split into three categories: stated preferences, revealed
preferences, and idealized preferences. Each of these categories can be
informative in different contexts.<p>
To illustrate different kinds of preferences, consider voter preferences
in a democratic election.<p>
In a democratic election, citizens choose which candidate they want to
elect by casting their vote on a ballot. Their choice to vote for a
given candidate can be impacted by a number of different factors.
Perhaps they have an existing political affiliation, are influenced by
social pressures, believe in the candidate’s policies, or maybe they
just like one candidate’s demeanor and personality. Importantly,
citizens may not always vote for the candidate they outwardly support,
and the choice to vote for a specific candidate can change when voters
discover new information.<p>
A voter’s <em>stated preference</em> is the candidate that they state
they support. Voters may express their stated preferences in
conversations, polls, and while campaigning.<p>
When a voter casts their vote on a ballot, they express their
<em>revealed preference</em>. Generally, a voter’s revealed preferences
align with their interests. For example, a voter who supports increased
funding for education might vote for a candidate who wants to increase
budgets for local public schools. A revealed preference is expressed by
your actions, not your words.<p>
People change their preferences upon learning new information.
Uninformed preferences can be reached quickly. For example, a voter
might have an uninformed preference based on a “gut reaction” to a
candidate. Voters can arrive at more <em>idealized preferences</em> once
they have gathered and evaluated all relevant information. They might
not actually do this—few people have the time or ability to perfectly
gather and evaluate all the relevant information that they would require
to find their idealized preferences. However, their preferences can
become more idealized over time. A voter might have an uninformed
preference for Candidate A and, after learning new information about
each candidate’s platform, they may arrive at a more informed preference
for a different candidate. In other words, preferences can change, and
they often do change as people become more informed.</p>
<p><strong>It is easy to learn about people’s stated preferences: simply
ask them.</strong> Political polls and surveys, for example, are an easy
way to gather information about people’s stated preferences. However,
stated preferences may not always predict what people will actually
choose. A voter may outwardly express support for Candidate X, but when
it comes to casting their ballot, they may vote for Candidate Y.
Similarly, someone might express a stated preference to eat healthier,
but that doesn’t necessarily mean that they will. Their behavior (such
as eating only chocolate for a week) may indicate that their revealed
preference is for unhealthy food.<p>
Revealed preferences can be harder to observe, but they are generally
more useful for predicting people’s behavior. Someone with a stated
preference for vegetables but a revealed preference for chocolate is
more likely to purchase and consume chocolate than vegetables. When
researching consumer behavior, economists often prefer to study
consumers’ revealed preferences (i.e. what they buy) rather than stated
preferences (i.e. what they say they’d like to buy).</p>
<p>Others believe that wellbeing is the achievement of an objective set
of “goods” or “values.” These goods are considered necessary for living
a good life regardless of a person’s individual experiences or
preferences. There is disagreement about which particular goods are
necessary for wellbeing. Commonly proposed goods include happiness,
health, relationships, knowledge, and more. Objective goods theorists
consider these values to be important for wellbeing independently of
individual beliefs and preferences.<p>
There is no uncontroversial way to determine which goods are important
for living a good life. However, this uncertainty is not a unique
problem for objective goods theory. It can be difficult for hedonists to
explain why happiness is the only value that’s important for wellbeing
and for preference satisfaction theorists to determine which preferences
matter most.<p>
While people disagree about which account of wellbeing is correct, most
people agree that wellbeing is an important moral consideration. All
else equal, taking actions that promote wellbeing is generally
considered morally superior to taking actions that reduce
wellbeing.<p>
In the future, it is conceivable that AIs might be conscious and have
preferences but not experience pleasure, which would mean they could
have wellbeing according to the preference satisfaction theorists but
not hedonists. It is also possible that in the future AIs may have
wellbeing according to all three accounts of wellbeing. This would
require that we dramatically reassess our relationship with AIs.</p>
<h2 id="constraints-and-special-obligations">A.3.2 Constraints and Special
Obligations</h2>
<p>We’ve covered the moral consideration of intrinsic goods, focusing in
particular on the intrinsic good of wellbeing. Special obligations and
constraints are two further key considerations when we make ethical
decisions.</p>
<p><strong>Special obligations are duties arising from
relationships.</strong> We can incur special obligations when we promise
someone to do something, take a professional position with
responsibilities, have a child, make a romantic commitment to a partner,
and so on. Sometimes we can have special obligations that we did not
volunteer for—a child to its parents, or our duties to fellow
citizens.</p>
<p><strong>Constraints are actions that we are morally prohibited from
taking.</strong> A constraint is something that places limits on our
actions. For example, many people think we’re morally prohibited from
lying, stealing, cheating, harming others, and more.</p>
<p><strong>Constraints often come in the form of rights.</strong> Rights
are claims that individuals may have over their community. For instance,
many people believe that humans have the rights to life, freedom,
privacy, and so on. Some people argue that any individual with the
capacity for experiencing pleasure and pain has rights. Non-human
individuals (including animals and AI systems) might also have certain
rights.<p>
An individual’s rights may require that society intervene in certain
ways to ensure that those rights are fulfilled. For instance, an
individual’s right to food, shelter, or education may require the rest
of society to pay taxes so that the government can ensure that
everyone’s rights are fulfilled. Rights that require certain actions
from others are called positive rights.<p>
Other rights may require that society abstain from certain actions. For
instance, an individual’s right to free speech, privacy, or freedom from
discrimination may require the rest of society to refrain from
censorship, spying, and discriminating. Rights that require others to
abstain from certain behaviors are called negative rights.<p>
Many AI researchers think that, for now, we should avoid accidentally
creating AIs that deserve rights <span class="citation"
data-cites="sebo2022chatbot">[1]</span>; for instance, perhaps all
entities that can experience suffering have natural rights to protect
them from it. Some think we should especially avoid giving them positive
rights; it might be fine to give them rights against being tortured but
not the right to vote. If they come to deserve rights, this would create
many complications and undermine our claim to control.</p>
<h2 id="what-does-it-mean-for-an-action-to-be-right-or-wrong">A.3.3 What does
it mean for an action to be right or wrong?</h2>
<p>Some of the first questions we might ask about ethics are: Are all
actions either right or wrong? Are some simply neutral? Are there other
distinctions we might want to draw between the morality of different
actions?<p>
The answers to these questions, like most moral questions, are the
subject of much debate. Here, we will simply examine what it might mean
for an action to be right or wrong. We will also draw some other useful
distinctions, like the distinction between obligatory and non-obligatory
actions, and between permissible and impermissible actions. These
distinctions will be useful in the following section, when we discuss
the considerations that inform our moral judgments.</p>
<h3 id="options">Options</h3>
<p>Special obligations and constraints tell us what we should not do,
and sometimes, what we must do. Intrinsic goods tell us about things
that would be good, should they happen. But philosophers debate how much
good we are required to do.</p>
<p><strong>Options are moral actions which we are neither required to do
nor forbidden from doing.</strong> Even though it would be good to
donate money, many people do not think people are morally required to
donate. This is an ethical option. If we believe in options, not all
actions are either required or forbidden.<p>
We now break actions down onto a spectrum, distinguishing between
obligatory and non-obligatory actions and between permissible and
impermissible actions.</p>
<p><strong>Obligatory actions are those that we are morally obligated or
required to perform.</strong> We have a moral duty or obligation to
carry out obligatory actions, based on ethical principles. For example,
it is generally considered obligatory to help someone in distress, or
refrain from hurting others.</p>
<p><strong>Non-obligatory actions are actions that are not morally
required or necessary.</strong> Non-obligatory actions may still be
morally good, but they are not considered to be obligatory. For example,
volunteering at a charity organization or donating to a good cause may
be good, but most people don’t consider them to be obligatory.</p>
<p><strong>Permissible actions may be morally good or simply neutral
(i.e. not good or bad).</strong> In general, any action that is not
impermissible is permissible. Moral obligations, of course, are
permissible. We can consider four other actions: volunteering, donating
to charity, eating a sandwich, and taking a walk. These seem
permissible, and can be classified into two categories.<p>
One class of permissible actions is called <em>supererogatory
actions</em>. These may include volunteering or giving to charity. They
are generally considered good; in fact, we tend to believe that the
people who do them deserve praise. On the other hand, we typically don’t
consider the failure to do these actions to be bad. We might think of
supererogatory actions as those that are morally good, but optional;
they go “above and beyond” what is morally required.<p>
Another class of permissible actions is called <em>morally neutral
actions</em>. These may include seemingly inconsequential activities
like eating a sandwich or taking a walk. Most people probably believe
that actions like these are neither right nor wrong.</p>
<p><strong>Impermissible actions are those that are morally prohibited
or unacceptable.</strong> These actions violate moral laws or principles
and are considered wrong. Stealing or attacking someone are generally
considered to be impermissible actions.<p>
</p>
<figure id="fig:action-types">
<img src="https://raw.githubusercontent.com/WilliamHodgkins/AISES/main/images/action_types.png" class="tb-img-full"/>
<p class="tb-caption">Classes of permissible and non-obligatory actions</p>
<!--<figcaption>Classes of permissible and non-obligatory actions-->
<!--</figcaption>-->
</figure>
<p>Some philosophers believe that all actions fit on a scale like the
one above. At one end of the scale are impermissible actions, like
murder, theft, or exploitation. At the other end are obligatory actions,
like honesty, respect, and not harming others. In between are neutral
and supererogatory actions. These are neither impermissible nor
obligatory. Many people believe that the vast majority of our actions
fall into these two categories. Crucially, in designing ethical AI
systems that operate in the real world, it is important to determine
which actions are obligatory and which actions are impermissible.<p>
However, some philosophers do not believe in options; rather, they hold
that all actions lie on a spectrum from the least moral to the most moral. We
will learn more about these positions, and others, when we discuss moral
theories later in this chapter.</p>
<h3 id="from-considerations-to-theories">From Considerations to
Theories</h3>
<p><strong>Moral considerations can guide our day-to-day decision
making.</strong> Understanding which factors are morally relevant can
help us think more clearly about what we should do. Of course, we don’t
always stop to consider every factor before making a decision. Rather,
we tend to draw broader conclusions or moral principles based on our
evaluations of specific cases. For instance, once we consider a few
examples of the ways in which stealing can harm others, we might draw
the conclusion that we shouldn’t steal.<p>
The considerations discussed in this section provide a basis on which we
can develop more practical, action-guiding theories about how we should
behave. The types of fundamental considerations in this section comprise
a subfield of ethics called <em>metaethics</em>. Metaethics is the
consideration of questions like “What makes an action right or wrong?”
and “What does it mean to say that an action is right or wrong?” <span
class="citation" data-cites="fisher2014metaethics">[2]</span><p>
These considerations are important in the context of designing AI
systems. In order to respond to situations in an appropriate way, AI
systems need to be able to identify morally relevant features and detect
situations where certain moral principles apply. They would also need to
be able to evaluate and compare the moral worth of potential actions,
taking into account various purported intrinsic goods as well as
normative factors such as special obligations and constraints. The
challenges of designing objectives for AI systems that respect moral
principles are further discussed in the Machine Ethics chapter.<p>
In the following section, we will discuss some popular moral
theories.</p>
<h1 id="moral-theories">A.4 Moral Theories</h1>
<p>Moral theories are systematic attempts to provide a general account
of moral principles that apply universally. Good moral theories should
provide a coherent, consistent framework for determining whether an
action is right or wrong. A basic background understanding of some of
the most commonly advanced moral theories provides a useful foundation
for thinking about the kinds of goals or ideals that we wish AI systems
to promote. Without this background, there is a risk that developers and
users of AI systems may jump to conclusions about these topics with a
false sense of certainty, overlooking important factors
that could change their decisions. Considering a range of
different philosophical theories enables us to stress-test our arguments
more thoroughly and surface questionable assumptions that may not have
been noticed otherwise. It would be highly inefficient for those
developing AI systems or trying to make them safer to attempt to
re-invent moral systems, without learning from the large existing body
of philosophical work on these topics.<p>
There are many different types of moral theories, each of which
emphasizes different moral values and considerations. Consequentialist
theories like utilitarianism hold that the morality of an action is
determined by its consequences or outcomes. Utilitarianism places an
emphasis on maximizing everyone’s wellbeing. Deontological theories like
Kantian ethics hold that the morality of an action is determined by
whether it conforms to universal moral rules or principles. Deontology
places an emphasis on rights, special obligations, and
constraints.<p>
Below, we explore the most common modern moral theories:
<em>utilitarianism</em>, <em>deontology</em>, <em>virtue ethics</em>,
and <em>social contract theory</em>.</p>
<h2 id="utilitarianism">A.4.1 Utilitarianism</h2>
<p>Utilitarianism is the view that we should do whatever results in the
most overall wellbeing <span class="citation"
data-cites="mill2004utilitarianism">[1]</span>. According to Katarzyna
de Lazari-Radek and Peter Singer, “The core precept of Utilitarianism is
that we should make the world the best place we can. That means that, as
far as it is within our power, we should bring about a world in which
every individual has the highest possible level of wellbeing” <span
class="citation" data-cites="lazari2017utilitarianism">[2]</span>. Under
utilitarianism, the right action in any situation is the one which will
increase overall wellbeing the most, not just for the people directly
involved in the situation but globally.</p>
<h3 id="expected-utility">Expected Utility</h3>
<p><strong>Utilitarianism enables us to use empirical, quantitative
evidence when deciding moral questions.</strong> As we discussed in
the Wellbeing section, there is no
consensus about what, precisely, wellbeing is. However, if we discover
that wellbeing is something measurable, like happiness, moral
decision-making could take advantage of calculation and would rely less
on qualitative argumentation. To determine what action is morally right,
we would simply consider the available options. We might run some tests
or perform data analysis to determine which action would create the most
happiness, and that action would be the right one. Consider the
following example:</p>
<div class="blockquote">
<p><em>Drunk driving</em>: Amanda has had a few alcoholic drinks and is
deciding whether to drive or take the bus home. Which should she
choose?</p>
</div>
<p>A utilitarian could analyze this scenario by listing the possible
outcomes of each choice and determining their impact on overall
wellbeing. We call an action’s impact on wellbeing its <em>utility</em>.
If an action has <em>positive utility</em>, it will cause happiness. If
an action has <em>negative utility</em>, it will cause suffering. Larger
amounts of positive utility represent larger amounts of happiness, and
larger amounts of negative utility represent larger amounts of
suffering. Since no one can predict the future, the utilitarian should
also consider the probability that each potential outcome would
occur.<p>
A simplified, informal, back-of-the-envelope version of this utilitarian
calculation is below:<p>
</p>
<br>
<table class="tableLayout">
<thead>
<tr class="header">
<th style="text-align: center;">Amanda’s action</th>
<th style="text-align: left;">Possible outcome(s)</th>
<th style="text-align: center;">Probability of each outcome</th>
<th style="text-align: center;">Utility</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: center;">Amanda takes the bus.</td>
<td style="text-align: left;">Amanda is frustrated, the bus is slow, and
she has to wait in the cold.</td>
<td style="text-align: center;">100%</td>
<td style="text-align: center;">-1</td>
</tr>
<tr class="even">
<td rowspan="2" style="text-align: center;">Amanda drives home.</td>
<td style="text-align: left;">Amanda gets home safely, far sooner than
she would have on the bus.</td>
<td style="text-align: center;">95%</td>
<td style="text-align: center;">+1</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Amanda gets into an accident and someone
is fatally injured.</td>
<td style="text-align: center;">5%</td>
<td style="text-align: center;">-1000</td>
</tr>
</tbody>
</table>
<caption style="text-align: center;">Table A.1: Illustrative calculation of utility from Amanda's possible actions.</caption>
<br>
<p>
</p>
<p>We are interested in the <em>expected utility</em> of each action—the
amount of wellbeing that each action is likely to result in. To
calculate the expected utility, we multiply the utility of each possible
outcome by the probability of that outcome occurring.<p>
Amanda choosing to take the bus has a 100% chance (a certainty) of
causing a small decrease in utility; she will be slightly
inconvenienced. Since the change in utility is small and negative, we’ll
estimate a small negative number to represent it, like -1. <em>The
expected utility of Amanda taking the bus is 100% <span
class="math inline">×</span> -1, or simply -1.</em><p>
If Amanda drives home, there is a 95% chance that she will get home
safely and create a small increase in utility of, say, +1. However,
there’s also a 5% chance she could cause an accident and end someone’s
life. The accident would result in a very large decrease in utility.
Someone would experience pain and death, Amanda would feel guilty for
the rest of her life, and the victim’s friends and family would
experience loss and grief. We might estimate that the potential loss in
utility is -1000. That’s 1000<span class="math inline">×</span> worse
than the small increase in utility if Amanda gets home safely. <em>The
expected utility of Amanda driving home is the sum of both
possibilities:</em> <span
class="math inline">.95 × 1 + .05 × − 1000</span>, <em>or</em> <span
class="math inline"> − 49.05</span>.<p>
Both of Amanda’s options are expected to yield negative utility, but the
utilitarian would say that she should choose the better of the two
options. Unsurprisingly, Amanda should take the bus.</p>
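<p>The back-of-the-envelope calculation above can also be written as a
short program. The following Python snippet is a minimal, purely
illustrative sketch: the probabilities and utilities are the made-up
numbers from Table A.1, and the helper function
<code>expected_utility</code> is our own naming rather than part of any
standard library. It multiplies each outcome’s utility by its
probability, sums the results for each action, and selects the action
with the highest expected utility.</p>
<pre><code class="language-python"># Illustrative expected-utility calculation for Table A.1.
# Expected utility of an action = sum over outcomes of (probability * utility).

def expected_utility(outcomes):
    """Return the probability-weighted sum of utilities for one action."""
    return sum(prob * utility for prob, utility in outcomes)

# Each action maps to a list of (probability, utility) pairs.
# These numbers are the rough, made-up estimates used in the text.
actions = {
    "take the bus": [(1.00, -1)],                 # certain minor inconvenience
    "drive home":   [(0.95, 1), (0.05, -1000)],   # safe arrival vs. fatal accident
}

for name, outcomes in actions.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.2f}")

# A utilitarian chooses the action with the highest expected utility.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print(f"Best action: {best}")

# Output:
# take the bus: expected utility = -1.00
# drive home: expected utility = -49.05
# Best action: take the bus
</code></pre>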
<h3 id="implications-of-utilitarianism">Implications of
Utilitarianism</h3>
<p><strong>Utilitarianism may sometimes yield results that run against
commonly held beliefs.</strong> Utilitarianism aims at producing the
most wellbeing and insists that this is the only thing that matters.
However, many of the moral values that we have inherited conflict with
this goal. Utilitarianism can be seen as having less of a bias to defend
the moral status quo relative to some other moral theories such as
deontology or virtue ethics. Depending on one’s perspective, this makes
utilitarianism either exciting or threatening.</p>
<p><strong>Utilitarianism can lead to some radical moral
claims.</strong> Utilitarianism’s sole focus on wellbeing can lead it to
promote what have been or are viewed as radical actions. For example,
the founder of utilitarianism, Bentham, argued for decriminalizing
homosexuality, and contemporary utilitarians have argued we have a much
greater obligation to give to charity than most of us seem to
believe.<p>
Bentham held many beliefs that were ahead of his time. Written in 1785,
in a social and legal environment very hostile to homosexuality,
Bentham’s essay “Offences against oneself” rebuts the arguments that
legal scholars had used to justify laws against homosexuality <span
class="citation" data-cites="bentham1978offences">[3]</span>.</p>
<p><strong>Today, many utilitarians believe that we should prioritize
helping people in low-income countries.</strong> Utilitarianism
continues to make recommendations that today’s society finds
controversial. Consider the following example:<p>
</p>
<div class="blockquote">
<p>On her morning walk through the park, Carla sees a child drowning in
the pond. She is wearing a new suit that she bought the day before,
worth $3,500. Should she dive in to save the child, even though she
would destroy her suit? <span class="citation"
data-cites="singer2017famine">[4]</span><p>
</p>
</div>
<p>The philosopher Peter Singer, who first posed this question, argues
that Carla should dive in. Furthermore, he argues that our judgment in
this case might mean that we should re-evaluate our obligation to donate
to charity. There are charities that will save a child’s life for around
$3,500. If we should forgo that amount in order to save a child who is
right in front of us, shouldn’t we do the same for children across the
world? Singer argues that distance is not relevant to our moral
obligations. If we have an obligation to a child in front of us, we have
the same obligation to similar children who may be far away.<p>
To maximize global wellbeing, Singer says that we should give our money
away up to the point where a dollar would be better spent on ourselves than on
charity. If our money helps others more than it can help us,
there isn’t a utilitarian reason to keep it. For an adult making, say,
$50,000 per year, an extra $3,500 would be helpful, but is not critical
to their wellbeing. However, for someone making less than $3 per day in
a low-income country, $3,500 would be life-changing—not just for one
recipient, but for that person’s entire family and community. Singer
argues that, if giving money away can significantly help someone else,
and if giving it away would not be a significant sacrifice, we should
give the money to the person who needs it most.<p>
These conclusions imply that most of us (especially those of us in
high-income countries) should live very different lives. We should, for
the most part, live as inexpensively as possible and donate a
significant portion of our income to people in lower-income
communities.</p>
<h3 id="utilitarianisms-central-claims">Utilitarianism’s Central
Claims</h3>
<p>Utilitarianism can be distinguished from other ethical theories by
four central claims.</p>
<p>Utilitarianism is a form of consequentialism. Any theory that claims
that the consequences of an action alone determine whether an action is
right or wrong is <em>consequentialist</em>. Other theories, as we will
discuss later in this chapter, claim that some actions are right or
wrong regardless of their consequences.</p>
<p>Utilitarians believe that the only type of consequences that make an
action right or wrong are those that affect happiness or wellbeing. In
that sense, utilitarianism can be understood as a combination of
consequentialism and hedonism, as we discussed it in the Wellbeing section. Recall
that there are several different accounts of wellbeing, all of which are
compatible with utilitarianism.</p>
<p><strong>Classical utilitarianism.</strong> Most utilitarians are
hedonists about wellbeing; they believe that wellbeing is a function of
pleasure and suffering. Such utilitarians are called classical utilitarians. When
classical utilitarians say they want to improve wellbeing, they mean
that they want there to be more pleasure and less suffering in the
world.</p>
<p><strong>Preference utilitarianism.</strong> In contrast to classical
utilitarians, preference utilitarians believe that wellbeing is
constituted by the satisfaction of people’s preferences.<p>
The preference account of wellbeing is one of the many modifications of
classical utilitarianism. While we will not describe these other
theories in detail, it is useful to know that if we disagree with one
aspect of classical utilitarianism, there is often another utilitarian
or consequentialist theory that can accommodate our beliefs.</p>
<p><strong>Utilitarians believe that people have the same intrinsic
moral worth.</strong> Bentham exemplified utilitarian thought with the
phrase “Each to count for one and none for more than one.” People of
different classes, races, ethnicities, religions, abilities, and so on
are of equal moral worth. In other words, utilitarianism is an
<em>impartial</em> moral theory.</p>
<p><strong>For an individual to deserve moral treatment, they just need
to be capable of having wellbeing.</strong> According to Bentham, “The
question is not, Can they reason? nor, Can they talk? but, Can they
suffer?” This quote is often taken to mean that we should be concerned
with the wellbeing of animals, since animals feel pleasure and pain just
like humans. Similar positions are held by other utilitarians such as
Peter Singer <span class="citation"
data-cites="singer1981expanding">[5]</span>. If, in the future, AI
systems develop a capacity for wellbeing, they would deserve moral
treatment as well according to classical utilitarians.</p>
<p><strong>Utilitarians aim to maximize wellbeing.</strong> Utilitarians
do not think it is sufficient to perform an action with good
consequences; they think the only right action is the one with the best
consequences. They do not believe in options. The following example
illustrates this distinction.<p>
</p>
<div class="blockquote">
<p>Dorian has a choice: teach biology or research air quality. As a
teacher, he would help hundreds of students. As a researcher, he would
save thousands of lives. He enjoys teaching somewhat more than research.
What should he choose?<p>
</p>
</div>
<p>A utilitarian might argue that Dorian should become a researcher. In
this case, he knows that he will do more good. This is despite the fact
that Dorian would be a great teacher, and would have a positive impact
as a teacher. He would do more good through his job as a public health
researcher, so a utilitarian might argue that he is obligated to take
that option.<p>
The best option is always the one that maximizes wellbeing. This is a
straightforward result of valuing everyone’s wellbeing impartially and
always striving to do the best rather than the merely good.<p>
In summary, utilitarianism makes several claims: wellbeing is the only
intrinsic good, wellbeing should be maximized, wellbeing should be
weighed impartially, and an action’s moral value is determined by its
consequent effects on wellbeing. Utilitarianism teaches that the best
action we can take is the one that has the greatest positive effect on
wellbeing.</p>
<h3 id="common-criticisms-of-utilitarianism">Common Criticisms of
Utilitarianism</h3>
<p>While utilitarianism remains a popular moral theory, it is not
without its critics. This section explains some of the most common
objections to utilitarianism.</p>
<p>Many philosophers argue that utilitarianism is too demanding <span
class="citation" data-cites="scheffler1994rejection">[6]</span>. It
insists that we choose the best actions, rather than merely good ones.
As we saw in our discussion of the drowning child and our obligations to
the global poor, this can lead utilitarianism to recommend
unconventionally large commitments.<p>
According to this criticism, utilitarianism asks us to give up too much
of what we take to be valuable for the sake of other people’s wellbeing.
Perhaps we should quit a career that we love in order to work on
something that does more good, or we should not buy gifts for family and
friends if the money would produce more wellbeing when given to someone
suffering from a preventable disease. To live up to these demands, which run counter to many everyday values, we would have to radically change our lives, and continue to change them as the global situation evolved. The critic thinks that this is too much to reasonably ask of someone. A moral theory, they think, should not make living a moral life so challenging.<p>
A utilitarian can respond in two ways. The first way is to argue that,
while utilitarianism is theoretically demanding, it is practically less
so. For example, someone trying to live up to the theoretical demands of
utilitarianism might burn out, or harm the people around them with their
indifference. If they had asked less of themselves, they might have done
more good in the long run. Utilitarianism might even recommend acting
almost normally, if acting almost normally is the best way to maximize
wellbeing.<p>
However, it is unlikely that this response undermines the argument that we should give some portion of our money to charity. Even if donating most of our income would backfire, most people should likely donate more than they do. Many utilitarians take a second route and simply accept that their theory is demanding. Utilitarianism does demand a lot of us, and until the critic shows that these demands are not morally required of us, we might just live in a demanding world. While demanding too much of ourselves can be counterproductive, we should do far more than we currently do.</p>
<p>Another way of critiquing utilitarianism is to say that even if the
theory is consistent and appealing, it isn’t useful because we rarely
know the consequences of our actions in advance.<p>
When we illustrated a utilitarian calculation above using the case of
drunk driving, we intentionally simplified the situation. We considered
only a few possible immediate outcomes and we estimated their possible
likelihoods. In the real world, however, someone considering whether to
drive home faces unlimited possible outcomes, and those outcomes could
cause other events in the future that would be impossible to predict.
Moreover, we rarely know the probabilities of the effects of our
actions. Utilitarianism would be impractical if it required us to make a
long series of predictions and calculations for every choice we face.
Certainly, we shouldn’t expect Amanda to do so in the moment.<p>
In response to this criticism, a utilitarian might differentiate between
a criterion of rightness and a decision-making procedure <span
class="citation" data-cites="bales2023act">[7]</span>. A <em>criterion
of rightness</em> is the factor that determines whether actions are
right or wrong. According to utilitarianism, the criterion of rightness
is whether an action maximizes expected wellbeing compared to its
alternatives. In contrast, a theory’s <em>decision-making procedure</em>
is the process it recommends individuals use to make decisions.
Crucially, a theory’s decision procedure does not need to be the same as
its criterion of rightness.<p>
For example, a utilitarian would not likely advise everyone to make
detailed calculations before getting in the car after having a couple of
drinks. Most utilitarians would advise everyone to simply never drive
drunk. There’s only a need to consider a utility calculation in cases
where the best option is particularly unclear. Even then, such
calculations are only approximate and should not necessarily be
decisive. Just as corporations try to maximize profit without consulting
a spreadsheet for every decision, utilitarians might follow certain
rules of thumb without relying on utility calculations.<p>
In practice, utilitarians rely on robust heuristics for bringing about better consequences and rarely consult explicit calculations. To improve the world, they, like others, often cultivate virtues such as honesty, politeness, and fairness. They often imitate practices that have stood the test of time, even if they do not fully understand their rationale. That is because some practices may rest on complex or obscure reasons that are not easily discerned or amenable to explicit calculation. They often bear in mind Chesterton’s fence, which warns against removing a barrier without knowing why it was erected in the first place. Even though their criterion of rightness may be controversial, the decision procedures utilitarians adopt are often quite conventional.</p>
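<p>The distinction between a criterion of rightness and a decision procedure can also be illustrated with a short sketch. The code below is not drawn from the text: the probabilities and wellbeing values are hypothetical, and the “never drive drunk” rule simply stands in for the kind of heuristic a utilitarian decision procedure might rely on.</p>
<pre><code># A minimal sketch of the criterion of rightness vs. decision procedure
# distinction. All probabilities and wellbeing values are hypothetical.

def expected_wellbeing(outcomes):
    """Probability-weighted sum of the wellbeing produced by an action's outcomes."""
    return sum(p * w for p, w in outcomes)

# Hypothetical (probability, wellbeing) pairs for each action.
actions = {
    "drive home drunk": [(0.99, 5), (0.01, -10000)],  # small chance of catastrophe
    "take a taxi":      [(1.00, 2)],                  # mild cost, no risk
}

def criterion_of_rightness(actions):
    """The right action is the one that maximizes expected wellbeing
    compared to its alternatives."""
    return max(actions, key=lambda a: expected_wellbeing(actions[a]))

def decision_procedure():
    """What a utilitarian would actually advise in the moment:
    a simple rule of thumb, with no explicit calculation."""
    return "take a taxi"  # i.e. "never drive drunk"

# Here the rule of thumb agrees with the explicit calculation, which is
# precisely what makes it a good decision procedure.
assert criterion_of_rightness(actions) == decision_procedure()
</code></pre>
<p>The point of the sketch is that the function doing the calculating is not the one a utilitarian consults when deciding; the rule of thumb does the everyday work, and the criterion only explains why that rule is a good one.</p>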
<p>Many philosophers argue that utilitarianism neglects sources of value
other than wellbeing. One famous argument meant to show that wellbeing
isn’t the only source of value is Robert Nozick’s “Experience Machine”
<span class="citation" data-cites="nozick1974anarchy">[8]</span>. Nozick
considers the following thought experiment:<p>
</p>
<div class="blockquote">
<p>“Suppose there were an experience machine that would give you any
experience you desired. Superduper neuropsychologists could stimulate
your brain so that you would think and feel you were writing a great
novel, or making a friend, or reading an interesting book. All the time
you would be floating in a tank, with electrodes attached to your brain.
Should you plug into this machine for life, preprogramming your life’s
experiences?”<p>
</p>
</div>
<p>Nozick claims that we would decline this offer because we care about
the reality of our actions. We do not just want to feel that we have
cheered up our friend; we actually want them to feel better. We do not just want the experience of writing a great work of literature; we want great
literature to exist because we worked on it. Many philosophers consider
this a decisive rebuttal to the idea that wellbeing is the only thing
that matters.<p>
Though many people say that they would prefer not to use the machine
when it is introduced as above, they may have a different reaction when
the thought experiment is presented differently.<p>
</p>
<div class="blockquote">
<p>“You wake up in a plain white room. You are seated in a reclining
chair with a steel contraption on your head. A woman in a white coat is
standing over you. ‘The year is 2659,’ she explains, ‘The life with
which you are familiar is an experience machine program selected by you
some forty years ago. We at IEM interrupt our clients’ programs at
ten-year intervals to ensure client satisfaction. Our records indicate
that at your three previous interruptions you deemed your program
satisfactory and chose to continue. As before, if you choose to continue
with your program you will return to your life as you know it with no
recollection of this interruption. Your friends, loved ones, and
projects will all be there. Of course, you may choose to terminate your
program at this point if you are unsatisfied for any reason. Do you
intend to continue with your program?” <span class="citation"
data-cites="greene2013moral">[9]</span><p>
</p>
</div>
<p>Joshua Greene, the author of this example, supposes that most people
would not want to leave the program. He suggests that what accounts for
the seeming difference between his and Nozick’s versions is the
<em>status-quo bias</em>. People tend to prefer the life they know.
Surveys of real people’s responses to these thought experiments indicate
that a range of factors, including the status-quo bias, affect their responses. Nozick’s example is not as clear-cut as his argument
supposes.<p>
In summary, utilitarianism is often criticized in three ways. People