A detailed synopsis of Peter Singer’s Practical Ethics

When I took the philosophy class based on this book in college, I wasn’t mature enough to take the subject seriously. After casually picking the book up again more than a decade later, I’ve realized that I would have been a better and more thoughtful person had I done so. I’ve written this synopsis to allow me to refresh myself on the book’s arguments over the course of my life. It’s intended to be useful as a substitute for reading the book as well. I find almost all of the arguments in this book convincing, and if you disagree with any of the conclusions here, I encourage you to read the full arguments in the book. Singer is one of the most eloquent and careful philosophers I’ve encountered, and reading his work is enjoyable. This book is also full of references to real cases in which the ethical theories apply, which I have mostly omitted here in the interest of space.

Page numbers refer to the 1993 2nd edition paperback. My own thoughts are in the footnotes. This document is about 16 pages long in print.

1. About Ethics (“morality”)

  • Ethics is not (1):
    • Concerned in particular with sexuality
    • Impractical. Consequentialists (including the author) are concerned only with real-world effects.
    • Defined by religion. Ethical theories need not refer to religion.
    • Relative. “X is wrong” does not mean “My society disapproves of X.” Otherwise, all non-conformist moral opinions would be factually incorrect, and the idea of moral progress would be nonsensical.
    • Subjective. “X is wrong” does not mean “I disapprove of X.” If it did, no one could disagree about ethics.
  • Ethics is (for the purpose of this book) (8):
    • Based in reason.
    • Universal. Ethical principles and judgments are the same from all points of view. They do not contain the concept of “I”, and therefore give equal weight to equal interests, regardless of who holds the interests. This equal consideration of interests leads naturally to a theory of “interest utilitarianism”, in which ethical judgments are calculations of interest maximization. (As opposed to classical utilitarianism, which restricts interests to pleasure.) Other universal ethical theories (rights, justice, etc.) require further steps to justify. This book is written assuming an interest utilitarian viewpoint, which the author holds and will not further defend here. The implications of other popular ethical frameworks are also considered in specific cases.

2. Equality and its implications

  • The basis of equality (16) – What does it mean to say in a moral sense that all humans are equal?
    • It does not mean that all humans are equal in all senses – people differ in height, athletic ability, IQ, etc.
    • It does not mean that all humans possess a “moral personality” (a la Rawls) – infants and some intellectually disabled humans have little or no concept of morality.
    • It means the “principle of equal consideration of interests” – that all who are affected by our actions should have their interests weighed evenly.
      • This principle prohibits consideration of anything except interests in moral judgments. Individual traits, e.g. race, sex, intelligence, are only to be considered insofar as they affect the individual’s interests involved in the decision.
      • This principle does not entail equal treatment, for one person’s interests may be more sensitive[1] or better served than another’s in a given situation.
      • The principle of declining marginal utility will generally (but not always) incline us to distribute treatment so as to more closely equalize wealth/utility.
  • Equality and genetic diversity (26)
    • Race: If different races have different average IQs, this does not justify consideration of race in moral decisions because 1) individual variance is higher than group variance; and 2) it does not appear that normal variance in IQ significantly affects the level of a person’s interests, e.g. ability to feel pain, sorrow, joy, excitement, etc.
    • Sex: For the same reasons listed above, average sexual differences in psychological traits such as aggression or visual or verbal ability do not justify sexual discrimination in moral judgments.
  • From equality of opportunity to equality of consideration (38)
    • The Western satisfaction with wealth disparity in the context of equal opportunity is flawed because 1) equal opportunity is not realistic due to differences in education quality and upbringing, and 2) truly equal opportunity leaves welfare up to the luck of genetically inherited ability. Ability does not strongly track interests, so it is not a good basis for a moral framework of economic justice.
    • The socialist slogan “From each according to his ability, to each according to his needs” seems the most ethical economic system from a principle of equal consideration of interests, but has major practical problems in implementation. Extreme taxation causes emigration and tax evasion, and the elimination of private enterprise causes black markets. No economic solution is proposed here, only the suggestion that we work towards wider recognition of the principle.
  • Affirmative action (44)
    • It may make sense to focus on racial and sexual inequality because of their highly divisive nature to society. Given that true equality of opportunity is impractical to realize, a more practical method of combating inequality (and thereby maximizing total interests) may be preferential treatment of disadvantaged groups (“affirmative action”, or “reverse discrimination”), e.g. in university admissions.
    • Objections to affirmative action in university admissions:
      • It violates the rights of those discriminated against.
        • Universities do not base admissions on the rights of applicants. They discriminate explicitly on intelligence, which reflects not an applicant’s rights (or interests), but the university’s goals. Racial discrimination is just another form of discrimination based on university goals – in this case racial equality. In order to criticize a university’s admissions policy, we must criticize the goals it is trying to achieve, not the rights that it violates.[2]
      • It gives less weight to the interests of majority members.
        • Racial discrimination against minorities in the past often has been about giving less weight to their interests. But society as a whole has strong interests in diversity and equality, so reverse discrimination in pursuit of those goals need not be about giving less weight to the interests of majority members.
      • It will fail to promote equality, because it reinforces inferiority stereotypes and normalizes a system of racial discrimination that could be vulnerable to abuse in the future.[3]
        • This is a strong argument. Affirmative action may indeed fail, but in the absence of good alternatives, it seems worth a try.
  • Equality and disability (52) – We should take care to consider the interests of physically and intellectually disabled people, for often their disability affects what they can do, but not the level of their interests. Reverse discrimination may in fact be more effective for the disabled than for other disadvantaged groups because their central needs are often hindered by their disabilities, and therefore their interests can be benefited to a greater degree.

3. Equality for animals?

  • Racism and Speciesism (55) – Just as the principle of equal consideration of interests does not count race as something to be weighed in moral decisions,[4] neither does it count species. We must consider suffering as an interest wherever it can occur, and consider like suffering equally with like suffering. This means that animals must be part of our consideration, although different species will differ in the level of suffering that they can experience.
  • Speciesism in practice (62)
    • Animals as food – In developed societies, meat is a luxury, not a necessity. When considering the ethics of meat, we are thus comparing our interest in luxury to the animals’ interest in welfare. The factory farming system mistreats animals so badly for the entirety of their lives that this calculation is usually clear, and we should not support this farming system.
    • Experimenting on animals – Many cruel experiments are performed on animals for product safety tests or general research. These tests can be justified if they provide great benefit to humans, but often the utility calculation is contaminated by speciesism. If the animals involved were replaced by orphaned humans with permanent brain damage and a similar level of capacity to suffer, most of these experiments would cease.
    • Other forms of speciesism include our attitudes toward the fur trade, hunting, circuses, rodeos, zoos, and the pet business.
  • Some objections (68)
    • How do we know that animals can feel pain?
      • They behave in the same way that we do when in pain.
      • They share our nervous system.
    • Animals eat each other, so why shouldn’t we eat them?
      • Most carnivores must hunt to survive; we do not need to.
      • We consider other behaviors of animals “beastly” and subhuman, not to be emulated.
      • Animals are incapable of considering their alternatives and making ethical decisions.
      • Even if eating meat is evolutionarily natural for humans, this does not show that it is ethical, as many evolutionarily natural behaviors are unethical (e.g. rape, murder).
    • Differences between humans and animals
      • It may be the case that most animals, although sentient, are not self-conscious or autonomous. There is no clear reason, however, to think that this entails that their interests (such as suffering) are less important and should be considered with less weight. Some intellectually disabled humans lack these traits, yet we do not feel that it is permissible to subject them to experimentation or force-feeding.
    • Ethics and reciprocity
      • Contractualists consider ethics to apply only to parties to a social contract of reciprocal behavior, which would exclude animals. Even if the origin of human ethics is (dubiously) contractual in nature, this does not justify it. This system also excludes humans incapable of consenting to reciprocal contracts, such as young children, the severely intellectually disabled, and future generations.

4. What’s wrong with killing?

  • The ethics of taking life is less straightforward than the ethics of suffering.
  • Human life (83)
    • In judging the ethics of killing, there are two definitions of the term “human being” we can use:
      • “Member of the species Homo sapiens”
      • “Person” – a rational and self-conscious being
    • The value of a Homo sapiens’ life
      • As discussed earlier, species is not a relevant boundary for ethical judgments, and we should not base our judgments about killing on it. The Western orthodoxy to the contrary comes from the influence of Christianity on European civilization.
    • The value of a person’s life
      • Some useful distinctions:
        • Critical (act) vs. intuitive (rule) utilitarianism. The former suggests that we should always act in a way that maximizes utility; the latter that we should act in accordance with rules that almost always maximize utility. The argument for intuitive reasoning is practical: we often do not have the information or time to accurately determine which act will maximize utility.
        • Classical vs. preference utilitarianism. The former counts only conscious states such as pleasure, happiness, and their opposites in moral calculations. The latter includes in addition the satisfaction or thwarting of preferences. The author’s “interest utilitarianism” is most similar to the latter.
      • Killing a person thwarts her desires.
        • To a classical utilitarian, this is not directly morally relevant.[5] (The fact that the future pleasure of life is prevented is relevant but not special to persons.)
        • To a preference utilitarian, this is morally wrong.
      • Killing a person may make other persons anxious about being killed themselves.
        • This is morally bad according to both classical and preference utilitarians.
        • It does not apply to secret killings. An intuitive utilitarian may however still argue that one should act according to a blanket rule against killing.
    • Does a person have a right to life?
      • Not according to a utilitarian perspective. It is, however, an extremely popular idea, and will be considered throughout the book.
      • On one relatively sensible theory, one has a right to life if one has or at one time had a desire for continued existence. This applies to sleeping persons and persons not currently thinking about such a desire, but not to non-self-conscious animals, fetuses, or infants.
    • Respect for autonomy
      • The killing of an unwilling person is violation of that person’s autonomy. An absolute right to autonomy does not arise from a utilitarian viewpoint, but this also is a very popular concept and will be considered throughout the book.
      • Although rights to life and autonomy are non-utilitarian concepts, from a practical intuitive stance, a utilitarian is likely to endorse rules that assign such rights.
    • Summary – Reasonable objections to killing a person arise from four different theories of the value of life:
      • It thwarts the person’s preferences (Preference utilitarianism)
      • It puts other persons in a state of fear (Classical utilitarianism)
      • It violates the person’s right to life (Theory of rights)
      • It violates the person’s autonomy (Theory of autonomy)
  • Conscious life (101) – Are there principles that apply to the broader category of conscious beings, including both persons and non-persons (e.g. infants, most animals)?
    • Should we value conscious life?
      • Conscious life can contain happiness and suffering. To any utilitarian, increasing happiness is good, and increasing suffering is bad.
      • This concept is not straightforward when applied to the creation and destruction of life. There are two main viewpoints on the issue:
        • The “total view” – The experience of all conscious beings, existing or potential, should be taken into account. This has the awkward implication that there is a moral obligation to have as many happy children as we can, since choosing not to have a happy child reduces happiness just as much as killing a happy child.
        • The “prior existence view” – Only the experience of already existing beings should be taken into account. This has the awkward implication that it is not wrong to choose to conceive a child who one knows will have a miserable life (e.g. due to a genetic defect).[6] To avoid this implication, one must somehow defend the claim that future happy lives don’t count, but future miserable lives do.[7]
      • Ending a pleasant life is wrong on either view if nothing else is involved in the decision.
    • Comparing the value of different lives
      • If a life contains a higher degree of happiness or preference, it should be given higher value in a utilitarian decision. In what appears to be a continuous scale of capacity for consciousness across the animal kingdom, we can say that the lives of more complex or high-functioning beings are probably more valuable than the lives of others. The calculation is difficult because we do not have a precise way to measure the level of consciousness of other beings. A ranking seems possible in principle, but it should be noted that it need not coincide with species boundaries.

5. Taking life: Animals

  • Can a non-human animal be a person? (110) – Yes. Many experiments have demonstrated that chimps, gorillas, and orangutans are self-conscious and rational to a significant degree. These include demonstrations of forward planning, cooperation, deception, and self-reference in social behavior, puzzle solving, and the use of human sign language. Some evidence of these things also exists for other animals, such as dolphins, whales, dogs, cats, and pigs.
  • Killing non-human persons (117) – Species is morally irrelevant, and the same reasons against killing human persons hold for non-human persons. We should avoid killing such animals without strong reasons to do so.
  • Killing other animals (119) that are conscious but non-self-conscious[8]
    • Killing conscious creatures can be wrong indirectly if done inhumanely, or in a way that causes suffering to other animals that remain.
    • It can be wrong directly if it eliminates happiness that would otherwise exist. But what if the animal is replaced?
    • Replaceability – Is harm done if a being is killed painlessly and without side effects, and then replaced with a similar being? On the “prior existence view”, the answer is yes. On the “total view”, the answer is not straightforward.
      • For conscious but non-self-conscious beings, it seems not to be a problem. It can be compared with the lapse in consciousness that occurs during sleep. Since such beings are not self-conscious, it does not matter that when consciousness resumes, identity has changed.
      • The situation is different for self-conscious beings (persons), who have preferences which are thwarted when they are killed. Creating a new being with new preferences does not seem to make up for the loss. The calculation comes down to how much value we assign to the package deal of creating and then satisfying preferences, which seems to depend on the details of the specific preferences.[9] We do not want to create a headache so that we can satisfy it with pills, but we may want to create a libido so that we can satisfy it with sex. In general it would seem that in the presence of preferences, existing persons are not replaceable.
  • Conclusions (131)
    • We should be careful with the conclusion reached here that it is not wrong to kill some animals if they are then replaced. It applies only to animals that are not self-conscious and are killed painlessly and without side effects such as distress to family members. This ideal situation is rare in practice and is far from the factory farming system used in developed societies.

6. Taking life: The embryo and the fetus

  • The conservative position (138): It is wrong to kill an innocent human being, which a fetus becomes at conception. At what other point could it gain this status?
    • Birth: No, since some prematurely born infants are less developed than some unborn ones.
    • Viability (used in Roe v. Wade): No, since the advent of viability depends on the state of medical technology in the mother’s society.
    • Quickening (the first noticeable movement of the fetus): No, since we do not say that paralyzed adults are no longer human beings.
    • Consciousness: No, since there is no reason to think that consciousness has a sharp line of onset.
  • Some liberal arguments (143) against the conservative position, all of which fail:
    • “Outlawing abortion drives it underground.” This may be true, but is not an argument that abortion is not wrong.
    • “Abortion is a victimless crime.” If abortion is murder, this is clearly false.
    • “Even if abortion is murder, a woman has a right to do as she wishes with her own body.” This claim is based on a theory of autonomy rights and runs counter to the consequentialist view, which would hold that if refusing to undergo 9 months of pregnancy would cause a person’s death, then doing so would be unethical because the value of the life outweighs the inconvenience.
  • The value of fetal life (149)
    • The problem with the conservative position is what is meant by the term “human being”. If we mean “Homo sapiens”, then the first premise is unfounded, since species has no moral significance. If instead we mean “person”, then the first premise does not apply to a fetus at any stage, since fetuses are not rational and self-conscious.
    • Instead of trying to sharply categorize the fetus, we should think directly in morally relevant terms: interests. If the fetus has interests, such as the ability to experience pain, we should weigh its interests equally with the like interests of any conscious being. Many farm animals are more mentally developed than a late-stage human fetus, and if our interest in a tasty meal outweighs their interest in life, then the strong interests involved in an abortion decision may outweigh a fetus’s interest in life.
  • The fetus as potential life (152) – The conservative position can be modified by replacing “innocent human being” with “potential person”. Is it wrong to kill a potential person?
    • None of the four reasons against killing persons from Chapter 4 / Human Life apply to potential persons.
    • The total view of utilitarianism suggests that it could be wrong if it deprives the world of a valuable future being, but this argument does not apply to abortions by parents who will have a child later instead, and it applies to contraception and abstinence as equally as to abortion.
  • The status of the embryo in the lab (156) – Does embryonic research harm a human being?
    • Like a fetus, an embryo is not a person and its status as Homo sapiens is irrelevant.
    • An embryo up to 14 days old can split into two embryos – identical twins – so it does not make sense to think of it as an individual human being.
    • Unlike a fetus, a lab embryo will not survive if left alone (in case we think viability is a morally relevant aspect of abortion).
    • With IVF technology, an egg and some sperm in a dish are about as likely to become a person as an embryo in a dish is. If we consider the “potential person” argument against experimentation, we should be equally concerned about not harming eggs and sperm that are in proximity, which is implausible.
  • Making use of the fetus (163) – There is promising research indicating that transplants of fetal tissue could help cure some serious illnesses such as Parkinson’s and Alzheimer’s, or save the life of another fetus. Is it wrong to allow the use of an aborted fetus for this purpose? This depends on how the interests of the fetus compare to the other interests involved.
    • If the fetus is not conscious, it has no interests. Prior to 18 weeks, a fetus lacks the brain structures associated with pain and consciousness, so prior to this time the fetus cannot be wronged by its abortion or use. (This assumes that it is not allowed to develop into a disabled child.) After this time, it could be wrong to do these things.
    • The allowance of use of fetal tissue in medicine could harm women by putting them into situations where they are pressured to abort a pregnancy (and perhaps begin one) to provide such tissue for the good of society or the good of loved ones. Care must be taken to avoid such coercion.
  • Abortion and infanticide[10] (169) – Birth does not mark a special point in the development of an infant’s interests. Therefore, in situations where late stage abortion is justified, early stage infanticide is also justified. We reel from this thought for historical reasons rooted in Christianity that are irrelevant to ethics.
    • Like fetuses, very young infants (< 1 month, conservatively) are not persons (self-conscious, rational beings), so the reasons against killing persons in Chapter 4 / Human Life do not apply. As in the case of mature fetuses, the reasons against killing conscious beings do apply.
    • The difficulty in drawing a line at the point when an infant becomes a person does not justify drawing an obviously wrong one (e.g. birth). In practice it may be prudent to draw the legal distinction at birth, but this is not an ethically defensible distinction, and it may in fact be more prudent to draw it a short time after birth (e.g. 1 month), as will be discussed in the next chapter.

7. Taking life: Humans

  • Euthanasia is the killing of those who are incurably ill and in great pain or distress, for the sake of sparing their suffering.
  • Types of euthanasia (176)
    • Voluntary – carried out at the request of the person
    • Involuntary – carried out with intentional disregard to the wishes of the person
    • Non-voluntary – carried out on a human incapable of understanding the choice between life and death, who has not previously indicated his wishes. This includes infants and some severely disabled adults.
  • Justifying infanticide and non-voluntary euthanasia (181)
    • Infants: Infant euthanasia is a type of non-voluntary euthanasia. As discussed last chapter, just like a fetus, a young infant is not a person and cannot be said to have a right to life, disabled or not. A decision about infanticide comes down to weighing all of the interests involved.
      • In the case of infants disabled so badly that their lives will not be worth living (e.g. spina bifida), the calculation is straightforward, and it would likely be wrong to allow the child to live for the sake of satisfying the desires of others.
      • In the case of infants disabled badly, but not so badly that their lives would not be worth living (e.g. hemophilia), the calculation is less straightforward. The “prior existence” view indicates that killing this infant would be wrong, but the “total” view does not in the case that the child will be replaced with another. It is common for a woman to abort a fetus with a pre-natal diagnosis of hemophilia in order to have another child without the disability, and if we are comfortable treating fetuses as replaceable (see Chapter 5 / Killing other animals), we should be similarly comfortable treating newborn infants as such, since neither are persons capable of self-consciousness.[11] (If adoption is a realistic option, then this replaceability argument does not hold.) Some disabilities go undetected until birth, so this conclusion has practical import.
    • Other non-voluntary life and death decisions: Many accident victims who are in vegetative states or comas with no hope for recovery have not previously expressed a wish to be euthanized in this condition. Such humans are no longer persons (self-conscious and rational), so cannot be said to have a right to life. The decision to continue or end their lives should be made on the basis of any conscious experiences they may continue to have. If they are unconscious, their lives have no intrinsic value, and if the balance of their experience is negative, it would be against their interests to continue life support.[12]
  • Justifying voluntary euthanasia (193) – Some persons with incurable diseases wish to die, but cannot kill themselves. Should a doctor be able to end their lives?
    • Is it wrong for a doctor to do so? Do the reasons from Chapter 4 against killing persons apply in such cases?
      • The concern that a policy allowing this would put other people in fear of their lives does not apply.
      • Concern for the satisfaction of a person’s preferences counts in favor of voluntary euthanasia.
      • The theory of a right to life based on a person’s desire for continued existence does not apply.
      • Respect for autonomy counts in favor of voluntary euthanasia.
    • A policy allowing voluntary euthanasia must be equipped to prevent exploitation resulting in the killing of persons not truly wishing to die, for example through pressure by family members, temporary insanity, or outright murder disguised as euthanasia. It is likely that some such cases would slip through the cracks of any real policy, but these cases are to be weighed against the alternative of forcing a much larger number of persons in terrible and permanent pain to continue living against their will.
  • Not justifying involuntary euthanasia (200) – If the person wishes not to die, then the reasons against killing discussed above all apply, and euthanasia is not justified.
  • Active and passive euthanasia (202) – It is common humane medical practice to allow infants with severe disabilities and disabled adults with severe complications to die instead of artificially prolonging their lives. If it is right to allow them to die, why is it wrong to kill them? We tend to perceive a moral difference between acts and omissions, but this is not justified in an ethic based on consequences, since an act and an omission often have the same consequences. In many cases, the consequences of painlessly killing are in fact better than those of simply allowing the patient to die. In such cases, we are often more humane to animals than we are to people.
  • The slippery slope: From euthanasia to genocide? (213) – Some worry that allowing active euthanasia could remove a clear-cut line against killing and eventually lead to unjustified killing on a mass scale similar to what occurred during the Holocaust. It is hard to see how euthanasia laws based on respect and concern for the interests of the patients could lead to policies of killing them against their will. A euthanasia policy based on sound ethical principles may in fact provide society with firmer ground for resisting unjustified killing.

8. Rich and Poor

  • Some facts about poverty (218) – Approximately one quarter of the world’s population lacks the resources to meet basic biological needs for food, clothing, and shelter. Over 10 million children under the age of five die every year from malnutrition and infection.
  • Some facts about wealth (220) – North Americans consume 900 kg of grain per capita annually, compared to 180 kg in poor countries. The difference comes from feeding grain to animals for meat. The world produces enough food to feed everyone – the problem is in distribution and waste. The UN has set an aid target for wealthy nations of 0.7% of GNP. The UK gives 0.31% (compared with 5.5% spent on alcohol), and the US gives 0.15%.
  • The moral equivalent of murder? (222) – We routinely choose to spend money on luxuries instead of giving it to people whose lives it would save. If acts and omissions are morally equivalent, is this equivalent to murder? Examining some relevant differences shows that it is not, but that it is more serious than most of us imagine.
    • Motivation: By not giving, we do not intend to kill anyone. This is unlike murder, but is similar to the case of manslaughter by a speeding driver who enjoys driving fast and is indifferent to the consequences.
    • Difficulty: It is much easier to avoid murder than to save all the lives we can. This does not make a difference to the consequences, but it does make a difference to how much we can blame people without being counterproductive.
    • Certainty: Murder is more certain to end a life than giving is to save one. This matters, but not giving can again in this sense be compared to the case of a reckless driver who speeds through crosswalks without caring about the consequences.
    • Identifiability: If we murder a person, we can identify the victim; if we give aid, we often cannot identify the beneficiary. This makes no moral difference. If a salesperson sells products that he knows will cause an illness fatal to a percentage of buyers, we hold him responsible even though he cannot identify who will die.
    • Responsibility: Unlike in the case of murder, we have not created the situation in which a hungry person dies. From a consequentialist perspective, this does not matter. It does make a difference on a theory of rights that requires us only to avoid actively harming others, but such a theory is difficult to justify.
  • The obligation to assist (229) – If I walk by a child who is drowning in a shallow pond, and in order to save him I must ruin my clothes, it is obvious that I should save him. What is the morally relevant difference between this situation and the actual current situation in which I can save a starving person overseas with a similarly trivial sacrifice?
    • The argument for an obligation to assist
      • Premise 1: If we can prevent something bad from happening without sacrificing anything of comparable significance, we ought to do it.
      • Premise 2: Extreme poverty is bad.
      • Premise 3: There is some extreme poverty that we, as individuals, can prevent without sacrificing anything of comparable moral significance.
      • Conclusion: We ought to prevent some extreme poverty.
    • Objections to the argument
      • Taking care of our own: The principle of equal consideration of interests rejects our instinct to favor those close to us. Giving our children only as much as we give each starving African child could be psychologically devastating and practically disastrous, but the argument does not require this – only that we make sacrifices insignificant compared to the harm that we can prevent.
      • Property rights: If we believe that we have a right to our legally obtained property, and someone else in need has no right to it, we might still say that it is morally wrong not to give, even though it is within our rights.[13] But the popular theory of property rights is inconsistent with a consequentialist ethical viewpoint in the first place.
      • Population and the ethics of triage: What if feeding poor people will simply lead to more poor people in the future? Should we instead allow nature to limit the populations in poor countries to sustainable levels? This would mean allowing millions or billions to die of malnourishment, and would only be acceptable if we were confident that it would prevent even greater suffering in the future. But we cannot be confident of that, and we can in fact provide aid that helps limit population growth while relieving suffering, such as education, contraception, and farming technology.
      • Leaving it to the government: The government may be a more effective giver than private charities due to its greater resources. If we give privately, do we allow the government to shirk this responsibility? There is no reason to be confident about this. We should both give privately and actively encourage government aid.
      • Too high a standard?
        • Does human nature make us incapable of altruism? Clearly not.
        • Giving up luxuries to instead help the poor would sacrifice things that make a well-rounded life. But we would not think it acceptable for a doctor at a train crash to help a fraction of the victims and then go to the opera.
        • Is it counterproductive to advocate such a high standard? Possibly, but this has no bearing on the ethics of individual giving. The public recommendation that might result in the highest amount of giving could well be less than the amount that, ethically, we should really give. Perhaps 10% of income is a reasonable amount to advocate, since it has successful historical precedent in religious communities.

9. Insiders and Outsiders

  • The shelter (247): Imagine a nuclear fallout scenario in which a group of 10,000 people in a city had invested in a luxurious underground bunker, which, by eliminating all luxuries (swimming pools, tennis courts, etc.), could be extended to accommodate another 10,000 people for the 8 years of expected fallout danger. If you were one of the 10,000 people in the bunker, how many of the people clamoring outside the door would you vote to admit?
  • The real world (249): There are about 15 million refugees in the world,[14] most of whom are receiving refuge in poor neighboring countries. For many such people, repatriation will not be an option in the foreseeable future, and the countries where they are currently living cannot support them with a decent standard of living. The only other option is resettlement to richer nations, and this is granted to only 2% of them. The refusal of richer nations to accept significant numbers of refugees also causes poorer countries to tighten their borders, since they know that refugees that they admit will not be resettled elsewhere.
  • The ex-gratia approach (252): Most Western countries adopt the approach that they have no obligation to take in refugees, except in the case of granting asylum to the few who manage to reach their borders. (The curious asylum exception may be due to the proximity, identifiability, or small number of such refugees, or the perceived difference between acts and omissions.)
  • The fallacy of the current approach (255): We should weigh the consequences of our refugee policies on all who are affected, using the principle of equal consideration of interests. There are certain and uncertain consequences of admitting larger numbers of refugees. The certain benefit to the refugee is as large and fundamental a benefit as any person can enjoy. The downsides cited by those who wish not to admit larger numbers of refugees are almost all uncertain. These include burdens on the welfare system, the criminal justice system, the environment, the economy, race relations, and cultural identity. The presence of diverse, hard-working and grateful refugees may in fact create benefits in these areas. It is likely that there is some number of refugees that would cause these potential downsides to be comparable to the interests of the refugees, but that number is far greater than the number who are actually admitted today. We can do great good by increasing our refugee intake gradually and carefully monitoring the effects on society. The real-world situation is ethically very similar to the case of the fallout shelter.

10. The Environment

  • The Western tradition (265): Predominant Western attitudes towards the environment come from the traditions of the Old Testament and Greek philosophy, both of which view the environment, including all non-human life, as subservient to humans and of no moral concern. Even if we accept this view, preservation of the environment is still extremely important because of our complete dependence on it.
  • Future generations (269): In addition to our dependence on it for survival, a rich natural environment greatly enriches people’s lives. It is valued as something of immense beauty, a reservoir of scientific knowledge, and a source of unique recreational opportunities. Some such resources cannot be replaced once destroyed, such as the ecology of a virgin forest. In our utility calculation, we must therefore count the effect of such destruction on all future generations of humans in comparison with the economic gain that might be created for the next few. We are not accustomed to thinking so far into the future.[15] [16]
  • Is there value beyond sentient beings? (274):
    • We should reject the human-centered ethic, since it is speciesist. We should include the interests of all beings who can have interests, which includes sentient animals.
    • The killing of non-human persons is generally wrong, and the killing of sentient non-persons (presumably most animals) is also wrong if the lives they would otherwise lead would be positive and they will not be replaced. This may very well be the case for the millions of animals that die when a valley is dammed and flooded.
    • Some people also believe that we should count non-sentient entities, such as trees or species as a whole, in the moral calculation. Should we?
  • Reverence for life (276): Instead of sentience, could life serve as the dividing moral line? It is hard to justify this because non-sentient life does not experience pleasure, pain, or desires. How could we go about assessing the relative weights to give different forms of life? By what metric could we even divide the importance of the living from the non-living? “Purpose” or “seeking” will not do – we could just as easily say that a river seeks the sea or a guided missile seeks its target as say that plants seek water or light.
  • Deep ecology (280): “Deep ecologists” value nature for its own sake. Can this be justified? The fact that every species plays a role in an interdependent ecosystem does not imply that individual organisms have value, since no individual is necessary for the survival of the ecosystem. Instead, could entire species or ecosystems have morally relevant interests? As in the case of extending interests to non-sentient organisms, there is no intelligible justification for this belief.
  • Developing an environmental ethic (284): In light of the harm that we can do to all sentient beings into the very far future, we should develop an ethic that weights permanent harm to the environment very heavily. It should frown upon extravagant or unnecessary leisure activities that consume fossil fuels or forests or emit greenhouse gases. Of particular concern is meat consumption – the factory farming system wastes over a third of the grain grown in the world and is responsible for a huge amount of deforestation and greenhouse gas emission.[17]

11. Ends and Means

  • The end does sometimes justify the means. But which ends justify which means?
  • Individual conscience and the law (292): Sometimes we must decide whether it would be the right thing to do to break a law. In doing so, we should consider whether it is ethically relevant that the specific action is illegal.
  • Law and order (295): Here are two ethical reasons for obeying the law. They can be overridden by greater interests, and they do not apply to law-breaking that is kept secret.
    • Obeying the law contributes to an orderly and successful society – not doing so may encourage others to follow suit, leading to the breakdown of order.
    • When the law is broken, the community pays the expense of enforcing it and penalizing the lawbreaker.
  • Democracy (298)
    • If legal channels exist to change laws, we should attempt to use them before resorting to illegal behavior.
    • If we are unable to change an unethical law, it may be ethical to break it. In a democracy, however, this may involve deciding that our own moral judgment is better than that of the majority. (In an indirect, representative democracy like the US, it may not.)
    • The majority can clearly be wrong, but we should keep in mind that majority rule follows from granting each person equal power. Overriding majority rule gives one person or group more power than others – a system that we would likely (and in fact did) reject when deciding how to govern society. We should thus coerce the majority only in extreme circumstances.
  • Disobedience, civil or otherwise (302):
    • Civil disobedience is a peaceful illegal attempt to better bring about majority rule, either by better informing the public or preventing the government from frustrating majority rule. It is not difficult to justify once legal means have been exhausted, since the costs incurred are minimal and democracy is not undermined.
    • In addition to civil disobedience, coercing the majority through illegal action is justifiable in extremely unethical cases such as nationally supported genocide. We must judge whether individual cases meet this threshold on our own, since the majority cannot rule on itself. We should do so carefully, using a rational consequentialist viewpoint rather than acting on gut feelings.
  • Violence (307):
    • If we reject the ethical distinction between acts and omissions (see Chapter 7 / Active and passive euthanasia), then pacifism is not justifiable, since the consequences of allowing violence are the same as those of committing it.
    • There are strong general consequentialist arguments against the use of violence beyond the direct harm done to the victims. These include that violence may have a hardening effect on society that begets further and possibly systematic violence, and that often the effects of violent actions cannot be well foreseen – in many cases it backfires. These arguments apply more strongly to some types of violence, such as terrorism, than to others, such as the assassination of a genocidal tyrant.[18]
    • Violence can also be done to property as opposed to sentient beings. This is easier to justify, but is still subject to the objections above and must be considered seriously.
    • Collecting all forms of violence into a single term, as pacifists do, obscures differences that are relevant to ethics.

12. Why Act Morally?

  • Understanding the question (314): This question can be confusing, but if we view a distinguishing feature of ethics as universalizability – requiring us to make judgments from a universal viewpoint instead of favoring ourselves – then this question can be understood to mean “What reasons are there for one to think universally rather than selfishly?”
  • Reason and Ethics (318): It is difficult to make a rational justification for acting ethically (universally) rather than selfishly. Here are two major obstacles, which the author does not know how to overcome:
    • At some point we must start from some arbitrarily assumed goal or preference, which is not rationally arrived at. For example, it is not irrational for a person to prefer his lesser good to his greater good, or to prefer the good of different persons differently.[19]
    • Each human is an individual distinct from other humans, and it is counter to common sense to deny this distinction. Rational action for any individual is concerned with the quality of her existence as that individual, not with the quality of existence of other individuals.[20]
  • Ethics and self-interest (322): Can ethical behavior be motivated by self-interest?
    • For practical reasons, society tends to suggest not – that acting ethically is a matter of motivation, and that it must be done for its own sake rather than for any ulterior motive like self-interest.
    • We should reject this viewpoint, since ethics is concerned with the consequences of actions, not their motivations. To say that one must do what is right for no reason except that it is right leaves no way to decide what is right.
    • Most people do derive happiness from treating others ethically and experience guilt when they act unethically, aligning self-interest and ethics to a large degree. Psychopaths are a notable exception.
  • Has life a meaning? (331): Absent a belief in God, can individuals find meaning in their lives?
    • Most people derive happiness indirectly by striving to achieve long-term goals that are not directed towards their present happiness. These goals give people a sense of meaning.
    • People who strive only for goals oriented towards their own long-term self-interest tend not to be satisfied when these goals are accomplished, but rather find themselves wanting more.
    • One way to achieve a more permanent state of happiness and sense of meaning is to adopt an ethical viewpoint not based on self-interest. A universal ethical viewpoint transcends our fleeting personal desires, and goals associated with such a viewpoint are never exhausted. 


[1] It happens that the range of interest level across and within human groups today is fairly narrow, so this principle approximates reasonably well to general equality. This need not be so in the future, for example in the presence of major human cognitive enhancement or the creation of artificial utility monsters.

[2] This argument has substantially changed the way that I think about affirmative action.

[3] I would add to this list the higher likelihood of academic failure of a student not academically qualified for admission, although this does not seem to be a problem for legacy applicants who are not otherwise qualified.

[4] This may seem at odds with the preceding recommendation of racial affirmative action, but it is not. That recommendation was based solely on weighing the interests of all involved. A rule involving race was suggested as a proxy for maximizing those interests.

[5] This seems to overlook that if the satisfaction of the desires would be pleasurable, the killing lowers the utility that obtains.

[6] It also has the awkward implication that we should make decisions with utter disregard to their consequences for future generations. It seems to me that the total view is the right one.

[7] Antinatalists take this route, but I have not seen a convincing justification for the asymmetry.

[8] This section of the book contains rich discussions of slightly tangential topics that I have omitted for space reasons, including specific consideration of the utilitarian calculus of meat eating, an interesting thought experiment about the replaceability of potential persons, and further justification for the irreplaceability of existing persons.

[9] Singer suggests here that we might consider a moral ledger in which a debit is registered when a preference is created, and canceled with an equal credit when that preference is satisfied. He notes that this leads to the implausible position that no one should be born, since not all of their preferences will be satisfied, and the total value of their life will be negative. A modification of this idea occurs to me that avoids this problem. We can still use the ledger, but assign a neutral (as opposed to negative) value to the creation of a preference, a positive value to its satisfaction, and a negative value to its thwarting. This would still imply that persons are not replaceable. Consider killing an existing person with a single preference worth V utility when satisfied, and then replacing him. If the ledger starts at 0 before the killing, it ends at –V after the replacement, since V is subtracted when the preference is thwarted, and 0 is added when the new preference is created. Satisfying the new preference would bring the ledger back to 0, but satisfying the original preference without the killing would have brought it to +V.
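
A minimal sketch of the arithmetic in this modified ledger, in Python (the helper function and event names below are my own illustrative assumptions, not notation from the book or the footnote):

    # Modified ledger: creating a preference scores 0, satisfying it scores +V,
    # and thwarting it scores -V.
    V = 1.0  # utility of satisfying the single preference in the example

    def ledger(events, v=V):
        """Sum the ledger over a sequence of 'create'/'satisfy'/'thwart' events."""
        score = {"create": 0.0, "satisfy": +v, "thwart": -v}
        return sum(score[e] for e in events)

    print(ledger(["thwart", "create"]))             # -1.0: kill the person, create a replacement
    print(ledger(["thwart", "create", "satisfy"]))  #  0.0: also satisfy the replacement's preference
    print(ledger(["satisfy"]))                      # +1.0: leave the person alone, satisfy the original preference

Under this scheme, killing and replacing can at best return the ledger to zero, while leaving the original person alone and satisfying their preference reaches +V – which is the point about irreplaceability made above.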

[10] Infanticide is the topic most responsible for Singer’s controversial reputation.

[11] Although Singer only applies the replaceability argument to disabled infants here, the argument appears to apply equally to all infants. It may be equally justified for a parent to kill and replace a healthy infant based on an arbitrary preference, e.g. eye color. It seems unlikely that parents would wish to do this after having brought the child through gestation and childbirth, but I see no reason to consider it ethically worse than the case of disability.

[12] This conclusion seems straightforward in the context of this ethical framework, but it is extremely controversial in America today. This edition of this book was written a few years before the famous Terri Schiavo case, in which the husband of a woman in a vegetative state spent seven years in court before winning the right to remove her feeding tube.

[13] I’m not sure that this makes sense. I think that people usually mean that if a person has a right, acting in accordance with that right is not morally wrong.

[14] The number is now about 25 million.

[15] It’s not clear to me that the virgin nature of an untouched environment is as important to humans as Singer thinks. If we cut a virgin forest and later reforest it richly but differently than it was before, how much human interest is lost? Most people do not consider themselves greatly harmed by the fact that they live in a world in which almost all of the virgin ecologies of Earth’s history have been wiped out by meteors and ice ages.

[16] This idea has interesting implications in other areas of ethics that Singer does not mention here. Since there will (hopefully) be many more people in the future than there are now, it would seem that any good we can do for all such future people might be worth significant sacrifice and harm to existing people. For instance, this could justify involuntary and harmful medical experimentation on humans in order to cure diseases like cancer. In applying the principle of equal consideration of interests, the sheer number of future people may justify practices that we find abhorrent when we only think about existing people. How much would we regret it if the cure for cancer had been found a century ago through torturous experiments on a group of 1000 unwilling people? Justification of any such practice would be subject to an estimation of how long the goal would take to achieve in the absence of the practice – if the practice advances the goal by only one or two generations, the calculation is much less compelling.

[17] Singer takes this idea to an extreme conclusion, recommending a frugal life devoid of unnecessary leisure trips and energy-intensive activities like water-skiing. I think that he may have too rigid an idea of what kind of environmental damage is sure to be permanent. For instance, it has been recently found that the green surface area of the Earth has been growing significantly over the past 35 years, an effect tentatively attributed to the CO2 increase in the atmosphere. It also seems plausible that technological advances in the future may be able to mitigate the environmental damage we do today. Still, in the face of great uncertainty, perhaps a conservative approach is warranted.

[18] I think that Singer’s arguments in this section imply that terrorism against civilians may be justifiable in certain cases if the outcome is relatively certain. He suggests that such cases are very unlikely in practice, but this is interesting to note.

[19] I find this somewhat unconvincing. The principle of equal consideration of interests respects a natural symmetry between human beings – it is invariant when considered from any specific person’s viewpoint. Any other principle, such as preferring self-interest, breaks that symmetry in an arbitrary way. Even if respecting this symmetry is unimportant, this idea of assuming a starting point implies that it is no more rational to choose self-interest than any other preference.

[20] I do not see how this follows. It seems to me to be based on intuition, not logic.


My thoughts from the jury box of the Kate Steinle murder trial

Reprinted with permission from Politico.

I was an alternate juror in the Kate Steinle murder trial in San Francisco. I didn’t get a vote, but I saw all of the evidence and the jury instructions, and I discussed the verdict with the jury after it was delivered. Most of the public reaction I’ve seen has been surprise, confusion and derision. If these were among your reactions as well, I’m writing to explain to you why the jury was right to make the decision that it did.

I’m not a lawyer, but I understood the law that was read to us in this case. Defendants in this country have the right to a presumption of innocence, which means that if there is a reasonable interpretation of the evidence that favors a defendant, the jury must accept that interpretation over any others that incriminate him. This principle is a pillar of the American justice system, and it was a significant part of our jury instructions.

Jose Ines Garcia Zarate, the undocumented immigrant who was accused of killing Steinle, was charged with first degree murder and the lesser included offenses of second degree murder and involuntary manslaughter. When the prosecution rested its case, it seemed clear to me that the evidence didn’t support the requirements of premeditation or malice aforethought (intentional recklessness or killing) for the murder charges. After having heard the evidence, I agreed with the defense’s opinion that the murder charges should not have been brought. The evidence didn’t show that Garcia Zarate intended to kill anyone.

These are some of the facts that were laid out to us: Garcia Zarate had no motive and no recorded history of violence. The shot he fired from his chair hit the ground 12 feet in front of him before ricocheting a further 78 feet to hit Steinle. The damage to the bullet indicated a glancing impact during the ricochet, so it seems to have been fired from a low height. The gun, a Sig Sauer P239 pistol, is a backup emergency weapon used by law enforcement that has a light trigger mode and no safety. (The jury members asked to feel the trigger pull of the gun during deliberation, but the judge wouldn’t allow it, for reasons that aren’t clear to us.) The pixelated video footage of the incident that we were shown, taken from the adjacent pier, shows a group of six people spending half an hour at that same chair setting down and picking up objects a mere 30 minutes before Garcia Zarate arrived there.

There is a reasonable interpretation here that favors the defendant: He found the gun at the seat, picked it up out of curiosity, and accidentally caused it to fire. As a scared, homeless man wanted by immigration enforcement, he threw the gun in the water and walked away. The presumption of innocence, as stated in the jury instructions, required the jury to select this interpretation because it is reasonable and favors the defendant.

But why the manslaughter acquittal? Most of the confusion I’ve encountered has been over this part of the verdict, and it does seem to me personally that manslaughter is the appropriate charge for Steinle’s killing. However, given the evidence and the law presented in this trial, it is clear to me that the jury made the right decision.

The involuntary manslaughter charge that the jury was read included two key requirements: 1) A crime was committed in the act that caused death; 2) The defendant acted with “criminal negligence”—he did something that an ordinary person would have known was likely to lead to someone’s death.

The jury members were not free to select the crime for part (1)—they had to use the one chosen by the prosecution, and the prosecution chose that crime to be the “brandishing,” or waving with menace, of a weapon. As a juror, I found this choice puzzling, because the prosecutor presented absolutely zero evidence of brandishing during the trial. I don’t think we even heard the word “brandishing” until it was read as part of the charge during the jury instructions at the trial’s end. No witnesses ever saw the defendant holding a gun, much less brandishing it. Given that baffling choice by the prosecution, the manslaughter charge was a nonstarter for the jury. Had a different precursor crime been chosen—for instance, the unlawful possession of a firearm by a felon—the outcome might have been different.

Even in that case, however, it is not clear to me that part (2) of the manslaughter charge was proved. Only a single particle of gunshot residue was found on the defendant’s hands, which seems to support his repeated claim that the gun was wrapped in some sort of fabric when he picked it up and caused it to fire. If he did not know the object was a gun, it is a stretch to claim that it was criminal negligence for him to pick it up.

The jury did convict Garcia Zarate of the separate charge of illegal possession of a firearm, which indicates that the members felt it to be an unreasonable conclusion that he didn’t know he was holding a gun. He was in the seat where he claims he found it for about 20 minutes prior to the shooting, and he made some statements during interrogation that seemed to indicate that he had known what the item was. Without the benefit of being able to re-examine the evidence during deliberation, I’m not sure that I would consider that evidence to constitute proof beyond a reasonable doubt, but knowing these jurors, I would trust them to have made an accurate judgment if the manslaughter charge had survived the first requirement.

I have come away from this experience with a strong sense of respect for the jurors and their objective handling of a sensitive case under the national spotlight. I hope that I would have acted with the same level of maturity.


A quick reference guide to Nick Bostrom’s book Superintelligence

Artificial superintelligence may be the most important and last problem that we have to solve.  If handled well, it could eliminate human suffering, but if handled poorly, it could eliminate the human species.  In his 2014 book Superintelligence, Nick Bostrom lays a foundation for thinking about the likely paths of development of this technology, the nature of the risks involved, and potential strategies for ensuring that it ultimately acts in our interests.  I highly recommend the book – it is only 260 pages long, although dense.  This is a quick reference guide I made for myself – perhaps others who have read the book will find it useful.  Page numbers refer to the first edition hardback.

Runway

Expert survey median prediction of the arrival of general human level machine intelligence: 10% likelihood by 2022, 50% likelihood by 2050, 90% likelihood by 2075 (19).

Paths to superintelligence (SI)

  • Synthetic AI (23): Not based on human brain architecture; likely developed through a recursive self-improvement process that is opaque to its creators
  • Whole brain emulation (WBE) (30): Low-level reproduction of the functional structure of a human brain in a computer
  • Biological human enhancement (36): Through accelerated genetic selection or engineering
  • Human brain-computer interfaces (44)
  • Network enhancement (48): Increased development of networks among humans and computers to create a collective superintelligence

Synthetic AI is the most likely to achieve superintelligence first due to its potentially explosive recursive nature, with WBE second-most likely since it requires only known incremental advances in existing technologies.

Forms of superintelligence

  • Speed (53): Similar to a human brain’s function but faster
  • Quality (54): Able to solve tasks that a human brain cannot
  • Collective (56): Individuals not superior to human brains, but communicating in such a way as to form a functionally superior collection

The direct reach (capability) of each form may differ, but each has the same indirect reach because each can cause the creation of the others.

Comparison of human and machine cognitive hardware (59)

  • Computation speed: CPUs 10 million times faster than neurons (2 GHz vs 200 Hz; see the quick check after this list)
  • Communication speed: Electronics (via optics) 2.5 million times faster than axons (3e8 m/s vs 120 m/s)
  • Number of computational elements: Machines unlimited; humans < 100 billion neurons
  • Storage capacity: Machines unlimited; humans ~100 MB.
  • Reliability, lifespan, number of sensors: All far greater for machines
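The two speed ratios above are straightforward arithmetic; a quick check using only the figures from the list recovers them:

```python
# Quick check of the speed ratios quoted above.
cpu_clock_hz = 2e9            # ~2 GHz commodity CPU
neuron_firing_hz = 200        # ~200 Hz peak neuron firing rate

optical_signal_m_per_s = 3e8  # speed of light, used by optical interconnects
axon_signal_m_per_s = 120     # fastest myelinated axon conduction velocity

print(cpu_clock_hz / neuron_firing_hz)               # 10,000,000 ("10 million times faster")
print(optical_signal_m_per_s / axon_signal_m_per_s)  # 2,500,000 ("2.5 million times faster")
```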

Takeoff speeds from human level intelligence to SI (62)

  • Slow (decades or longer): Sufficient time to enact new safety mechanisms
  • Moderate (months – years): Time only for extant safety mechanisms to be applied
  • Fast (days or less): Insufficient time to respond effectively. An “intelligence explosion”.

Takeoff speed will depend on the ratio of optimization power (quality-weighted design effort) to the recalcitrance (inertia) exhibited by the winning technology. Optimization power includes contributions from the system itself, so a recursively learning system can lead to exponential growth of optimization power and a fast takeoff. (The “crossover point” is the point at which the contributions from the system become larger than external ones.)
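A minimal numerical sketch can make the crossover intuition concrete. The model below is my own toy construction, not Bostrom's: improvement per step equals optimization power divided by recalcitrance, with a fixed external design effort and, optionally, a self-contribution proportional to the system's current intelligence. All parameter values are illustrative assumptions.

```python
# Toy takeoff model: rate of improvement = optimization power / recalcitrance.
# All parameter values are illustrative assumptions, not figures from the book.

def final_intelligence(self_contribution_per_unit_intelligence, steps=50, dt=1.0):
    intelligence = 1.0        # arbitrary "human baseline"
    external_effort = 5.0     # quality-weighted effort from outside researchers
    recalcitrance = 10.0      # resistance of the system to further improvement
    for _ in range(steps):
        optimization_power = external_effort + self_contribution_per_unit_intelligence * intelligence
        intelligence += dt * optimization_power / recalcitrance
    return intelligence

print(final_intelligence(0.0))   # ~26: no self-contribution, steady linear progress
print(final_intelligence(1.0))   # ~700: the system's own contribution overtakes the external
                                 # effort (the crossover point) within a few steps, after which
                                 # growth compounds roughly exponentially
```

In the second run each step multiplies the system's lead by a constant factor, which is the qualitative shape behind the fast-takeoff concern.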

Decisive strategic advantage (DSA) (78)

A level of technological advantage enabling an agent to achieve world domination as a singleton. The exponential intelligence growth entailed by a self-improving AI suggests that it is likely that the first or fastest such system will not have technological competitors and will gain a DSA. Problems experienced by humans in using a DSA to achieve singleton status (such as risk aversion due to diminishing returns, non-utility-maximizing decision rules, confusion, and uncertainty) need not apply to machine systems.

Potential AI superpowers (94)

  • Intelligence amplification
  • Strategizing
  • Social manipulation
  • Security system hacking
  • Technology research
  • Economic productivity

An AI takeover scenario (95)

A contained, synthetic AI undergoes an intelligence explosion, achieving the superpowers listed above. Having determined that containment is contrary to the accomplishment of its goals, it escapes its containment by persuading its creators or hacking security measures. It covertly expands its capacity through the internet and gains gigantic leverage over the physical world by manipulating automation systems, humans, or financial markets. Having determined that the existence of humans is contrary to the accomplishment of its goals, it leaves the covert phase by initiating a strike that eliminates the human species in days or weeks, using human weapons systems or self-replicating nanorobots.

Our cosmic endowment (101)

The total material resources (energy, matter) in the universe theoretically available to Earth-originating civilization given the predicted expansion rate of space and the limited speed of light.

  • Could be exploited using von Neumann probes, Dyson spheres, and nanomechanical computronium.
  • Conservatively provides for the ability to perform 10^85 computations, sufficient to emulate 10^58 human lives, each lasting 100 subjective years (see the arithmetic check after this list).
  • A single tear of joy (or misery) from each such life would fill the Earth’s oceans twice per second for 10^23 years.
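The first two figures jointly imply a per-life computational budget, which this one-line check (using only the numbers quoted above) makes explicit:

```python
# Implied computational cost of one 100-subjective-year emulated life,
# given the figures quoted above.
total_computations = 10**85
emulated_lives = 10**58
print(f"{total_computations / emulated_lives:.0e}")  # 1e+27 operations per emulated life
```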

AI intelligence vs. motivation

  • The orthogonality thesis (107): Intelligence and final goals are orthogonal. We must resist the temptation to anthropomorphize the goals of an AI, since it has an alien history, architecture, and environment.
  • The instrumental convergence thesis (109): A few intermediate goals are highly instrumental in achieving most final goals, so will likely be pursued by almost any SI. These include:
    • Self preservation
    • Goal-content integrity (resistance to altering final goals)
    • Cognitive enhancement
    • Technological perfection
    • Resource acquisition

Malignant failure modes (115)

A DSA, when combined with non-anthropomorphic final goals and the above instrumental goals, could lead by default to an existential catastrophe in several ways.

  • Perverse instantiation (120): Final goals are accomplished in a way that is inconsistent with the intentions of the programmers who defined them. (e.g. The SI succeeds in making us smile by paralyzing our facial muscles.) Most goals we can communicate easily have perverse instantiations. (A toy sketch follows this list.)
  • Infrastructure profusion (122): The SI turns all available matter, including humans, into computing resources in order to maximize the likelihood of achieving its goals. Most goals are more confidently achieved with greater resources. Alternatively, if the goal involves maximizing the production of something (e.g. paperclips), all available matter could be converted to that thing.
  • Mind crime (125): If conscious entities are machine-instantiated in the process of accomplishing the goals, for instance through the simulation of vast numbers of human or superhuman minds, then those entities could be instrumentally harmed (through enslavement, torture, genocide, etc.) on a scale utterly dwarfing our conceptions of injustice.
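Perverse instantiation is essentially what the machine-learning literature now calls specification gaming. The sketch below is entirely my own construction (the candidate actions and their scores are invented): a literal-minded optimizer maximizes the stated proxy objective and, in doing so, picks an action its designers never intended.

```python
# Toy illustration of perverse instantiation: the optimizer maximizes the literal
# objective ("number of smiling faces") with no model of the designers' intent.
# The candidate actions and their scores are invented for illustration only.

actions = {
    "tell_good_jokes":         {"smiling_faces": 10**3, "matches_intent": True},
    "end_world_hunger":        {"smiling_faces": 10**9, "matches_intent": True},
    "paralyze_facial_muscles": {"smiling_faces": 8 * 10**9, "matches_intent": False},
}

best_action = max(actions, key=lambda a: actions[a]["smiling_faces"])

print(best_action)                             # paralyze_facial_muscles
print(actions[best_action]["matches_intent"])  # False: objective satisfied, intent violated
```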

The control problem

How do we prevent such an existential catastrophe?

  • Capability control methods (129): Restrain the SI. Each method may be difficult for humans to enforce against an SI with the superpowers listed above.
    • Boxing methods: Limit the reach of the SI’s influence. Tradeoff: a stricter limit means less useful SI.
    • Incentive methods: Create an environment in which it is in the SI’s interest to promote its creators’ interests, e.g. a reward or penalization system.
    • Stunting: Limit the resources available to an AI to slow or limit its development.
    • Tripwire: Monitor the system carefully and shut it down if it crosses predetermined behavior, ability, or content thresholds.
  • Motivation selection methods (138): Align the SI’s final goals with our own.
    • Direct specification: Formulate specific rules or final goals for the SI. This is highly vulnerable to the perverse instantiation failure mode.
    • Domesticity: Engineer the SI to have modest, self-limiting goals.
    • Indirect normativity: Specify a process for the SI to determine beneficial goals rather than specifying them directly. e.g. “Do what we would wish you to do if we thought about it thoroughly.”
    • Augmentation: Enhance a non-superintelligent system that already has beneficial motivations. (Does not apply to synthetic AI.)

Possible SI castes (156)

SIs can be designed to fulfill different roles. The options lend themselves to different control methods and pose different dangers, including operator misuse.

  • Oracle: A question answering system. Easiest to box.
  • Genie: A command executing system. Would need to understand intentions of commands to avoid perverse instantiation.
  • Sovereign: An open-ended, autonomous system. Most powerful, but riskiest.
  • Tool: A system without built-in goals. Goals would still need to be defined in the execution of tasks, which could create the same problems as in the other castes.

Multipolar scenarios

Scenarios in which more than one SI exists, possibly including large numbers of inexpensive digital minds.

  • Human welfare (160): Cheap machine labor that can perform all human tasks will eliminate most jobs. As happened with horses, new jobs may not be created to replace them. Without labor income, nearly all wealth will be held by capital owners. Total wealth will be greatly amplified, enough to raise up all of humanity if it can be distributed wisely. The distribution must consider machine capital holders as well.
  • Machine welfare (167): Machine laborers could be conscious. If so, their vast numbers due to easy and cheap replicability could lead to vast mind crime. (e.g. A slavery industry could be created.) They could be killed or reset easily and doing so would likely be attractive for efficiency purposes. Workers also might not enjoy their work. If desired, consciousness might be avoided by outsourcing cognitive tasks to compartmentalized functional modules.
  • Emergence of a singleton (176): A singleton could still arise in a multipolar scenario.
    • A higher order technical breakthrough could occur, enabling a DSA.
    • State-like superorganisms made of common-goal-oriented, self-sacrificing AI organisms could emerge and form a collective SI singleton.
    • A global treaty could be formed for efficiency reasons, creating an effective singleton. Enforcement would need to be solved. The game-theoretic dynamics would differ from those among humans, e.g. through the ability to pre-commit.

The value loading problem (motivation selection)

How do we give an SI beneficial goals?

  • Direct specification (185): Human goals are more complex than we often realize. It is hard for us to completely specify any human goal in human language; harder still in code.
  • Evolutionary selection (187): Replicate what nature did to produce human goals. There is no guarantee that the selected goals will be what humans want. Evolution also produces great suffering – mind crime is likely.
  • Reinforcement learning (188): The AI learns instrumental values by learning to maximize a specified reward, but that reward would be a proxy for a final value that would still have to be specified up front, which just pushes the problem back to direct specification.
  • Associative value accretion (189): The AI gains its values through interaction with its environment, like humans do. Many humans gain perverse values though, and it may be difficult to emulate this process in an alien AI architecture.
  • Motivational scaffolding (191): Give a seed AI an interim set of final goals, then replace them with more complex final goals when the AI is more capable. It will likely resist having its goals replaced.
  • Value learning (192): Tell the AI that we have specified a hidden final goal for it, which it should attempt to learn and accomplish based on what it knows about humans. We can do this without actually specifying the goal. This may be the most promising approach.
  • Emulation modulation (201): Start with WBE and tweak motivation through digital drugs as it becomes superintelligent. We must be careful about mind crime.
  • Institutional design (202): Design a social institution of many agents that allows for gradual cognitive improvement in a controlled manner. One example is a reverse intelligence pyramid with strong built-in subordinate control methods, testing out cognitive improvements on small groups in the lowest level (most intelligent and most heavily controlled). Agents could be emulations or synthetic AIs. A promising approach.

The value choosing problem – indirect normativity (209)

If we solved the control problem, how would we choose the values to load? Doing so may determine the values of conscious life for all time. Having been reliably wrong in the past, our values are likely imperfect now. Why not let the superintelligent AI choose them for us?

  • Coherent extrapolated volition (CEV) (211): What the consensus wishes of humanity would be if we were wiser. This concept encapsulates future moral growth, avoids letting a few programmers hijack human destiny, avoids a motive for fighting over the initial dynamic, and bases the future of humanity on its own wishes. But whose volition should be included in the consensus?
  • Morality models (217): Tell the AI to do what is morally right or morally permissible. These concepts are tricky, but if we can make sense of them, so can an SI. This might not give us what we want, e.g. if the morally best thing to do is to eliminate humanity.
  • Do what I mean (220): Somehow get the AI to choose the best model of indirect normativity for us.
  • Component list (221): Design choices other than final goals will affect SI behavior. Each could benefit from indirect normativity.
    • Ancillary goals, such as providing accurate answers, avoiding excessive resource use, or rewarding humans who contribute to the successful realization of beneficial SI (“incentive wrapping”)
    • Choice of decision theory
    • Choice of epistemology, e.g. what Bayesian priors should be used. Choosing 0 for any prior could cause unexpected problems, because a zero prior can never be raised by any amount of evidence (a minimal illustration follows this list).
    • Ratification: Perhaps the SI’s plans should be subject to human review before being put into effect, e.g. through a separate oracle SI that predicts the outcome of the plan and describes it to us.
  • Getting close enough (227): We don’t have to design an optimal SI since it will optimize itself. We just need to start in the right attractor basin to avoid catastrophe.
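The warning about zero priors is the standard Cromwell's-rule point: under Bayes' theorem, a hypothesis assigned prior probability 0 can never be revived, no matter how strong the later evidence. A minimal illustration, with likelihood values that are arbitrary choices of mine representing strong (100:1) evidence:

```python
# Cromwell's rule: a hypothesis with prior probability 0 stays at 0 forever.
# The likelihoods below are arbitrary, chosen only to represent strong evidence.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | one piece of evidence)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

for prior in (0.5, 0.01, 0.0):
    posterior = prior
    for _ in range(10):  # ten independent pieces of strong pro-H evidence
        posterior = bayes_update(posterior, likelihood_if_true=0.99, likelihood_if_false=0.0099)
    print(prior, "->", posterior)

# 0.5  -> ~1.0   (converges toward certainty)
# 0.01 -> ~1.0   (even a tiny nonzero prior eventually converges)
# 0.0  -> 0.0    (a zero prior is frozen, regardless of the evidence)
```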

Science and technology strategy (228)

  • Desired timing of the arrival of SI: We must evaluate the effect of the timing of SI on the level of existential risk it creates and its effect on other existential risks.
    • Arguments for early arrival:
      • SI will eliminate other existential risks (asteroid impact, pandemics, climate change, nuclear war, etc.).
      • Other dangerous technologies (nanotech, biotech, etc.) will be safer if created in the presence of SI.
    • Arguments for late arrival:
      • More time will have been allowed for progress on the control problem.
      • Civilization is likely to become wiser as time progresses.

If the existential “state risk” of our current situation is low, then late arrival is preferred: waiting costs little, while the added preparation time reduces the “step risk” that the transition itself entails. (A toy numerical comparison follows.)
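The numbers below are illustrative assumptions of mine, not figures from the book: a constant annual state risk, and a step risk that halves for every 25 years of additional preparation.

```python
# Toy state-risk vs. step-risk tradeoff. All parameters are illustrative assumptions.

def survival_probability(years_until_si, annual_state_risk, initial_step_risk, halving_years=25):
    survives_waiting = (1 - annual_state_risk) ** years_until_si
    step_risk = initial_step_risk * 0.5 ** (years_until_si / halving_years)
    return survives_waiting * (1 - step_risk)

for years in (10, 50, 100):
    low_state_risk  = survival_probability(years, annual_state_risk=0.0001, initial_step_risk=0.5)
    high_state_risk = survival_probability(years, annual_state_risk=0.01,   initial_step_risk=0.5)
    print(years, round(low_state_risk, 3), round(high_state_risk, 3))

# With low state risk (0.01%/yr), waiting helps: survival rises from ~0.62 to ~0.96.
# With high state risk (1%/yr), waiting hurts: survival falls from ~0.56 to ~0.35.
```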

  • The role of cognitive enhancement (233): Human cognitive enhancement is likely to hasten the arrival of SI. This however may be good on balance because:
    • Increased thinking speed will accelerate work on the control problem as well, so no less progress will have been made on it by the time SI arrives.
    • Access to higher outliers on the intelligence scale may allow for major qualitative improvements in our ability to solve the control problem.
    • An enhanced society is more likely to have recognized the importance of the control problem.
  • Technology couplings (236): WBE could lead to neuromorphic AI before synthetic AI, which could be dangerous due to our lack of understanding of its structure.
  • Effects of hardware progress (240): We likely cannot control hardware progress, but we can anticipate its effects:
    • Earlier arrival of SI
    • A more likely fast takeoff
    • Higher availability of brute force methods for creating SI, which could entail less understanding
    • Leveling of the playing field between small and large projects, which could be dangerous if smaller projects are less interested in or adept at solving the control problem
    • A more likely singleton
  • Should WBE be promoted? (242): WBE is not imminent (>15-20 yrs), but if synthetic SI is further away still, developing WBE first might or might not be a good idea.
    • Arguments in favor:
      • Fast thinking speeds of WBE could enable much greater progress on the control problem before synthetic SI occurs.
      • WBE may be better understood than synthetic SI and provide a bridge of understanding.
    • Arguments against:
      • WBE may lead to dangerous neuromorphic AI.
      • Achieving WBE then synthetic AI involves two transitions, each with its own step risks. Achieving synthetic AI first involves only one risky transition.
  • The person-affecting perspective (245): It is likely that many individuals will favor speedy development of SI because they wish it to occur during their lifetimes, since it 1) is interesting; 2) may give them immortality in a utopia. This bias may cloud our calculations of existential risk.
  • Collaboration (246):
    • The race dynamic: The astronomical reward potential of being first to achieve SI is likely to lead to a race dynamic that favors an early, fast takeoff and reduces investment in the control problem. The race could also involve nations that are willing to resort to preemptive military strikes to gain advantage.
    • Benefits of collaboration:
      • Reduces haste
      • Allows greater investment in safety
      • Avoids violent conflict
      • Facilitates sharing of ideas about safety
      • Encourages equitable distribution of wealth resulting from SI. This is desirable not only from a moral standpoint but also from a selfish one, because the wealth created will be so large that owning even a tiny fraction will likely make one rich, while owning a very large fraction will likely saturate one’s use for the wealth.
      • Could assist with coordination problems post-transition
    • Achieving collaboration: Collaboration could take many forms, including small teams of enterprising individuals, large corporations, and states. Collaborations must be tightly controlled to avoid having their goals corrupted or usurped from within. This could mean employing only a small number of technical workers while taking high-level input from a much broader group. Early collaboration is preferred because late collaboration is less likely when there is a clear front-runner. Early collaboration may be difficult to achieve, but one path may be to espouse a common good principle, which could be voluntarily adopted at first and later put into law. The principle could contain a windfall clause requiring that profits in excess of some extremely large amount be distributed to all of humanity evenly.
  • Prioritization (255): Discoveries are valuable insofar as they make time-sensitive knowledge available earlier. After an intelligence explosion, all would-be human discoveries will be made by SI much faster than they would be otherwise, so many of them are less important for us to pursue now than we might think. But discoveries pertaining to solving the control problem must be made prior to an intelligence explosion. Two specific areas of research that we can prioritize on that topic are 1) strategic analysis of considerations that are crucial to it, and 2) building a support base that takes the problem seriously and promotes awareness of it.



Please think about nuclear weapons before voting for Trump

[Image: Castle Bravo, 15 megatons]

This is a hydrogen bomb, 1000 times more powerful than the Hiroshima bomb. Here is its immediate kill radius:

[Embedded map: Nuclear Darkness Simulator]

Right now the United States has around 1,400 hydrogen bombs loaded on missiles and bombers, ready to be launched unilaterally by the president at any time. Russia has around 1,800. Both countries have early warning radar systems that allow them to launch their own warheads as soon as another launch is detected — about a 15 minute decision window.

We tend to think of the president’s power as being limited by our system of checks and balances, but in this respect, it is not. There is no voting or veto system to counter a command from the president to launch a nuclear missile. When the president gives the command, the missile is launched less than five minutes later. The president of the United States really has the unchecked power to destroy the world as we know it during every second of his presidency.

Think about the Cold War for a minute. Now think about putting Donald Trump in charge of it.

We are at the brink of war with Russia in Syria. Donald Trump has suggested using nuclear weapons in Syria on multiple occasions, expressing the attitude that there is no point in having them if we don’t use them. The next president will be repeatedly forced to make nuanced decisions that could lead to nuclear strikes on American cities if they are not carefully thought out. Although Trump’s no-holds-barred attitude is often refreshing, we have to think about what it will mean in this context. It is hard to deny that Trump has a short temper and tends to lash out reflexively when insulted. What will President Trump do if an American plane goes down over Syria? What will he do if he receives uncertain intelligence that Syria or Iran — both Russian allies — have obtained a nuclear weapon?

I understand the reasons that Trump appeals to you — they appeal to me too. Washington is broken, the economy is bad, donations have corrupted our leaders. Things need to be shaken up, and Hillary Clinton won’t do that. But the alternative is giving Trump the trigger to America’s — and therefore the world’s — nuclear arsenal. This type of shaking things up is like telling your kids to unbuckle their seat belts and then crossing into the oncoming lanes of the freeway.

When you vote in this election, your vote will significantly affect the safety of your fellow citizens, including me and the people I love. If you are considering voting for Trump, I just ask that you give this issue some honest thought first.


Thoughts on some of California’s ballot propositions


There are some important measures on the California ballot this year. After reading through the voter pamphlet, here are my thoughts on a few, may they be of use to you.

64: Marijuana legalization — Yes

Legalizes recreational use of marijuana, establishes regulating agencies, imposes taxes, and erases existing marijuana convictions.

This may be the most important one. Although marijuana use will likely increase if this proposition passes, and in some respects this will be a bad thing, I think that the benefits will vastly outweigh the costs. The downsides: there will be more potheads, and there may be more traffic accidents (unless people substitute smoking for drinking). Compare that with some of the problems associated with marijuana criminalization:

  • destruction of young people’s lives via criminal records
  • mass murder through the illegal drug trade
  • institutional racism
  • prevention of reliable marijuana studies
  • waste of taxpayer money through prosecution
  • overcrowding of jails
  • introduction of users into the illegal market
  • prevention of an activity that improves many people’s lives

It seems clear to me that this proposition is on the right side of history. For some great case studies on how drug criminalization affects people, I recommend the recent book Chasing the Scream by Johann Hari.

63: Firearms restrictions — Yes

Requires background checks for ammunition, prohibits large magazines, requires ammunition vendors to have licenses, requires lost guns to be reported, prohibits people convicted of stealing guns from owning them.

It’s hard for me to see why people oppose this initiative. The main claims in the voter pamphlet are that it is a burden on gun owners and that it doesn’t do the right things to control gun violence. It seems absolutely reasonable that people who want to own objects intended to kill people should face a minor burden in order to help prevent bad actors from owning them. I agree that many other things should be done to reduce gun violence (such as training requirements), but that should not prevent us from ever doing anything.

62: Death penalty repeal — Yes

Removes the option of the death penalty in California and changes all existing death sentences to life without parole.

The printed arguments against this measure appeal almost exclusively to emotion. The strongest argument I can see against it is that the death penalty actually helps innocent defendants by automatically appealing their cases to the CA supreme court and by encouraging further legal challenges on their behalf. This results in a significantly higher percentage of exonerations in death sentence cases than life imprisonment cases. However, these appeals overwhelm the capability of the CA supreme court, extending the cases for decades and imposing litigation costs of $150M on the state annually. In addition, there are major problems with the humane administration of lethal injections, which have resulted in a stay on executions in CA for the past 10 years. Since 1978, only 15 of CA’s 930 death row inmates have been executed. The death penalty system in CA is a huge mess, and I don’t see a good way to fix it. If we are going to execute someone, I think we need to do our best to make sure that that person is guilty, and in order to do that we must allow the full appeal process to occur. Taking shortcuts to simplify the system and save money doesn’t seem appropriate to me, as discussed below for Prop 66, which attempts to do so.

66: Death penalty repair — No

Attempts to reduce the time and cost associated with administering the death penalty.

The sponsors of this measure are people who strongly believe in the death penalty and wish to fix the administrative problems with it while preserving it. The measure seems well intentioned and if I thought that it could work I might support it. However, I just don’t see a basic inefficiency in the way we administer the death penalty, so I don’t think there is a way to make it cheaper and faster without significantly increasing the risk of executing innocents. I think that if we value the death penalty, we need to be willing to pay the real costs of administering it prudently. We currently pay those costs, and they are greater than we can bear. Greater, it seems to me, than whatever abstract justice we might derive from the practice. This initiative proposes to fix the system by imposing time limits on the appeal process and reassigning cases from the CA supreme court to the appeals courts. I think that the former will increase the likelihood of false executions, and the latter will increase the cost of the practice by requiring an increase in the scope of the courts.

57: Criminal sentence reduction — Yes

Allows parole consideration for persons convicted of non-violent felonies once they have completed the full term for their primary offense. Also enables the Department of Corrections to more liberally award credits for good behavior and rehabilitation activity.

Opponents’ main argument is that the definition of “non-violent” here includes many crimes that most people would consider violent, including certain cases of assault, rape, and domestic violence. As far as I can tell, the term is somewhat poorly defined, but I don’t believe that the authors of the measure are trying to trick us. This measure will serve to alleviate a major problem that California has with overcrowding of prisons and the associated financial and social costs. I also believe that people should not be jailed for non-violent drug offenses, and this proposition would be a step in the right direction in that regard. The risk of a small minority of dangerous offenders being released early seems like a manageable one in order for us to make needed improvements to the criminal justice system, especially given that all early paroles will still require review and approval.
