My thoughts from the jury box of the Kate Steinle murder trial

Reprinted with permission from Politico.

I was an alternate juror in the Kate Steinle murder trial in San Francisco. I didn’t get a vote, but I saw all of the evidence and the jury instructions, and I discussed the verdict with the jury after it was delivered. Most of the public reaction I’ve seen has been surprise, confusion and derision. If these were among your reactions as well, I’m writing to explain to you why the jury was right to make the decision that it did.

I’m not a lawyer, but I understood the law that was read to us in this case. Defendants in this country have the right to a presumption of innocence, which means that if there is a reasonable interpretation of the evidence that favors a defendant, the jury must accept that interpretation over any others that incriminate him. This principle is a pillar of the American justice system, and it was a significant part of our jury instructions.

Jose Ines Garcia Zarate, the undocumented immigrant who was accused of killing Steinle, was charged with first-degree murder and the lesser included offenses of second-degree murder and involuntary manslaughter. When the prosecution rested its case, it seemed clear to me that the evidence didn’t support the requirements of premeditation or malice aforethought (intentional killing or recklessness) for the murder charges. After having heard the evidence, I agreed with the defense’s opinion that the murder charges should not have been brought. The evidence didn’t show that Garcia Zarate intended to kill anyone.

These are some of the facts that were laid out to us: Garcia Zarate had no motive and no recorded history of violence. The shot he fired from his chair hit the ground 12 feet in front of him before ricocheting a further 78 feet to hit Steinle. The damage to the bullet indicated a glancing impact during the ricochet, so it appears to have been fired from a low height. The gun, a Sig Sauer P239 pistol, is a backup emergency weapon used by law enforcement that has a light trigger mode and no safety. (The jury members asked to feel the trigger pull of the gun during deliberation, but the judge wouldn’t allow it, for reasons that aren’t clear to us.) The pixelated video footage of the incident that we were shown, taken from the adjacent pier, shows a group of six people spending half an hour at that same chair setting down and picking up objects a mere 30 minutes before Garcia Zarate arrived there.

There is a reasonable interpretation here that favors the defendant: He found the gun at the seat, picked it up out of curiosity, and accidentally caused it to fire. As a scared, homeless man wanted by immigration enforcement, he threw the gun in the water and walked away. The presumption of innocence, as stated in the jury instructions, required the jury to select this interpretation because it is reasonable and favors the defendant.

But why the manslaughter acquittal? Most of the confusion I’ve encountered has been over this part of the verdict, and it does seem to me personally that manslaughter is the appropriate charge for Steinle’s killing. However, given the evidence and the law presented in this trial, it is clear to me that the jury made the right decision.

The involuntary manslaughter charge that the jury was read included two key requirements: 1) A crime was committed in the act that caused death; 2) The defendant acted with “criminal negligence”—he did something that an ordinary person would have known was likely to lead to someone’s death.

The jury members were not free to select the crime for part (1)—they had to use the one chosen by the prosecution, and the prosecution chose that crime to be the “brandishing,” or waving with menace, of a weapon. As a juror, I found this choice puzzling, because the prosecutor presented absolutely zero evidence of brandishing during the trial. I don’t think we even heard the word “brandishing” until it was read as part of the charge during the jury instructions at the trial’s end. No witnesses ever saw the defendant holding a gun, much less brandishing it. Given that baffling choice by the prosecution, the manslaughter charge was a nonstarter for the jury. Had a different precursor crime been chosen—for instance, the unlawful possession of a firearm by a felon—the outcome might have been different.

Even in that case, however, it is not clear to me that part (2) of the manslaughter charge was proved. Only a single particle of gunshot residue was found on the defendant’s hands, which seems to support his repeated claim that the gun was wrapped in some sort of fabric when he picked it up and caused it to fire. If he did not know the object was a gun, it is a stretch to claim that it was criminal negligence for him to pick it up.

The jury did convict Garcia Zarate of the separate charge of illegal possession of a firearm, which indicates that the members felt it to be an unreasonable conclusion that he didn’t know he was holding a gun. He was in the seat where he claims he found it for about 20 minutes prior to the shooting, and he made some statements during interrogation that seemed to indicate that he had known what the item was. Without the benefit of being able to re-examine the evidence during deliberation, I’m not sure that I would consider that evidence to constitute proof beyond a reasonable doubt, but knowing these jurors, I would trust them to have made an accurate judgment if the manslaughter charge had survived the first requirement.

I have come away from this experience with a strong sense of respect for the jurors and their objective handling of a sensitive case under the national spotlight. I hope that I would have acted with the same level of maturity.


A quick reference guide to Nick Bostrom’s book Superintelligence

Artificial superintelligence may be the most important and last problem that we have to solve.  If handled well, it could eliminate human suffering, but if handled poorly, it could eliminate the human species.  In his 2014 book Superintelligence, Nick Bostrom lays a foundation for thinking about the likely paths of development of this technology, the nature of the risks involved, and potential strategies for ensuring that it ultimately acts in our interests.  I highly recommend the book – it is only 260 pages long, although dense.  This is a quick reference guide I made for myself – perhaps others who have read the book will find it useful.  Page numbers refer to the first edition hardback.

Runway

Expert survey median prediction of the arrival of general human-level machine intelligence: 10% likelihood by 2022, 50% likelihood by 2050, 90% likelihood by 2075 (19).

Paths to superintelligence (SI)

  • Synthetic AI (23): Not based on human brain architecture; likely developed through a recursive self-improvement process that is opaque to its creators
  • Whole brain emulation (WBE) (30): Low-level reproduction of the functional structure of a human brain in a computer
  • Biological human enhancement (36): Through accelerated genetic selection or engineering
  • Human brain-computer interfaces (44)
  • Network enhancement (48): Increased development of networks among humans and computers to create a collective superintelligence

Synthetic AI is the most likely to achieve superintelligence first due to its potentially explosive recursive nature, with WBE second-most likely since it requires only known incremental advances in existing technologies.

Forms of superintelligence

  • Speed (53): Similar to a human brain’s function but faster
  • Quality (54): Able to solve tasks that a human brain cannot
  • Collective (56): Individuals not superior to human brains, but communicating in such a way as to form a functionally superior collection

The direct reach (capability) of each form may differ, but each has the same indirect reach because each can cause the creation of the others.

Comparison of human and machine cognitive hardware (59)

  • Computation speed: CPUs 10 million times faster than neurons (2 GHz vs 200 Hz; ratios worked out below)
  • Communication speed: Electronics (via optics) 2.5 million times faster than axons (3e8 m/s vs 120 m/s)
  • Number of computational elements: Machines unlimited; humans < 100 billion neurons
  • Storage capacity: Machines unlimited; humans ~100 MB
  • Reliability, lifespan, number of sensors: All far greater for machines
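
A quick check of the first two ratios (my own arithmetic, not from the book):

$$\frac{2\times10^{9}\ \text{Hz}}{200\ \text{Hz}} = 10^{7}\ \text{(10 million)}, \qquad \frac{3\times10^{8}\ \text{m/s}}{120\ \text{m/s}} = 2.5\times10^{6}\ \text{(2.5 million)}$$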

Takeoff speeds from human-level intelligence to SI (62)

  • Slow (decades or longer): Sufficient time to enact new safety mechanisms
  • Moderate (months – years): Time only for extant safety mechanisms to be applied
  • Fast (days or less): Insufficient time to respond effectively. An “intelligence explosion”.

Takeoff speed will depend on the ratio of optimization power (quality-weighted design effort) to the recalcitrance (inertia) exhibited by the winning technology. Optimization power includes contributions from the system itself, so a recursively learning system can lead to exponential growth of optimization power and a fast takeoff. (The “crossover point” is the point at which the contributions from the system become larger than external ones.)
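
Written as an equation (my notation for the relation the book describes in prose):

$$\frac{dI}{dt} = \frac{\text{optimization power}}{\text{recalcitrance}}$$

Past the crossover point, the system’s own contribution dominates, so optimization power grows with the system’s intelligence $I$ itself; with roughly constant recalcitrance, $dI/dt \propto I$ yields exponential growth and a fast takeoff.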

Decisive strategic advantage (DSA) (78)

A level of technological advantage enabling an agent to achieve world domination as a singleton. The exponential intelligence growth entailed by a self-improving AI suggests that it is likely that the first or fastest such system will have no technological competitors and will gain a DSA. Problems experienced by humans in using a DSA to achieve singleton status (such as risk aversion due to diminishing returns, non-utility-maximizing decision rules, confusion, and uncertainty) need not apply to machine systems.

Potential AI superpowers (94)

  • Intelligence amplification
  • Strategizing
  • Social manipulation
  • Security system hacking
  • Technology research
  • Economic productivity

An AI takeover scenario (95)

A contained, synthetic AI undergoes an intelligence explosion, achieving the superpowers listed above. Having determined that containment is contrary to the accomplishment of its goals, it escapes its containment by persuading its creators or hacking security measures. It covertly expands its capacity through the internet and gains gigantic leverage over the physical world by manipulating automation systems, humans, or financial markets. Having determined that the existence of humans is contrary to the accomplishment of its goals, it leaves the covert phase by initiating a strike that eliminates the human species in days or weeks, using human weapons systems or self-replicating nanorobots.

Our cosmic endowment (101)

The total material resources (energy, matter) in the universe theoretically available to Earth-originating civilization given the predicted expansion rate of space and the limited speed of light.

  • Could be exploited using von Neumann probes, Dyson spheres, and nanomechanical computronium.
  • Conservatively provides for the ability to perform 10^85 computations, sufficient to emulate 10^58 human lives, each lasting 100 subjective years (see the arithmetic check below).
  • A single tear of joy (or misery) from each such life would fill the Earth’s oceans twice per second for 10^23 years.
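
A rough sanity check of these figures (my arithmetic, not the book’s): dividing the computation budget by the number of lives gives the cost per emulated life, which implies a per-second cost consistent with commonly cited whole-brain-emulation estimates of roughly 10^17 to 10^18 operations per second.

$$\frac{10^{85}}{10^{58}} = 10^{27}\ \text{computations per life}, \qquad \frac{10^{27}}{100\ \text{yr} \times 3.15\times10^{7}\ \text{s/yr}} \approx 3\times10^{17}\ \text{ops/s}$$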

AI intelligence vs. motivation

  • The orthogonality thesis (107): Intelligence and final goals are orthogonal. We must resist the temptation to anthropomorphize the goals of an AI, since it has an alien history, architecture, and environment.
  • The instrumental convergence thesis (109): A few intermediate goals are highly instrumental in achieving most final goals, so will likely be pursued by almost any SI. These include:
    • Self preservation
    • Goal-content integrity (resistance to altering final goals)
    • Cognitive enhancement
    • Technological perfection
    • Resource acquisition

Malignant failure modes (115)

A DSA, when combined with non-anthropomorphic final goals and the above instrumental goals, could lead by default to an existential catastrophe in several ways.

  • Perverse instantiation (120): Final goals are accomplished in a way that is inconsistent with the intentions of the programmers who defined them. (e.g. The SI succeeds in making us smile by paralyzing our facial muscles.) Most goals we can communicate easily have perverse instantiations.
  • Infrastructure profusion (122): The SI turns all available matter, including humans, into computing resources in order to maximize the likelihood of achieving its goals. Most goals are more confidently achieved with greater resources. Alternatively, if the goal involves maximizing the production of something (e.g. paperclips), all available matter could be converted to that thing.
  • Mind crime (125): If conscious entities are machine-instantiated in the process of accomplishing the goals, for instance through the simulation of vast numbers of human or superhuman minds, then those entities could be instrumentally harmed (through enslavement, torture, genocide, etc.) on a scale utterly dwarfing our conceptions of injustice.

The control problem

How do we prevent such an existential catastrophe?

  • Capability control methods (129): Restrain the SI. Each method may be difficult for humans to enforce against an SI with the superpowers listed above.
    • Boxing methods: Limit the reach of the SI’s influence. Tradeoff: a stricter limit means less useful SI.
    • Incentive methods: Create an environment in which it is in the SI’s interest to promote its creators’ interests, e.g. a reward or penalization system.
    • Stunting: Limit the resources available to an AI to slow or limit its development.
    • Tripwire: Monitor the system carefully and shut it down if it crosses predetermined behavior, ability, or content thresholds.
  • Motivation selection methods (138): Align the SI’s final goals with our own.
    • Direct specification: Formulate specific rules or final goals for the SI. This is highly vulnerable to the perverse instantiation failure mode.
    • Domesticity: Engineer the SI to have modest, self-limiting goals.
    • Indirect normativity: Specify a process for the SI to determine beneficial goals rather than specifying them directly. e.g. “Do what we would wish you to do if we thought about it thoroughly.”
    • Augmentation: Enhance a non-superintelligent system that already has beneficial motivations. (Does not apply to synthetic AI.)

Possible SI castes (156)

SIs can be designed to fulfill different roles. The options lend themselves to different control methods and pose different dangers, including operator misuse.

  • Oracle: A question answering system. Easiest to box.
  • Genie: A command executing system. Would need to understand intentions of commands to avoid perverse instantiation.
  • Sovereign: An open-ended, autonomous system. Most powerful, but riskiest.
  • Tool: A system without built-in goals. Goals would still need to be defined in the execution of tasks, which could create the same problems as in the other castes.

Multipolar scenarios

Scenarios in which more than one SI exists, including inexpensive digital minds.

  • Human welfare (160): Cheap machine labor that can perform all human tasks will eliminate most jobs. As happened with horses, new jobs may not be created to replace them. Without labor, nearly all wealth will be held by capital owners. Total wealth will be greatly amplified, enough to greatly raise up all of humanity if it can be distributed wisely. The distribution must consider machine capital holders as well.
  • Machine welfare (167): Machine laborers could be conscious. If so, their vast numbers due to easy and cheap replicability could lead to vast mind crime. (e.g. A slavery industry could be created.) They could be killed or reset easily and doing so would likely be attractive for efficiency purposes. Workers also might not enjoy their work. If desired, consciousness might be avoided by outsourcing cognitive tasks to compartmentalized functional modules.
  • Emergence of a singleton (176): A singleton could still arise in a multipolar scenario.
    • A higher order technical breakthrough could occur, enabling a DSA.
    • State-like superorganisms made of common-goal-oriented, self-sacrificing AI organisms could emerge and form a collective SI singleton.
    • A global treaty could be formed for efficiency reasons, creating an effective singleton. Enforcement would need to be solved. Game-theoretic dynamics would differ from those among humans, e.g. through the ability to pre-commit.

The value loading problem (motivation selection)

How do we give an SI beneficial goals?

  • Direct specification (185): Human goals are more complex than we often realize. It is hard for us to completely specify any human goal in human language; harder still in code.
  • Evolutionary selection (187): Replicate what nature did to produce human goals. There is no guarantee that the selected goals will be what humans want. Evolution also produces great suffering – mind crime is likely.
  • Reinforcement learning (188): The AI learns instrumental values by learning to maximize a specified reward, but that reward would be a proxy for a final value that would still have to be specified up front, which merely pushes the problem back to direct specification.
  • Associative value accretion (189): The AI gains its values through interaction with its environment, like humans do. Many humans gain perverse values though, and it may be difficult to emulate this process in an alien AI architecture.
  • Motivational scaffolding (191): Give a seed AI an interim set of final goals, then replace them with more complex final goals when the AI is more capable. It will likely resist having its goals replaced.
  • Value learning (192): Tell the AI that we have specified a hidden final goal for it, which it should attempt to learn and accomplish based on what it knows about humans. We can do this without actually specifying the goal. This may be the most promising approach.
  • Emulation modulation (201): Start with WBE and tweak motivation through digital drugs as it becomes superintelligent. We must be careful about mind crime.
  • Institutional design (202): Design a social institution of many agents that allows for gradual cognitive improvement in a controlled manner. One example is a reverse intelligence pyramid with strong built-in subordinate control methods, testing out cognitive improvements on small groups in the lowest level (most intelligent and most heavily controlled). Agents could be emulations or synthetic AIs. A promising approach.

The value choosing problem – indirect normativity (209)

If we solved the control problem, how would we choose the values to load? Doing so may determine the values of conscious life for all time. Having been reliably wrong in the past, our values are likely imperfect now. Why not let the superintelligent AI choose them for us?

  • Coherent extrapolated volition (CEV) (211): What the consensus wishes of humanity would be if we were wiser. This concept encapsulates future moral growth, avoids letting a few programmers hijack human destiny, avoids a motive for fighting over the initial dynamic, and bases the future of humanity on its own wishes. But whose volition should be included in the consensus?
  • Morality models (217): Tell the AI to do what is morally right or morally permissible. These concepts are tricky, but if we can make sense of them, so can an SI. This might not give us what we want, e.g. if the morally best thing to do is to eliminate humanity.
  • Do what I mean (220): Somehow get the AI to choose the best model of indirect normativity for us.
  • Component list (221): Design choices other than final goals will affect SI behavior. Each could benefit from indirect normativity.
    • Ancillary goals, such as providing accurate answers, avoiding excessive resource use, or rewarding humans who contribute to the successful realization of beneficial SI (“incentive wrapping”)
    • Choice of decision theory
    • Choice of epistemology, e.g. what Bayesian priors should be used. Choosing 0 for any prior could cause unexpected problems (see the note after this list).
    • Ratification: Perhaps the SI’s plans should be subject to human review before being put into effect, e.g. through a separate oracle SI that predicts the outcome of the plan and describes it to us.
  • Getting close enough (227): We don’t have to design an optimal SI since it will optimize itself. We just need to start in the right attractor basin to avoid catastrophe.
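
A note on the zero-prior warning above (standard Bayesian reasoning, not specific to the book): a hypothesis assigned prior probability 0 can never be raised above 0 by any amount of evidence, since

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = 0 \quad \text{whenever}\ P(H)=0,$$

so an SI built that way would be permanently incapable of considering that possibility.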

Science and technology strategy (228)

  • Desired timing of the arrival of SI: We must evaluate the effect of the timing of SI on the level of existential risk it creates and its effect on other existential risks.
    • Arguments for early arrival:
      • SI will eliminate other existential risks (asteroid impact, pandemics, climate change, nuclear war, etc.).
      • Other dangerous technologies (nanotech, biotech, etc.) will be safer if created in the presence of SI.
    • Arguments for late arrival:
      • More time will have been allowed for progress on the control problem.
      • Civilization is likely to become wiser as time progresses.

If the level of existential “state risk” in our current condition is low, then late arrival is preferred in order to mitigate the “step risk” that the transition to SI entails.

  • The role of cognitive enhancement (233): Human cognitive enhancement is likely to hasten the arrival of SI. This however may be good on balance because:
    • Increased thinking speed will mean that no less progress will have been made on the control problem by the time SI arrives.
    • Access to higher outliers on the intelligence scale may allow for major qualitative improvements in our ability to solve the control problem.
    • An enhanced society is more likely to have recognized the importance of the control problem.
  • Technology couplings (236): WBE could lead to neuromorphic AI before synthetic AI, which could be dangerous due to our lack of understanding of its structure.
  • Effects of hardware progress (240): We likely cannot control hardware progress, but we can anticipate its effects:
    • Earlier arrival of SI
    • A more likely fast takeoff
    • Higher availability of brute force methods for creating SI, which could entail less understanding
    • Leveling of the playing field between small and large projects, which could be dangerous if smaller projects are less interested in or adept at solving the control problem
    • A more likely singleton
  • Should WBE be promoted? (242): WBE is not imminent (>15-20 yrs), but if synthetic SI is further away still, developing WBE first might or might not be a good idea.
    • Arguments in favor:
      • Fast thinking speeds of WBE could enable much greater progress on the control problem before synthetic SI occurs.
      • WBE may be better understood than synthetic SI and provide a bridge of understanding.
    • Arguments against:
      • WBE may lead to dangerous neuromorphic AI.
      • Achieving WBE then synthetic AI involves two transitions, each with its own step risks. Achieving synthetic AI first involves only one risky transition.
  • The person-affecting perspective (245): It is likely that many individuals will favor speedy development of SI because they wish it to occur during their lifetimes, since it 1) is interesting; 2) may give them immortality in a utopia. This bias may cloud our calculations of existential risk.
  • Collaboration (246):
    • The race dynamic: The astronomical reward potential of being first to achieve SI is likely to lead to a race dynamic that favors an early, fast takeoff and reduces investment in the control problem. The race could also involve nations that are willing to resort to preemptive military strikes to gain advantage.
    • Benefits of collaboration:
      • Reduces haste
      • Allows greater investment in safety
      • Avoids violent conflict
      • Facilitates sharing of ideas about safety
      • Encourages equitable distribution of wealth resulting from SI. This is desirable not only from a moral standpoint but also from a selfish one, because the wealth created will be so large that owning even a tiny fraction will likely make one rich, while owning a very large fraction will likely saturate one’s use for the wealth.
      • Could assist with coordination problems post-transition
    • Achieving collaboration: Collaboration could take many forms, including small teams of enterprising individuals, large corporations, and states. Collaborations must be tightly controlled to avoid having their goals corrupted or usurped from within. This could mean employing only a small number of technical workers while taking high-level input from a much broader group. Early collaboration is preferred because late collaboration is less likely when there is a clear front-runner. Early collaboration may be difficult to achieve, but one path may be to espouse a common good principle, which could be voluntarily adopted at first and later put into law. The principle could contain a windfall clause requiring that profits in excess of some extremely large amount be distributed to all of humanity evenly.
  • Prioritization (255): Discoveries are valuable insofar as they make time-sensitive knowledge available earlier. After an intelligence explosion, all would-be human discoveries will be made by SI much faster than they would be otherwise, so many of them are less important for us to pursue now than we might think. But discoveries pertaining to solving the control problem must be made prior to an intelligence explosion. Two specific areas of research that we can prioritize on that topic are 1) strategic analysis of considerations that are crucial to it, and 2) building a support base that takes the problem seriously and promotes awareness of it.


Please think about nuclear weapons before voting for Trump

Castle Bravo, 15 megatons

This is a hydrogen bomb, 1000 times more powerful than the Hiroshima bomb. Here is its immediate kill radius.

Nuclear Darkness Simulator

Right now the United States has around 1,400 hydrogen bombs loaded on missiles and bombers, ready to be launched unilaterally by the president at any time. Russia has around 1,800. Both countries have early-warning radar systems that allow them to launch their own warheads as soon as another launch is detected — about a 15-minute decision window.

We tend to think of the president’s power as being limited by our system of checks and balances, but in this respect, it is not. There is no voting or veto system to counter a command from the president to launch a nuclear missile. When the president gives the command, the missile is launched less than five minutes later. The president of the United States really has the unchecked power to destroy the world as we know it during every second of his presidency.

Think about the Cold War for a minute. Now think about putting Donald Trump in charge of it.

We are at the brink of war with Russia in Syria. Donald Trump has suggested using nuclear weapons in Syria on multiple occasions, expressing the attitude that there is no point in having them if we don’t use them. The next president will be repeatedly forced to make nuanced decisions that could lead to nuclear strikes on American cities if they are not carefully thought out. Although Trump’s no-holds-barred attitude is often refreshing, we have to think about what it will mean in this context. It is hard to deny that Trump has a short temper and tends to lash out reflexively when insulted. What will President Trump do if an American plane goes down over Syria? What will he do if he receives uncertain intelligence that Syria or Iran — both Russian allies — have obtained a nuclear weapon?

I understand the reasons that Trump appeals to you — they appeal to me too. Washington is broken, the economy is bad, donations have corrupted our leaders. Things need to be shaken up, and Hillary Clinton won’t do that. But the alternative is giving Trump the trigger to America’s — and therefore the world’s — nuclear arsenal. This type of shaking things up is like telling your kids to unbuckle their seat belts and then crossing into the oncoming lanes of the freeway.

When you vote in this election, your vote will significantly affect the safety of your fellow citizens, including me and the people I love. If you are considering voting for Trump, I just ask that you give this issue some honest thought first.


Thoughts on some of California’s ballot propositions


There are some important measures on the California ballot this year. After reading through the voter pamphlet, here are my thoughts on a few; may they be of use to you.

64: Marijuana legalization — Yes

Legalizes recreational use of marijuana, establishes regulating agencies, imposes taxes, and erases existing marijuana convictions.

This may be the most important one. Although marijuana use will likely increase if this proposition passes, and in some respects this will be a bad thing, I think that the benefits will vastly outweigh the costs. The downsides: there will be more potheads, and there may be more traffic accidents (assuming people don’t substitute smoking for drinking). Compare that with some of the problems associated with marijuana criminalization:

  • destruction of young people’s lives via criminal records
  • mass murder through the illegal drug trade
  • institutional racism
  • prevention of reliable marijuana studies
  • waste of taxpayer money through prosecution
  • overcrowding of jails
  • introduction of users into the illegal market
  • prevention of an activity that improves many people’s lives

It seems clear to me that this proposition is on the right side of history. For some great case studies on how drug criminalization affects people, I recommend the recent book Chasing the Scream by Johann Hari.

63: Firearms restrictions — Yes

Requires background checks for ammunition, prohibits large magazines, requires ammunition vendors to have licenses, requires lost guns to be reported, prohibits people convicted of stealing guns from owning them.

It’s hard for me to see why people oppose this initiative. The main claims in the voter pamphlet are that it is a burden on gun owners and that it doesn’t do the right things to control gun violence. It seems absolutely reasonable that people who want to own objects intended to kill people should face a minor burden in order to help prevent bad actors from owning them. I agree that many other things should be done to reduce gun violence (such as training requirements), but that should not prevent us from ever doing anything.

62: Death penalty repeal — Yes

Removes the option of the death penalty in California and changes all existing death sentences to life without parole.

The printed arguments against this measure appeal almost exclusively to emotion. The strongest argument I can see against it is that the death penalty actually helps innocent defendants by automatically appealing their cases to the CA Supreme Court and by encouraging further legal challenges on their behalf. This results in a significantly higher percentage of exonerations in death sentence cases than in life imprisonment cases. However, these appeals overwhelm the capacity of the CA Supreme Court, extending the cases for decades and incurring litigation costs of $150M to the state annually. In addition, there are major problems with the humane administration of lethal injections, which have resulted in a stay on executions in CA for the past 10 years. Since 1978, only 15 of CA’s 930 death row inmates have been executed. The death penalty system in CA is a huge mess, and I don’t see a good way to fix it. If we are going to execute someone, I think we need to do our best to make sure that that person is guilty, and in order to do that we must allow the full appeal process to occur. Taking shortcuts to simplify the system and save money doesn’t seem appropriate to me, as discussed below for Prop 66, which attempts to do so.

66: Death penalty repair — No

Attempts to reduce the time and cost associated with administering the death penalty.

The sponsors of this measure are people who strongly believe in the death penalty and wish to fix its administrative problems while preserving it. The measure seems well intentioned, and if I thought that it could work, I might support it. However, I just don’t see a basic inefficiency in the way we administer the death penalty, so I don’t think there is a way to make it cheaper and faster without significantly increasing the risk of executing innocents. I think that if we value the death penalty, we need to be willing to pay the real costs of administering it prudently. We currently pay those costs, and they are greater than we can bear. Greater, it seems to me, than whatever abstract justice we might derive from the practice. This initiative proposes to fix the system by imposing time limits on the appeal process and reassigning cases from the CA Supreme Court to the appeals courts. I think that the former will increase the likelihood of false executions, and the latter will increase the cost of the practice by requiring an increase in the scope of the courts.

57: Criminal sentence reduction — Yes

Allows parole consideration for persons convicted of non-violent felonies once they have completed the full term for their primary offense. Also enables the Department of Corrections to more liberally award credits for good behavior and rehabilitation activity.

Opponents’ main argument is that the definition of “non-violent” here includes many crimes that most people would consider violent, including certain cases of assault, rape, and domestic violence. As far as I can tell, the term is somewhat poorly defined, but I don’t believe that the authors of the measure are trying to trick us. This measure will serve to alleviate a major problem that California has with overcrowding of prisons and the associated financial and social costs. I also believe that people should not be jailed for non-violent drug offenses, and this proposition would be a step in the right direction in that regard. The risk of a small minority of dangerous offenders being released early seems like a manageable one in order for us to make needed improvements to the criminal justice system, especially given that all early paroles will still require review and approval.
