---
title: Carl Shulman’s views on AI safety
format: markdown
categories: AI_safety
...
|Topic|View|
|-----------------|------------------------------------------------------------|
|AI timelines|see [this comment](http://lesswrong.com/lw/gfb/update_on_kim_suozzi_cancer_patient_in_want_of/8btm)|
|Value of highly reliable agent design (e.g. decision theory, logical uncertainty) work||
|Value of intelligence amplification work|See comments like [this one](http://lesswrong.com/lw/384/genetically_engineered_intelligence/32jj)|
|Value of pushing for whole brain emulation|See [this report](https://intelligence.org/files/SS11Workshop.pdf). He gives some points against in a comment starting with "However, the conclusion that accelerating WBE (presumably via scanning or neuroscience, not speeding up Moore's Law type trends in hardware) is the best marginal project for existential risk reduction is much less clear."[^wbe_unclear] See also [this comment](http://lesswrong.com/lw/1s3/hedging_our_bets_the_case_for_pursuing_whole/1p8e) and [this one](http://lesswrong.com/lw/1s3/hedging_our_bets_the_case_for_pursuing_whole/1oyu). "The type of AI technology: whole brain emulation looks like it could be relatively less difficult to control initially by solving social coordination problems, without developing new technology, while de novo AGI architectures may vary hugely in the difficulty of specifying decision algorithms with needed precision".[^type_of_ai_tech]|
|Difficulty of AI alignment||
|Shape of takeoff/discontinuities in progress||
|Type of AI safety work most endorsed||
|How "prosaic" AI will be||
|How well we need to understand philosophy before building AGI|Some discussion in [this thread](http://lesswrong.com/lw/i7p/how_does_miri_know_it_has_a_medium_probability_of/9iq6) and in [this comment](http://lesswrong.com/r/discussion/lw/e05/friendly_ai_and_the_limits_of_computational/765y).|
|Kind of AGI we will have first (de novo, neuromorphic, WBE, etc.)|See [this comment](http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/1b6o), [this comment](http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/35hj), and [this comment](http://lesswrong.com/lw/8ld/against_wbe_whole_brain_emulation/5bvb). [This comment](http://lesswrong.com/lw/6dr/discussion_yudkowskys_actual_accomplishments/4faq) is also on this topic, though it does not necessarily reflect Carl's views.|
[^wbe_unclear]: [“CarlShulman comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future”](http://lesswrong.com/lw/1s3/hedging_our_bets_the_case_for_pursuing_whole/1oyg). LessWrong. Retrieved March 8, 2018.
[^type_of_ai_tech]: [“CarlShulman comments on Safety Culture and the Marginal Effect of a Dollar”](http://lesswrong.com/lw/634/safety_culture_and_the_marginal_effect_of_a_dollar/4bnq). LessWrong. Retrieved March 9, 2018.