---
title: AI safety
format: markdown
categories: AI_safety Cause_areas existential-risks
...
**Note**: As of late 2017/2018, Issa is working on [AI Watch](https://aiwatch.issarice.com/), which is partly motivated by cause prioritization.
# Summary
Importance
: \<importance rating\>
Tractability
: \<tractability rating\>
Neglectedness
: \<neglectedness rating\>
<!-- keywords: ai siai miri singularity eliezer yudkowsky agi
asi fai friendly ai
-->
[AI Impacts](http://www.aiimpacts.org/) is an informational site that "aims to improve our understanding of the likely impacts of human-level artificial intelligence".
The main EA organization working in this field seems to be [MIRI](http://intelligence.org/), which does relevant math research and sponsors forecasting projects like [AI Impacts](http://www.aiimpacts.org/).
The Future of Humanity Institute (FHI) might also have more information.
[*Superintelligence: Paths, Dangers, Strategies*](http://www.amazon.com/dp/0199678111/ref=cm_sw_su_dp) by [Nick Bostrom](http://www.nickbostrom.com/) lays out a foundation for navigating scenarios where machine brains surpass human brains in general intelligence.
- [Artificial General Intelligence](http://www.scholarpedia.org/article/Artificial_General_Intelligence)
- [How large do you think the first Strong AI will be \(lines of code, servers, etc.\)? • /r/artificial](https://www.reddit.com/r/artificial/comments/2qyg8j/how_large_do_you_think_the_first_strong_ai/cnathbn)
- [Artificial Intelligence • /r/artificial](https://www.reddit.com/r/artificial/)
- [Comparative Table of Cognitive Architectures \(started on October 27, 2009; last update: June 18, 2012\)](http://bicasociety.org/cogarch/architectures.htm)
# Importance
FIXME
# Tractability
FIXME
# Neglectedness
FIXME
# Organizations
- [Center for Human-Compatible Artificial Intelligence](https://humancompatible.ai/)
- [Center for Security and Emerging Technology](https://cset.georgetown.edu/about-us/)
- [Future of Humanity Institute](https://www.fhi.ox.ac.uk/)
- [Future of Life Institute](http://futureoflife.org/)
- [Human-Centered Artificial Intelligence, Stanford University](https://hai.stanford.edu/)
- [Machine Intelligence Research Institute](https://intelligence.org/)
- [OpenAI](https://en.wikipedia.org/wiki/OpenAI)
- [Median Group](http://mediangroup.org/)
- [Modeling Cooperation](http://modelingcooperation.com/)
- [Meta Ethical AI](http://www.metaethical.ai/)
# Events
**On AI Safety:**
- [AI Safety](https://www.ai-safety.org/)
- [SafeAI](https://safeai.webs.upv.es/)
- [Workshop on Artificial Intelligence Safety Engineering \(WAISE\)](https://www.waise.org/)
**Related events:**
- [International Joint Conference on Artificial Intelligence](https://www.ijcai19.org/)
# Grants
- [Berkeley Existential Risk Initiative: Individual Grants Program](http://existence.org/individual-grants/)
# See also
- [AI strategy]()
- [Views on AI safety]()
- [Simulation hypothesis]()
# External links
- [General resources on AI safety](https://www.facebook.com/groups/aisafetyopen/permalink/263224891047211/)
- [Artificial General Intelligence: Coordination & Great Powers](https://foresight.org/wp-content/uploads/2018/11/AGI-Coordination-Geat-Powers-Report.pdf)
- [AI alignment landscape by Paul Christiano](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38)
- Some relevant timelines:
- [Timeline of AI safety](https://timelines.issarice.com/wiki/Timeline_of_AI_safety)
- [Timeline of Machine Intelligence Research Institute](https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute)
- [Timeline of Center for Applied Rationality](https://timelines.issarice.com/wiki/Timeline_of_Center_for_Applied_Rationality)
- [Timeline of Berkeley Existential Risk Initiative](https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative)
- [Timeline of Future of Humanity Institute](https://timelines.issarice.com/wiki/Timeline_of_Future_of_Humanity_Institute)
- [Timeline of Foundational Research Institute](https://timelines.issarice.com/wiki/Timeline_of_Foundational_Research_Institute)
- [Timeline of OpenAI](https://timelines.issarice.com/wiki/Timeline_of_OpenAI)
- [Carl Shulman comments on How does MIRI Know it Has a Medium Probability of Success?](http://lesswrong.com/lw/i7p/how_does_miri_know_it_has_a_medium_probability_of/9i5b)
- [Don’t Worry, Smart Machines Will Take Us With Them](http://nautil.us/issue/28/2050/dont-worry-smart-machines-will-take-us-with-them)
- Jeff Kaufman's posts on AI safety:
- ["Looking into AI Risk"](http://www.jefftk.com/p/looking-into-ai-risk)
- ["Superintelligence Risk Project"](http://www.jefftk.com/p/superintelligence-risk-project)
- ["Conversation with Dario Amodei"](http://www.jefftk.com/p/conversation-with-dario-amodei)
- ["Conversation with Michael Littman"](http://www.jefftk.com/p/conversation-with-michael-littman)
- ["Superintelligence Risk Project Update"](http://www.jefftk.com/p/superintelligence-risk-project-update)
- [AI Index 2018 Annual Report](http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf)