<!-- Complex Systems -->
<h1 id="overview">5.1 Overview</h1>
<p>Artificial intelligence systems, and the societies they operate within, belong to the class of <em>complex systems</em>. This fact has significant implications for how we think about and ensure AI safety. Complex systems exhibit surprising behaviors and defy conventional analysis methods that examine their individual components in isolation. To develop effective strategies for AI safety, it is crucial to adopt holistic approaches that account for the unique properties of complex systems and enable us to anticipate and address AI risks.</p>
<p>This chapter begins by elucidating the qualitative differences between complex systems and simple ones. After describing standard analysis techniques based on mechanistic or statistical approaches, it demonstrates their limitations in capturing the essential characteristics of complex systems and provides a concise definition of complexity. The “Hallmarks of Complex Systems” section then explores seven hallmarks of complexity and shows how deep learning models exemplify each of them.</p>
<p>Next, the “Social Systems as Complex Systems” section shows how various human organizations also satisfy our definition of complex systems. In particular, it explores how the hallmarks of complexity materialize in two examples of social systems that are pertinent to AI safety: the corporations and research institutes pursuing AI development, and the decision-making structures responsible for implementing policies and regulations. For the latter, the section considers how advocacy efforts are shaped by the complex nature of political systems and the broader social context.</p>
<p>Having established that deep learning systems and the social systems surrounding them are best described as complex systems, the chapter turns to what this means for AI safety. The “General Lessons” section derives five lessons from the chapter’s examination of complex systems and sets out their implications for how risks might arise from AI. The “Puzzles, Problems, and Wicked Problems” section then reframes the contrast between simple and complex systems in terms of the different kinds of problems the two categories present, and the distinct styles of problem-solving they require.</p>
<p>By examining the unintended side effects that often arise from
interfering with complex systems, the “Challenges with Interventionism”
section illustrates the necessity of developing comprehensive approaches
to mitigating AI risks. Finally, the “Systemic Issues” section outlines
a method for thinking holistically and identifying more effective,
system-level solutions that address broad systemic issues, rather than
merely applying short-term “quick fixes” that superficially address
symptoms of problems.</p>