HPS FOR AI SAFETY
A collection of AI safety posts written from a history and philosophy of science (HPS) perspective.
An Epistemological Account of Intuitions in Science
Eleni_A · 2y ago · 20m read · 5 karma · 0 comments
Alignment is hard. Communicating that, might be harder
Eleni_A · 2y ago · 4m read · 17 karma · 1 comment
5
"Normal accidents" and AI systems
Eleni_A
Eleni_A
+ 0 more
·
2y
ago
· 1m read
1
1
It's (not) how you use it
Eleni_A · 2y ago · 3m read · 6 karma · 3 comments
Alignment's phlogiston
Eleni_A · 2y ago · 2m read · 18 karma · 1 comment
Who ordered alignment's apple?
Eleni_A · 2y ago · 4m read · 5 karma · 0 comments
There is no royal road to alignment
Eleni_A · 2y ago · 3m read · 18 karma · 2 comments
Against the weirdness heuristic
Eleni_A · 2y ago · 2m read · 5 karma · 0 comments
Cognitive science and failed AI forecasts
Eleni_A · 2y ago · 2m read · 13 karma · 0 comments
Emerging Paradigms: The Case of Artificial Intelligence Safety
Eleni_A · 2y ago · 22m read · 16 karma · 0 comments