Artificial intelligence (AI) safety encompasses the research, strategies, and policies aimed at ensuring that AI systems are reliable, aligned with human values, and do not cause serious harm. While the field traditionally addresses both immediate risks (e.g., algorithmic bias and system reliability) and longer-term risks, such as questions of AI alignment and existential threats to humanity, the dominant discourse reflects a distinctly…
Article Source
https://www.brookings.edu/articles/a-new-writing-series-re-envisioning-ai-safety-through-global-majority-perspectives/