Committed to advancing Frontier AI Safety by conducting high-impact research.

AI Safety as a Sociotechnical Challenge

The development of AI Safety is fundamentally a sociotechnical challenge: it requires the alignment of AI systems with human values and intentions through both technical innovation and a deep understanding of the social contexts in which these systems operate. Addressing AI risks involves more than just managing advanced capabilities; it necessitates careful consideration of how these technologies interact with human behaviors, social structures, cultural norms, and existing power dynamics.

Our mission is to develop rigorous techniques for preventing misaligned behavior, evaluating AI systems, and building societal confidence in their robustness.

AI Safety

AI Alignment

AI systems are rapidly becoming more capable and more general. Despite AI’s potential to radically improve human society, there are still open questions about how we build AI systems that are controllable, interpretable, and aligned with both our values and intentions.

AI X-Risks

AI capabilities are developing at an exponential rate, raising concerns about their long-term implications. We focus on risks with the potential to cause significant harm on a global scale, affecting millions or even billions of people. If these scenarios materialize, humanity may not have a second chance.

AI Governance

The implementation of policies, regulations, and standardized practices is equally essential to address risks such as bias, misuse, and unintended consequences. Public bodies are facing increasingly difficult decisions about how to respond to these challenges. Our goal is to support these organizations by providing insightful research and guidance.

AI Systems Evaluations

How can we assess an AI system’s safety through interaction? We recognize the critical and challenging nature of evaluating AI systems, particularly in the era of frontier multimodal models. We believe in the need for a science of evaluations and in the benefits of making it more accessible.

Diverse. Interdisciplinary. Independent.

Established in Argentina, FAIR is an integral part of the Laboratory of Innovation and Artificial Intelligence of the University of Buenos Aires (UBA IALAB). As a unit of a Latin American public university, we hold intellectual independence as a core value. We foster the inclusion of a wide range of perspectives and expertise, driving innovation and striving to address the most serious concerns about AI risks from a diverse global community. Addressing these challenges necessarily requires an interdisciplinary approach.

Subscribe

Enter your email below to receive updates.

Contact us

victoriacarro@ialab.com.ar

Av. Figueroa Alcorta 2263, CABA, Argentina