Foundational concepts to understand AI: free online curricula

Summary

The Alignment Course is an online curriculum that presents materials in a useful order for learning the fundamentals.

The Governance Course covers similar material at the beginning, but then shifts toward governance (regulations, etc.).

In their first session, both courses include a video titled “How ChatGPT Works Technically.”

Robots illustrating AI (artificial intelligence).

Wisconsin AI Safety Initiative

Founded in Spring 2023, the Wisconsin AI Safety Initiative (waisi.org) aspires to serve as an incubator for high-impact careers promoting and facilitating the safe advancement of artificial intelligence.

The WAISI site offers local programs on two main subjects: AI Alignment (aligning AI strictly with human intention) and AI Governance (ensuring adherence to robust regulations).

While their local classes are closed, they link to online curricula whose materials, including a large number of videos, provide a fundamental understanding of AI, starting with neural networks and machine learning.

The early parts of the Alignment Course curriculum are well suited to learning the foundations of AI systems, starting with neural networks and machine learning fundamentals.

Alignment Course curriculum

The aim of the Alignment Course is to understand AI alignment and the extreme risks posed by misaligned AI. Below are the session titles with brief descriptions; see the course page for more details.

Session 0, Introduction to Machine Learning: Machine learning has gone through a revolution over the last decade: almost all cutting-edge systems now make use of deep learning.
Session 1, Artificial General Intelligence: Scaling up neural networks predictably leads to more powerful and general capabilities, and we’re not far away from being able to train networks with sizes comparable to human brains.
Session 2, Reward misspecification and instrumental convergence: Language models often hallucinate realistic false facts. Fine-tuning them using human feedback makes this less common, but also makes the hallucinations harder to distinguish from the truth.
Session 3, Goal misgeneralisation: Even without knowing which goals an agent will learn, we can predict some properties of its behavior which would be incentivized under many different goals.
Session 4, Task decomposition for scalable oversight: Scalable oversight refers to methods that enable humans to oversee AI systems.
Session 5, Adversarial techniques for scalable oversight: We can train models to tell us when other models are making mistakes, but right now they’re not always able to explain how they know that the mistakes are happening.
Session 6, Interpretability: By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks.
Session 7, Governance: Solving alignment technically is just part of the puzzle. Governance issues around the development and deployment of AGI will need to be solved too.
Session 8, Agent foundations: The theoretical foundations of the field of machine learning break in a number of ways when we use them to describe real-world agents.
Session 9, Careers and Projects: AI safety is a young field with few legible opportunities [..] main objective is to set some time aside to think about your career and goals […]

Governance Course curriculum

Some of the early sessions share materials with the Alignment Course; for example, Session 1 is dedicated to an “Introduction to AI and Machine Learning.” The courses later diverge in focus.

Session 1, Introduction to AI and Machine Learning: introduces the technical basics of machine learning, which is the dominant approach to AI.
Session 2, Introduction to potential catastrophic risks from AI: AI could contribute to global-scale misuse, conflict, and accidents.
Session 3, Challenges in achieving AI safety: focuses on why AI safety may be challenging to achieve, even if most AI developers act responsibly.
Session 4, AI standards and regulations: addresses risks of catastrophic AI accidents and misuse by regulating industrial-scale AI development.
Session 5, Closing regulatory gaps through non-proliferation: AI developers in countries that do not set adequate guardrails could still cause global damage.
Session 6, Closing regulatory gaps through international agreements: a cooperation-oriented approach in which states could establish international agreements on AI safety regulations.
Session 7, Additional proposals: additional (often complementary) governance proposals, still with a focus on navigating the impacts of increasingly advanced AI.
Session 8, Career Advice: What kind of work is happening in this ecosystem, and, if you’d like to contribute, how can you do so?
Session 9, Projects: an opportunity for you to work on and develop a project that’s relevant to AI governance. There are four project tracks […]

Image credits: illustrations assembled from the Wisconsin AI Safety Initiative, waisi.org.