
Numerically Linearizing a Dynamic System
In this video we show how to linearize a dynamic system using numerical techniques. In other words, the linearization process does not require an analytical description of the system. This...
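Not from the video itself — a minimal Python sketch of the idea, assuming a continuous-time model x_dot = f(x, u): central-difference Jacobians about an operating point (x0, u0) give the linearized A = df/dx and B = df/du without any analytical derivation.

    import numpy as np

    def linearize(f, x0, u0, eps=1e-6):
        """Central-difference Jacobians of x_dot = f(x, u) about (x0, u0)."""
        n, m = len(x0), len(u0)
        A = np.zeros((n, n))
        B = np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n)
            dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        for j in range(m):
            du = np.zeros(m)
            du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        return A, B

    # Hypothetical example: pendulum with state [theta, theta_dot] and torque input
    f = lambda x, u: np.array([x[1], -9.81 * np.sin(x[0]) + u[0]])
    A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))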
Teaching resources for a reinforcement learning course
Teaching resources by Dimitri P. Bertsekas for reinforcement learning courses. The website has links to freely available textbooks (for instructional purposes), video lectures, and course...
Data-Driven Control: The Goal of Balanced Model Reduction
In this lecture, we discuss the overarching goal of balanced model reduction: identifying the key states that are most jointly controllable and observable, to capture the most input-output...
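As a rough companion sketch (my own, not the lecture's code), the python-control package exposes the pieces involved: the two Gramians, the Hankel singular values that rank joint controllability and observability, and a balanced-truncation reduced model.

    import numpy as np
    import control  # python-control; assumed available

    # Hypothetical stable 10-state system with one input and one output
    np.random.seed(0)
    A = np.random.randn(10, 10)
    A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(10)   # shift eigenvalues to make A stable
    B = np.random.randn(10, 1)
    C = np.random.randn(1, 10)
    sys = control.ss(A, B, C, np.zeros((1, 1)))

    Wc = control.gram(sys, 'c')    # controllability Gramian
    Wo = control.gram(sys, 'o')    # observability Gramian
    hsv = control.hsvd(sys)        # Hankel singular values = sqrt(eig(Wc @ Wo))
    rsys = control.balred(sys, 3)  # keep the 3 most jointly controllable/observable states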
Singular Value Decomposition (SVD): Dominant Correlations
This lecture discusses how the SVD captures dominant correlations in a matrix of data.
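One way to see this (a quick numpy check, not from the lecture): the left singular vectors of a data matrix X are eigenvectors of the correlation matrix X X^T, with eigenvalues equal to the squared singular values; the right singular vectors play the same role for X^T X.

    import numpy as np

    X = np.random.randn(50, 200)                 # 50 measurements x 200 snapshots
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    eigvals, eigvecs = np.linalg.eigh(X @ X.T)   # correlation matrix of the rows
    print(np.allclose(np.sort(S**2), np.sort(eigvals)))   # True up to round-off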
Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 8 - Policy Gr...
Taught by Professor Emma Brunskill, Assistant Professor of Computer Science, Stanford University (Stanford AI for Human Impact Lab; Stanford Artificial Intelligence Lab; Statistical Machine Learning Group).
RL Course by David Silver - Lecture 2: Markov Decision Process
Explores Markov processes, including reward processes, decision processes, and extensions.
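For the Markov reward process piece, the Bellman equation V = R + gamma * P * V can be solved in closed form for small state spaces — a tiny illustration (mine, not the lecture's):

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])      # state-transition matrix
    R = np.array([1.0, -0.5])       # expected reward on leaving each state
    gamma = 0.9
    V = np.linalg.solve(np.eye(2) - gamma * P, R)   # V = (I - gamma*P)^{-1} R
    print(V)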
Euler Angles and the Euler Rotation Sequence
In this video we discuss how Euler angles are used to define the relative orientation of one coordinate frame to another.
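A small sketch of the idea (assuming a 3-2-1 yaw-pitch-roll sequence, which is only one of several conventions; scipy is my choice here, not the video's):

    import numpy as np
    from scipy.spatial.transform import Rotation

    yaw, pitch, roll = 30.0, 10.0, -5.0                         # degrees
    R = Rotation.from_euler('ZYX', [yaw, pitch, roll], degrees=True)

    C = R.as_matrix()              # matrix of the composed rotation
    v = np.array([1.0, 0.0, 0.0])
    v_rotated = C @ v              # v rotated through the sequence, in the original frame
    v_in_new_frame = C.T @ v       # the fixed vector v resolved in the rotated frame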
Data-Driven Control: Balanced Proper Orthogonal Decomposition
In this lecture, we introduce balanced proper orthogonal decomposition (BPOD) to approximate balanced truncation for high-dimensional systems.
Extremum Seeking Control: Challenging Example
This lecture explores the use of extremum-seeking control (ESC) to solve a challenging control problem with a right-half-plane zero.
The Frobenius Norm for Matrices
This video describes the Frobenius norm for matrices as related to the singular value decomposition (SVD).
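The key identity is that the squared Frobenius norm equals the sum of the squared singular values; a short numpy check (my own illustration):

    import numpy as np

    A = np.random.randn(6, 4)
    sigma = np.linalg.svd(A, compute_uv=False)
    print(np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(sigma**2))))   # True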
Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 1 - Introduct...
Taught by Professor Emma Brunskill, Assistant Professor of Computer Science, Stanford University.
RL Course by David Silver - Lecture 7: Policy Gradient Methods
Looks at different policy gradient methods, including finite-difference, Monte-Carlo, and actor-critic approaches.
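As a toy illustration of the Monte-Carlo (REINFORCE) flavour — not code from the course — here is a softmax policy on a two-armed bandit, updated with the sampled return times the score function:

    import numpy as np

    np.random.seed(0)
    true_means = np.array([0.2, 0.8])      # the second arm pays more on average
    theta = np.zeros(2)                    # softmax policy parameters
    alpha = 0.1                            # learning rate

    for _ in range(2000):
        probs = np.exp(theta) / np.sum(np.exp(theta))
        a = np.random.choice(2, p=probs)
        r = np.random.normal(true_means[a], 0.1)   # sampled return
        grad_logp = -probs
        grad_logp[a] += 1.0                        # gradient of log pi(a | theta)
        theta += alpha * r * grad_logp             # REINFORCE update

    print(np.exp(theta) / np.sum(np.exp(theta)))   # policy now prefers the better arm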
Data-Driven Control: Change of Variables in Control Systems (Correction)
This video corrects a typo in the previous lecture.
Koopman Spectral Analysis (Multiscale systems)
In this video, we discuss recent applications of data-driven Koopman theory to multi-scale systems.
Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 10 - Policy G...
Taught by Professor Emma Brunskill, Assistant Professor of Computer Science, Stanford University.
Smart Projectile State Estimation Using Evidence Theory
This journal article provides a very good practical understanding of Dempster-Shafer theory using sensor fusion and state estimation as the backdrop.
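The core operation in evidence theory is Dempster's rule of combination; a toy two-sensor example over a frame {A, B} (my illustration, not taken from the article):

    from itertools import product

    frame = frozenset({'A', 'B'})
    m1 = {frozenset({'A'}): 0.6, frame: 0.4}                         # sensor 1 mass assignment
    m2 = {frozenset({'A'}): 0.3, frozenset({'B'}): 0.5, frame: 0.2}  # sensor 2 mass assignment

    combined, conflict = {}, 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                  # mass falling on the empty set
    combined = {k: v / (1.0 - conflict) for k, v in combined.items()}
    print(combined)                              # {A}: 0.6, {B}: ~0.286, {A,B}: ~0.114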
The Navigation Equations: Computing Position North, East, and Down
In this video we show how to compute the inertial velocity of a rigid body in the vehicle-carried North, East, Down (NED) frame. This is achieved by rotating the velocity expressed in the...
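A minimal sketch of that rotation step (assuming a 3-2-1 Euler sequence and using scipy; the video's exact conventions and notation may differ):

    import numpy as np
    from scipy.spatial.transform import Rotation

    def ned_velocity(v_body, roll, pitch, yaw):
        """Rotate a body-frame velocity into the vehicle-carried NED frame."""
        C_body_to_ned = Rotation.from_euler('ZYX', [yaw, pitch, roll]).as_matrix()
        return C_body_to_ned @ v_body

    # Forward-Euler integration of NED position from a constant body-frame velocity
    dt, p_ned = 0.01, np.zeros(3)
    v_body = np.array([50.0, 0.0, 0.0])          # 50 m/s along the body x-axis
    for _ in range(1000):
        p_ned += ned_velocity(v_body, 0.0, np.deg2rad(2.0), np.deg2rad(45.0)) * dt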
Manipulating Aerodynamic Coefficients
In this video we discuss some potential problems you may encounter when attempting to perform operations with dimensionless aerodynamic coefficients such as CL and CD.
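One recurring pitfall is mixing coefficients that were non-dimensionalized with different reference quantities; a small sketch of the dimensionalization itself (illustrative numbers only, not from the video):

    def dimensionalize(C, rho, V, S):
        """Convert a dimensionless force coefficient to a force in newtons."""
        q_bar = 0.5 * rho * V**2        # dynamic pressure (Pa)
        return q_bar * S * C

    # CL and CD from two sources are only comparable if they share the same
    # reference area (and, for moment coefficients, the same reference length).
    CL, CD = 0.8, 0.05
    rho, V, S = 1.225, 60.0, 16.0       # sea-level density (kg/m^3), airspeed (m/s), wing area (m^2)
    lift = dimensionalize(CL, rho, V, S)
    drag = dimensionalize(CD, rho, V, S)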
Data-Driven Control: Observer Kalman Filter Identification
In this lecture, we introduce the observer Kalman filter identification (OKID) algorithm. OKID takes natural input-output data from a system and estimates the impulse response, for later...
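A much-simplified stand-in (not full OKID — it omits the observer that handles lightly damped systems) is a least-squares fit of the first few Markov parameters from input-output data:

    import numpy as np

    def markov_parameters(u, y, L):
        """Least-squares estimate of the first L impulse-response samples
        of a SISO system from input-output records u, y of equal length."""
        N = len(u)
        U = np.zeros((N, L))
        for j in range(L):
            U[j:, j] = u[:N - j]        # column j is the input delayed by j samples
        h, *_ = np.linalg.lstsq(U, y, rcond=None)
        return h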
Randomized SVD: Power Iterations and Oversampling
This video discusses the randomized SVD and how to make it more accurate with power iterations (multiple passes through the data matrix) and oversampling.
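A compact numpy version of the algorithm with both knobs exposed (my sketch of the standard randomized range finder, not the video's code):

    import numpy as np

    def randomized_svd(X, r, q=2, p=10):
        """Randomized SVD of X: target rank r, q power iterations, p oversampling columns."""
        P = np.random.randn(X.shape[1], r + p)   # random test matrix (oversampled)
        Z = X @ P
        for _ in range(q):                       # power iterations sharpen the spectrum
            Z = X @ (X.T @ Z)
        Q, _ = np.linalg.qr(Z)                   # orthonormal basis for the range of X
        Uy, S, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
        return (Q @ Uy)[:, :r], S[:r], Vt[:r, :]

    X = np.random.randn(1000, 300)
    U, S, Vt = randomized_svd(X, r=10)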
RL Course by David Silver - Lecture 1: Introduction to Reinforcement Learnin...
Introduces reinforcement learning (RL), with an overview of agents and some classic RL problems.
Computing Euler Angles: The Euler Kinematical Equations and Poisson’s Kinema...
In this video we discuss how the time rate of change of the Euler angles is related to the angular velocity vector of the vehicle. This allows us to design an algorithm to consume...
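For the common 3-2-1 sequence the Euler kinematical equations take the familiar form below; the sketch propagates attitude from body angular rates (assumed gyro-like values, not data from the video):

    import numpy as np

    def euler_rates(phi, theta, p, q, r):
        """Map body angular rates (p, q, r) to Euler angle rates for a 3-2-1
        sequence; singular where cos(theta) = 0."""
        phi_dot = p + (q * np.sin(phi) + r * np.cos(phi)) * np.tan(theta)
        theta_dot = q * np.cos(phi) - r * np.sin(phi)
        psi_dot = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
        return phi_dot, theta_dot, psi_dot

    # Simple forward-Euler attitude propagation
    phi = theta = psi = 0.0
    dt = 0.01
    for _ in range(100):
        p, q, r = 0.1, 0.05, 0.02                # body rates in rad/s
        dphi, dtheta, dpsi = euler_rates(phi, theta, p, q, r)
        phi, theta, psi = phi + dphi * dt, theta + dtheta * dt, psi + dpsi * dt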
Using Antenna Toolbox with Phased Array Systems
When you create antenna arrays such as a uniform linear array (ULA), you can use antennas that are built into Phased Array System Toolbox™. Alternatively, you can use Antenna Toolbox™...
Data-Driven Control: Balanced Models with ERA
In this lecture, we connect the eigensystem realization algorithm (ERA) to balanced proper orthogonal decomposition (BPOD). In particular, if enough data is collected, then ERA produces...
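A bare-bones ERA sketch (mine, following the textbook construction rather than any code from the lecture): stack the impulse-response samples into two block Hankel matrices, take an SVD, and read off a reduced discrete-time realization.

    import numpy as np

    def era(Y, r):
        """Eigensystem realization from impulse-response data.

        Y: Markov parameters, shape (nsamples, ny, nu), with Y[0] = D.
        r: reduced model order. Returns discrete-time (A, B, C)."""
        ns, ny, nu = Y.shape
        m = (ns - 1) // 2
        H0 = np.block([[Y[i + j + 1] for j in range(m)] for i in range(m)])
        H1 = np.block([[Y[i + j + 2] for j in range(m)] for i in range(m)])
        U, S, Vt = np.linalg.svd(H0, full_matrices=False)
        Ur, Vr = U[:, :r], Vt[:r, :].T
        S_sqrt = np.diag(np.sqrt(S[:r]))
        S_isqrt = np.diag(1.0 / np.sqrt(S[:r]))
        A = S_isqrt @ Ur.T @ H1 @ Vr @ S_isqrt
        B = (S_sqrt @ Vr.T)[:, :nu]
        C = (Ur @ S_sqrt)[:ny, :]
        return A, B, C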
SVD and Optimal Truncation
This video describes how to truncate the singular value decomposition (SVD) for matrix approximation.
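The rank-r truncation keeps the first r singular values and vectors; by the Eckart-Young theorem it is the best rank-r approximation in both the 2-norm and the Frobenius norm (quick numpy check, my own):

    import numpy as np

    X = np.random.randn(100, 60)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    r = 10
    X_r = U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]
    print(np.linalg.norm(X - X_r, 2), S[r])   # 2-norm error equals the (r+1)-th singular value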