
HARDWARE ACCELERATORS FOR MACHINE LEARNING (CS 217)


STANFORD UNIVERSITY, WINTER 2023

Bespoke and Customized


This course provides in-depth coverage of the architectural techniques used to
design accelerators for training and inference in machine learning systems. It
covers classical ML algorithms such as linear regression and support vector
machines, as well as DNN models such as convolutional and recurrent neural
networks. We will consider both training and inference for these models and
discuss the impact of parameters such as batch size, precision, sparsity, and
compression on model accuracy. Students will become familiar with hardware
implementation techniques that use parallelism, locality, and low precision to
implement the core computational kernels used in ML, and will develop the
intuition needed to trade off ML model parameters against hardware
implementation techniques when designing energy-efficient accelerators.
Students will read recent research papers and complete a design project.
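
To make the parallelism, locality, and low-precision themes above concrete,
here is a minimal NumPy sketch (not part of the course materials; the symmetric
int8 quantization scheme, the tile size, and all function names are
illustrative assumptions) of a tiled, low-precision matrix multiply, the kind
of core kernel the description refers to:

import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization to int8 (illustrative scheme).
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def tiled_int8_matmul(a, b, tile=64):
    # C = A @ B with int8 inputs and int32 accumulation, tiled for locality.
    # Tiling keeps working sets small enough for on-chip buffers; the
    # independent output tiles expose the parallelism an accelerator exploits.
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    m, k = qa.shape
    _, n = qb.shape
    acc = np.zeros((m, n), dtype=np.int32)
    for i in range(0, m, tile):          # each (i, j) output tile is independent,
        for j in range(0, n, tile):      # so hardware can compute them in parallel
            for p in range(0, k, tile):  # reduction over k reuses loaded tiles
                acc[i:i+tile, j:j+tile] += (
                    qa[i:i+tile, p:p+tile].astype(np.int32)
                    @ qb[p:p+tile, j:j+tile].astype(np.int32)
                )
    return acc.astype(np.float32) * (sa * sb)  # dequantize the accumulated result

# Quick check of the quantization error against a full-precision matmul.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((128, 128)), rng.standard_normal((128, 128))
err = np.max(np.abs(tiled_int8_matmul(A, B) - A @ B))
print(f"max abs error vs. float32: {err:.3f}")

Running the check shows a small but nonzero error: this is the accuracy cost
of low precision that the course weighs against its energy and bandwidth
savings.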




INSTRUCTORS AND OFFICE HOURS:

Ardavan Pedram
Office Hours TBA
Kunle Olukotun
Office Hours TBA

This class meets Tuesday and Thursday from 10:30 to 11:50 AM in Gates B03.


TEACHING ASSISTANTS

Nathan Zhang
Office Hours: Tuesday 12-1 PM (Gates 498 + Zoom), Friday 3-4 PM (Zoom)



CLASS INFORMATION



Funding for this research/activity was partially provided by the National
Science Foundation Division of Computing and Communication Foundations under
award number 1563113.


SCHEDULE


GUEST LECTURES

David Kanter, MLCommons
MLPerf
Thursday February 9, 2023

--------------------------------------------------------------------------------

Raghu Prabhakar, Sambanova
Reconfigurable Dataflow Architectures
Tuesday February 14, 2023

--------------------------------------------------------------------------------

Jared Casper, Nvidia
Large Language Models
Thursday February 16, 2023

--------------------------------------------------------------------------------

Dan Fu, Stanford
Flash Attention
Tuesday February 21, 2023

--------------------------------------------------------------------------------

Greg Diamos, Something New
Data systems for large models
Thursday February 23, 2023

--------------------------------------------------------------------------------

Swapnil Gandhi, Stanford
Graph Neural Networks
Tuesday February 28, 2023

--------------------------------------------------------------------------------

Sameer Kumar, Google
Distributed Systems
Thursday March 2, 2023

--------------------------------------------------------------------------------

Mike Houston, NVIDIA
Distributed Systems for Deep Learning
Tuesday March 7, 2023

--------------------------------------------------------------------------------

Ce Zhang, ETH

Thursday March 9, 2023

--------------------------------------------------------------------------------

Cliff Young, Google
Hierarchical Codesign and TPU4
Thursday March 16, 2023


LECTURE NOTES (FALL 2018)


RELATED STANFORD COURSES

 * CS230
 * CS231n
 * STATS 385


READING LIST AND OTHER RESOURCES


BASIC INFORMATION ABOUT DEEP LEARNING


CHEAT SHEET – THINGS THAT EVERYONE NEEDS TO KNOW


BLOGS


GRADING
