This course is an introduction to circuits and linear analysis. Topics include voltage and current sources, Kirchhoff's laws, Ohm's law, Nodal and Mesh analysis, Thevenin and Norton equivalent circuits, operational amplifiers, inductors and capacitors, frequency analysis of first-order and second-order systems, simple filter designs, and power dissipation calculations.
Students will learn the material through lectures, class discussions, the textbook, and supplementary readings. A series of hands-on laboratory exercises and a significant team-oriented design project will provide students with an opportunity to apply and explore the material.
Course materials, labs, and readings are available through Canvas.
Course Description
Deep learning has had a transformative impact on computer vision, natural language processing, and many other domains. Recent advances in deep learning can be attributed to progress in large structured datasets, machine learning algorithms, and hardware computing systems. Deep learning models impose substantial computational demands, especially as they are increasingly deployed on cloud computing and Internet of Things (IoT) platforms. To address these challenges, Domain-Specific Accelerators (DSAs) designed specifically for deep neural networks (DNNs) have emerged. These accelerators are customized to efficiently handle the computing patterns and dataflow of deep learning models. By using DSAs, deep learning models can achieve higher performance and better energy efficiency and can be integrated seamlessly into a wide range of applications and platforms.
Course Objective
This course provides insights into deep learning algorithms and the design of Domain-Specific Accelerators (DSAs) for deep learning. It comprehensively covers learning methods, deep neural network architectures, and the diverse hardware platforms used to run DNNs. Specifically, it focuses on supervised learning for image processing, surveys the key scaling trends in deep learning, discusses co-design approaches for deep learning algorithms and accelerators, and highlights crucial benchmarking metrics for evaluating DNN models and hardware accelerators proposed in academia and industry.
Course Organization and Content
The course will involve a mix of lectures, research paper presentations, and course labs.
Students are expected to read, present, and discuss research papers, and to complete four course projects and one final take-home exam.
Course materials, labs, and readings are available through Canvas.
Prerequisites
This course has no prerequisites in machine learning or computer architecture. However, students are expected to have (1) a basic programming background in a language such as Python and (2) a fundamental understanding of introductory Computer Organization and Digital Logic Design. The course is open to both PhD and MS students. Eligible undergraduate students are also welcome to enroll.
Course Evaluation
In previous offerings, this course has consistently received an average rating of 4.50 out of 5.0 on course evaluations.
This is a graduate-level computer architecture course with special topics in hardware acceleration for machine learning. Advanced undergraduates who have fulfilled the prerequisites are welcome to enroll. The course provides essential background in the training and inference of deep neural networks (DNNs), deep learning frameworks, and hardware accelerators, and surveys recent trends that reduce the computation, storage, and communication costs of DNNs via co-optimization of algorithms and hardware.
Students are expected to read, present, and discuss research papers, and to complete three course projects and one final take-home exam. Prerequisites include introductory Computer Organization and Digital Logic Design.
Course materials, labs, and readings are available through Canvas.
This course provides an introduction to Intelligent Systems Engineering (ISE) and an overview of the available degree specializations. ISE encompasses a set of interrelated, modern systems engineering areas; the course gives a broad introduction to these areas and details of faculty research areas.
Course materials, labs, and readings are available through Canvas.