Artificial Neural Networks and Deep Learning: Loss landscape and optimization methods
Related lectures (32)
Crash course on Deep Learning
Covers the Mark I Perceptron, neural networks, optimization algorithms, and practical aspects of training.
Deep Learning: Convolutional Neural Networks and Training Techniques
Discusses convolutional neural networks, their architecture, training techniques, and challenges like adversarial examples in deep learning.
Feedforward Neural Networks: Activation Functions and Backpropagation
Introduces feedforward neural networks, activation functions, and backpropagation for training, addressing common challenges and training methods.
Neural Networks: Regression and Classification
Explores neural networks for regression and classification tasks, covering training, regularization, and practical examples.
Multi-layered Perceptron: History and Training Algorithm
Explores the historical development and training of multi-layered perceptrons, emphasizing the backpropagation algorithm and feature design.
Regularized Cross-Entropy Risk
Explores the regularized cross-entropy risk in neural networks, covering training processes and challenges in deep networks.
Neural Networks: Training and Optimization
Explores the training and optimization of neural networks, addressing challenges like non-convex loss functions and local minima.
Neural Networks: Training and Optimization
Explores neural network training, optimization, and environmental considerations, with insights into PCA and K-means clustering.
Deep Learning: Data Representations and Neural Networks
Explores data representations, histograms, neural networks, and deep learning concepts.
Deep Learning for Autonomous Vehicles: Learning
Explores learning for autonomous vehicles, covering predictive models, RNNs, ImageNet, and transfer learning.
Universal Approximation Theorem: MLP
Covers Multi-Layer Perceptrons (MLP) and their application from classification to regression, including the Universal Approximation Theorem and challenges with gradients.
Recurrent Neural Networks: Training and Challenges
Discusses recurrent neural networks, their training challenges, and solutions like LSTMs and GRUs.