ABSTRACT: We define a new class of "implicit" deep learning prediction rules that generalize the recursive rules of feedforward neural networks. These models are based on the solution of a fixed-point equation involving a single vector of hidden features, which is thus only implicitly defined. The new framework greatly simplifies the notation of deep learning and opens up many new possibilities, in terms of novel architectures and algorithms, robustness analysis and design, adversarial attacks, interpretability, sparsity, and network architecture optimization.
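To make the abstract concrete, here is a minimal sketch of such an implicit prediction rule. The specific notation (matrices A, B, C, D, activation phi, and the equations x = phi(Ax + Bu), y = Cx + Du) is an assumption based on the standard implicit-model formulation, not taken from the abstract itself; the fixed-point equation is solved here by plain iteration, which converges when the map is a contraction.

```python
import numpy as np

# Hedged sketch (assumed notation): the hidden feature vector x is only
# implicitly defined, as the solution of x = phi(A x + B u), and the
# prediction is y = C x + D u.

def relu(z):
    return np.maximum(z, 0.0)

def solve_fixed_point(A, B, u, tol=1e-10, max_iter=1000):
    """Solve x = relu(A x + B u) by fixed-point iteration."""
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_new = relu(A @ x + B @ u)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

def implicit_predict(A, B, C, D, u):
    """Prediction rule: compute the implicit features, then a linear readout."""
    x = solve_fixed_point(A, B, u)
    return C @ x + D @ u

rng = np.random.default_rng(0)
n, p, q = 5, 3, 2                         # hidden, input, output dimensions
A = rng.standard_normal((n, n))
A *= 0.5 / np.abs(A).sum(axis=1).max()    # scale so ||A||_inf < 1: since ReLU is
                                          # 1-Lipschitz, the iteration is a contraction
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))
D = rng.standard_normal((q, p))
u = rng.standard_normal(p)
y = implicit_predict(A, B, C, D, u)
```

A feedforward network is the special case in which A is strictly block upper-triangular (layer k feeds only later layers), so the "fixed point" can be read off in one forward pass; the general implicit form drops that restriction.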
- This seminar will not meet in person, but will be hosted online via Zoom at https://berkeley.zoom.us/j/412230357 (Meeting ID: 412 230 357)
- Start date: 2020-03-10 11:00:00
- End date: 2020-03-10 12:30:00
- Venue: Online via Zoom