Core Focus

Introduction to Learning Principles

Supervised Learning

```mermaid
graph LR
    A[Labeled Images] --> B[Learning DNN]
    B --> C[Classification]
    D[Unlabeled Images] --> C
```

Transfer Learning

Transfer learning begins with large-scale supervised pretraining, where a Deep Neural Network (DNN) is trained on extensive labeled datasets like ImageNet-1k. This pretrained model can then be effectively adapted for downstream tasks, even with limited labeled data.
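In code, this adapt-a-pretrained-model recipe typically amounts to freezing the backbone and training a small new head on the downstream labels. A minimal PyTorch sketch (the tiny backbone and layer sizes here are placeholders; in practice you would load real ImageNet-1k pretrained weights):

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained backbone (in practice, load a network
# pretrained on ImageNet-1k, e.g. via torchvision).
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())

# Freeze the pretrained parameters so only the new head receives gradients.
for p in backbone.parameters():
    p.requires_grad = False

# New classification head for the downstream task, trained from scratch.
num_classes = 3  # hypothetical downstream task with few labels
head = nn.Linear(64, num_classes)
model = nn.Sequential(backbone, head)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the head's weight and bias remain trainable
```

Because only the small head is optimized, this "linear probe" style of adaptation works even when the downstream labeled set is tiny; unfreezing the backbone with a low learning rate (full fine-tuning) is the common next step when more labels are available.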


Some Other Examples

LT-ViT

Multi-modal analysis

<aside> 🧠

Supervised Learning Limitations

Supervised learning is a bottleneck for building generalized models because it relies on large labeled datasets: annotation is expensive and slow, often requires domain experts, and labeled data remains scarce relative to the vast amounts of unlabeled data available.

Self-Supervised Learning

This leads to Self-Supervised Learning, which aims to learn meaningful representations from unlabeled data by solving pretext tasks derived from the data itself. This approach lets models leverage vast amounts of unlabeled data, making the learned representations more robust and generalizable.

Think of it like educating a child—providing guidance rather than constant supervision.
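As a concrete toy example of generating supervision from the data itself, consider the classic rotation-prediction pretext task: rotate each unlabeled image by 0/90/180/270 degrees and let the rotation index serve as a free label. A minimal NumPy sketch (array sizes here are arbitrary stand-ins for real images):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "images": random 8x8 arrays with no human-provided labels.
images = rng.normal(size=(4, 8, 8))

# Rotation-prediction pretext task: each quarter-turn index k becomes
# an automatically generated label -- no annotation required.
inputs, labels = [], []
for img in images:
    for k in range(4):                 # k quarter-turns of 90 degrees
        inputs.append(np.rot90(img, k))
        labels.append(k)               # the label comes from the data itself

inputs, labels = np.stack(inputs), np.array(labels)
print(inputs.shape)   # 4 images x 4 rotations = 16 training pairs
print(labels[:8])     # [0 1 2 3 0 1 2 3]
```

A model trained to predict `labels` from `inputs` must learn about object orientation and structure, and the representations it builds transfer to downstream tasks; other pretext tasks (masking, contrastive pairs) follow the same self-labeling pattern.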
