Variational Autoencoders (VAE)
Dec 04, 2024
1 min read
UFV - INF721: Deep Learning - L20: Variational Autoencoders - YouTube
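The note only links the lecture, so here is a minimal NumPy sketch of the two VAE-specific ingredients such a lecture typically covers: the reparameterization trick (sampling `z = mu + sigma * eps` so gradients can flow through the stochastic node) and the closed-form KL divergence between the Gaussian posterior and a standard-normal prior, which is one of the two ELBO terms. All names, shapes, and values are illustrative assumptions, not taken from the linked lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); the randomness is external
    # to (mu, log_var), so gradients can flow through the sample.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent
    # dimensions and averaged over the batch.
    per_dim = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)
    return float(np.mean(np.sum(per_dim, axis=1)))

# Toy batch of 4 samples with a 2-dimensional latent space.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
z = reparameterize(mu, log_var, rng)
print(z.shape)                              # (4, 2)
print(kl_to_standard_normal(mu, log_var))   # 0.0 when posterior == prior
```

With `mu = 0` and `log_var = 0` the posterior equals the prior, so the KL term is exactly zero; in training, the full loss would add a reconstruction term (e.g. Bernoulli or Gaussian log-likelihood of the decoder output) to this KL penalty.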