Papers
arxiv:2210.10615

A Unified View of Masked Image Modeling

Published on Oct 19, 2022
Abstract

MaskDistill, a unified masked image modeling technique, reconstructs normalized semantic features for improved performance in image classification and semantic segmentation.

AI-generated summary

Masked image modeling has demonstrated great potential for alleviating the label-hungry problem of training large-scale vision Transformers, achieving impressive performance on various downstream tasks. In this work, we propose a unified view of masked image modeling after revisiting existing methods. Under the unified view, we introduce a simple yet effective method, termed MaskDistill, which reconstructs normalized semantic features from teacher models at the masked positions, conditioned on corrupted input images. Experimental results on image classification and semantic segmentation show that MaskDistill achieves performance comparable or superior to state-of-the-art methods. When using the huge vision Transformer and pretraining for 300 epochs, MaskDistill obtains 88.3% fine-tuning top-1 accuracy on ImageNet-1k (224 resolution) and 58.8% semantic segmentation mIoU on ADE20k (512 resolution). The code and pretrained models will be available at https://aka.ms/unimim.
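The summary above describes the core objective: the student predicts the teacher's per-patch features, normalized, but only at masked positions. A minimal NumPy sketch of that loss is below; the function name, the MSE criterion, and the layer-normalization details are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def maskdistill_loss(student_pred, teacher_feat, mask):
    """Hypothetical sketch of a MaskDistill-style objective.

    student_pred: (N, D) student predictions for all N patches
    teacher_feat: (N, D) teacher features for the uncorrupted image
    mask:         (N,)   boolean, True where the patch was masked out
    """
    # Normalize each teacher feature vector (zero mean, unit variance),
    # as the summary says normalized semantic features are the target.
    mu = teacher_feat.mean(axis=-1, keepdims=True)
    sd = teacher_feat.std(axis=-1, keepdims=True)
    target = (teacher_feat - mu) / (sd + 1e-6)

    # Regression loss (MSE here, as an assumption) computed
    # only at the masked positions.
    diff = (student_pred - target) ** 2
    return diff[mask].mean()

# Toy usage with random features and a half-masked input
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 8))
teacher = rng.normal(size=(4, 8))
mask = np.array([True, False, True, False])
loss = maskdistill_loss(student, teacher, mask)
```

In a real pretraining loop the student would see the corrupted (masked) image while the teacher sees the clean one; only the masked patches contribute to the gradient.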

Community

I implemented MaskDistill from scratch in PyTorch and reproduced the paper's results with ViT-Base. Code and pre-trained weights are open sourced:

https://github.com/drkostas/MaskDistill-PyTorch


Get this paper in your agent:

hf papers read 2210.10615
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 1

Datasets citing this paper 0

No datasets link this paper

Cite arxiv.org/abs/2210.10615 in a dataset README.md to link it from this page.

Spaces citing this paper 1

Collections including this paper 1