Side Adapter Network for Open-Vocabulary Semantic Segmentation

¹HUST, ²Microsoft
CVPR 2023 Highlight

*Indicates Equal Contribution

Keywords

Fast, Accurate, Parameter-Efficient Tuning

Abstract

SAN models the semantic segmentation task as a region recognition problem. A side network is attached to a frozen CLIP model with two branches: one for predicting mask proposals, and the other for predicting attention biases, which are applied in the CLIP model to recognize the class of each mask. This decoupled design eases the burden on CLIP when recognizing the class of mask proposals. Since the attached side network can reuse CLIP features, it can be very light. In addition, the entire network can be trained end-to-end, allowing the side network to adapt to the frozen CLIP model, which makes the predicted mask proposals CLIP-aware. Our approach is fast, accurate, and adds only a few trainable parameters. We evaluate our approach on multiple semantic segmentation benchmarks. Our method significantly outperforms competing methods, with up to 18 times fewer trainable parameters and 19 times faster inference speed.
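To make the recognition mechanism concrete, below is a minimal PyTorch sketch of how predicted mask proposals can serve as an attention bias that pools frozen CLIP features into one embedding per mask. This is an illustration rather than the released implementation: the function name and tensor shapes are ours, and the mechanism is collapsed into a single pooling step, whereas SAN injects the biases into the self-attention of several CLIP layers for a set of extra query tokens.

import torch

def masked_attention_pool(clip_features, mask_logits):
    # clip_features: [N, C] visual tokens from the frozen CLIP model
    # mask_logits:   [Q, N] mask proposals, reused as an attention bias
    # (shapes are illustrative, not taken from the paper)
    attn = mask_logits.softmax(dim=-1)        # bias -> attention weights
    region_embeddings = attn @ clip_features  # [Q, C], one per mask
    return region_embeddings

# Open-vocabulary recognition then reduces to CLIP's usual image-text
# similarity, e.g. logits = region_embeddings @ text_embeddings.T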

Method

To fully unleash the capability of CLIP in open-vocabulary semantic segmentation, we present the Side Adapter Network (SAN), an end-to-end framework in which mask prediction and recognition are intertwined with the CLIP model. SAN is implemented as a lightweight vision transformer that can leverage the features of CLIP, and it has two types of outputs: mask proposals and attention biases. The attention biases are applied to the self-attention of CLIP for recognizing the class of mask proposals. In practice, we fuse the features of shallow CLIP layers into SAN and apply the attention biases to the remaining, deeper CLIP layers for recognition. With this single-forward design, the cost of the CLIP model is minimized.
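The single-forward design described above can be summarized with a hedged sketch. The module names (clip_shallow, clip_deep, san), shapes, and wiring are assumptions made for illustration, not the authors' released code; only the side network is trainable, while both CLIP halves stay frozen.

import torch
import torch.nn as nn

class SANPipeline(nn.Module):
    # Illustrative wiring of the single-forward design; all submodules are
    # hypothetical stand-ins for the corresponding parts of SAN and CLIP.
    def __init__(self, clip_shallow, clip_deep, san):
        super().__init__()
        self.clip_shallow = clip_shallow  # frozen early CLIP blocks
        self.clip_deep = clip_deep        # frozen later CLIP blocks
        self.san = san                    # lightweight side ViT (trainable)

    def forward(self, image, text_embeddings):
        # 1. A single pass through the shallow CLIP layers; these features
        #    are reused by the side network, keeping it light.
        shallow_feats = self.clip_shallow(image)           # [N, C]
        # 2. SAN fuses the shallow CLIP features and predicts Q mask
        #    proposals plus Q attention biases.
        masks, attn_bias = self.san(image, shallow_feats)  # [Q,H,W], [Q,N]
        # 3. The biases steer the deeper frozen CLIP layers so that each
        #    query token summarizes one mask region.
        region_embeds = self.clip_deep(shallow_feats, attn_bias)  # [Q, C]
        # 4. Open-vocabulary classification against CLIP text embeddings
        #    of the candidate class names.
        logits = region_embeds @ text_embeddings.t()       # [Q, classes]
        return masks, logits

Because the shallow features are computed once and shared with the side network, the image passes through CLIP only once per inference, matching the single-forward design described above.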

Figure: Overview of the SAN architecture.

BibTeX

@inproceedings{xu2023side,
        title={Side Adapter Network for Open-Vocabulary Semantic Segmentation},
        author={Xu, Mengde and Zhang, Zheng and Wei, Fangyun and Hu, Han and Bai, Xiang},
        booktitle={CVPR},
        year={2023},
        eprint={2302.12242},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }