From f1a95e610a9008d7a26550d6e149c6b305d519bf Mon Sep 17 00:00:00 2001
From: Shubhranshi Agarwal <114684758+shubhranshii@users.noreply.github.com>
Date: Sat, 8 Mar 2025 19:51:59 +0530
Subject: [PATCH] Fix typo in README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 092355a84..c83d81c2b 100644
--- a/README.md
+++ b/README.md
@@ -71,7 +71,7 @@ This repo is the official implementation of ["Swin Transformer: Hierarchical Vis
 1. Swin Transformer received ICCV 2021 best paper award (Marr Prize).
 
 ***08/09/2021***
-1. [Soft Teacher](https://arxiv.org/pdf/2106.09018v2.pdf) will appear at ICCV2021. The code will be released at [GitHub Repo](https://github.com/microsoft/SoftTeacher). `Soft Teacher` is an end-to-end semi-supervisd object detection method, achieving a new record on the COCO test-dev: `61.3 box AP` and `53.0 mask AP`.
+1. [Soft Teacher](https://arxiv.org/pdf/2106.09018v2.pdf) will appear at ICCV2021. The code will be released at [GitHub Repo](https://github.com/microsoft/SoftTeacher). `Soft Teacher` is an end-to-end semi-supervised object detection method, achieving a new record on the COCO test-dev: `61.3 box AP` and `53.0 mask AP`.
 
 ***07/03/2021***
 1. Add **Swin MLP**, which is an adaption of `Swin Transformer` by replacing all multi-head self-attention (MHSA) blocks by MLP layers (more precisely it is a group linear layer). The shifted window configuration can also significantly improve the performance of vanilla MLP architectures.