Use the top benchmark model
GitHub - XPixelGroup/HAT: CVPR2023 - Activating More Pixels in Image Super-Resolution Transformer (arXiv: HAT: Hybrid Attention Transformer for…)
git clone https://github.com/XPixelGroup/HAT.git
cd HAT
pip install -r requirements.txt
python setup.py develop
Download pre-trained model
pip install --upgrade gdown
Pre-trained models provided in the repository can be found on Google Drive.
Rewriting the configuration file
Below are the steps to run it with an image you have prepared yourself.
The model's configuration file is under options/test. Open it and set the pre-trained model path to the file you downloaded earlier.
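As a concrete example, HAT uses the BasicSR config layout, so the relevant part of the file might look like the following (the weights filename below is an assumption; use the path of the file you actually downloaded):

```yaml
# path section of the test config
path:
  # point this at the downloaded pre-trained weights (filename is an example)
  pretrain_network_g: ./experiments/pretrained_models/Real_HAT_GAN_SRx4.pth
  strict_load_g: true
```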
Find the test_1 entry (the 1st test dataset) and specify the path of the directory containing the images you want to super-resolve in dataroot.
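For reference, a minimal datasets block for a folder of your own low-resolution images might look like this (the dataset name and directory path are placeholders):

```yaml
datasets:
  test_1:  # the 1st test dataset
    name: custom
    type: SingleImageDataset
    dataroot_lq: ./datasets/my_images  # directory with the images to super-resolve
    io_backend:
      type: disk
```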
In the val section, keep only the following item.
suffix: ~ # add suffix to saved images, if None, use exp name
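For example, a minimal val section might look like the following (save_img is the BasicSR option that writes the output images; trimming the section like this typically skips metric computation, which would need ground-truth images):

```yaml
val:
  save_img: true
  suffix: ~  # add suffix to saved images, if None, use exp name
```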
Run the test by specifying the configuration file rewritten above.
python hat/test.py -opt options/test/HAT_GAN_Real_SRx4.yml
Results are saved in the results directory.
In the result, the background is now noticeably clearer.
If you encounter a CUDA out-of-memory error, add the following to the configuration file. It divides the input image into smaller tiles and processes them one at a time.
tile: # use the tile mode for limited GPU memory when testing
  tile_size: 512 # the larger the tile, the more GPU memory is used and the smaller the quality difference against processing the full image; must be an integer multiple of the window size
  tile_pad: 32 # overlap between adjacent tiles; must be an integer multiple of the window size
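To illustrate what tile mode does, here is a minimal NumPy sketch of the idea (not HAT's actual implementation): the image is cut into tiles, each tile is processed with some padding for surrounding context, and the padded borders are cropped away before the pieces are pasted back together.

```python
import numpy as np

def tile_process(img, process, tile_size=64, tile_pad=8, scale=1):
    """Process an (H, W, C) image tile by tile with overlapping padding.

    process: a function applied to each padded tile.
    scale: the upscaling factor of process (1 means same size).
    """
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            # tile bounds plus padding, clipped to the image borders
            y0, y1 = max(y - tile_pad, 0), min(y + tile_size + tile_pad, h)
            x0, x1 = max(x - tile_pad, 0), min(x + tile_size + tile_pad, w)
            tile_out = process(img[y0:y1, x0:x1])
            # crop the padded border away and paste only the core region
            cy0 = (y - y0) * scale
            cx0 = (x - x0) * scale
            ch = (min(y + tile_size, h) - y) * scale
            cw = (min(x + tile_size, w) - x) * scale
            out[y * scale:y * scale + ch, x * scale:x * scale + cw] = (
                tile_out[cy0:cy0 + ch, cx0:cx0 + cw])
    return out
```

With an identity `process` the output equals the input, and with a 2x nearest-neighbor upscaler it equals upscaling the whole image at once, which is why tiling keeps the result close to full-image inference while bounding GPU memory by the tile size.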
I'm a freelance engineer creating applications with machine learning and AR technology. Please feel free to contact me with a brief description of your project. I post machine learning and AR related information.