

Poster

CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment

Hyeongmin Lee · Kyoungkook Kang · Jungseul Ok · Sunghyun Cho

Arch 4A-E Poster #271
Poster Session: Wed 19 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to learn human-centric perceptual assessment. However, these approaches are constrained by intrinsic challenges of supervised learning. Primarily, the requirement for expertly retouched images raises data acquisition costs. Moreover, their coverage of target styles is confined to the stylistic variants present in the training data. To surmount these challenges, we propose CLIPtone, an unsupervised learning-based approach for text-based image tone adjustment that extends an existing image enhancement method to accommodate natural language descriptions. Specifically, we design a hypernetwork that adaptively modulates the pretrained parameters of the backbone model based on a text description. To assess whether an adjusted image aligns with its text description without a ground-truth image, we utilize CLIP, which is trained on a vast set of language-image pairs and thus encompasses knowledge of human perception. Our approach offers numerous benefits, including a wide range of supported adjustments, minimal data collection costs, and the ability to make zero-shot predictions. While our work may bear similarities to existing text-based image editing and colorization methods, it stands out by preserving the content of the original image while remaining lightweight and efficient in the adjustment process. Our approach's efficacy is substantiated through comprehensive experiments, including a user study.
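To make the hypernetwork idea concrete, here is a minimal sketch (not the authors' code) of text-conditioned parameter modulation: a small MLP maps a CLIP text embedding to residual deltas that are added to a frozen, pretrained backbone weight. The module name `ToneHyperNetwork`, the layer sizes, the 0.01 scaling, and the additive-modulation scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToneHyperNetwork(nn.Module):
    """Maps a text embedding to modulation deltas for one backbone layer.

    Hypothetical sketch: dimensions and architecture are assumptions, not
    the paper's specification.
    """
    def __init__(self, text_dim: int = 512, param_numel: int = 64 * 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, param_numel),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Small residual deltas so the modulated backbone starts close to
        # its pretrained behavior before training.
        return 0.01 * self.mlp(text_emb)

# Usage: modulate a frozen backbone weight with a text-conditioned delta.
backbone_weight = torch.randn(64, 64)      # stands in for a pretrained, frozen weight
hyper = ToneHyperNetwork()
text_emb = torch.randn(1, 512)             # stands in for a CLIP text feature
delta = hyper(text_emb).view(64, 64)
modulated_weight = backbone_weight + delta  # used in the backbone's forward pass
```

Only the hypernetwork is trained under this scheme; keeping the backbone frozen is what keeps the adjustment lightweight and content-preserving.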
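The abstract states that CLIP scores text-image alignment in place of a ground-truth image, but does not spell out the loss. The sketch below uses a directional CLIP objective, a common choice in CLIP-guided editing, shown here as an assumption rather than the paper's exact formulation; the `"ViT-B/32"` checkpoint, the neutral prompt, and the helper name `directional_clip_loss` are likewise illustrative. It relies only on OpenAI's public `clip` package (`clip.load`, `clip.tokenize`, `encode_image`, `encode_text`).

```python
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def directional_clip_loss(src_img: torch.Tensor,
                          out_img: torch.Tensor,
                          target_text: str,
                          neutral_text: str = "a photo") -> torch.Tensor:
    """1 - cosine similarity between the image-space direction
    (source -> adjusted result) and the text-space direction
    (neutral prompt -> target description).

    Both images are assumed already CLIP-preprocessed:
    shape (3, 224, 224), CLIP-normalized, on `device`.
    """
    tokens = clip.tokenize([neutral_text, target_text]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(tokens)
    text_dir = F.normalize(text_feat[1:] - text_feat[:1], dim=-1)

    # Gradients flow through out_img back into the hypernetwork.
    img_feat = model.encode_image(torch.stack([src_img, out_img]))
    img_dir = F.normalize(img_feat[1:] - img_feat[:1], dim=-1)
    return 1.0 - F.cosine_similarity(img_dir, text_dir).mean()
```

Because the supervision comes entirely from CLIP's joint language-image space, no retouched target image is needed, which is what removes the data-collection cost of the supervised alternatives.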
