|
Attention Calibration for Disentangled Text-to-Image Personalization
Yanbing Zhang, Mengping Yang (Student Project Lead), Qin Zhou, Zhe Wang*
CVPR 2024 (Oral Presentation),
[PDF]
[BibTeX]
We propose an attention calibration mechanism to improve the concept-level understanding of the T2I model. Specifically, we first introduce new learnable modifiers bound with classes to capture the attributes of multiple concepts. Then, the classes are separated and strengthened following the activations of the cross-attention operation, ensuring comprehensive and self-contained concepts. Additionally, we suppress the attention activations of different classes to mitigate mutual influence among concepts.
|
|
Revisiting the Evaluation of Image Synthesis with GANs
Mengping Yang*, Ceyuan Yang*, Yichi Zhang, Qingyan Bai, Yujun Shen, Bo Dai
NeurIPS Datasets and Benchmarks 2023,
[PDF]
[BibTeX]
We make in-depth analyses of how to represent a data point in the feature space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
Together with these analyses, we build a comprehensive system for synthesis comparison.
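As a generic illustration of the kind of feature-space distance such evaluation systems compute (a minimal sketch of the standard Fréchet distance between Gaussians fit to two feature sets, not the paper's own evaluation system):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two (n, d) feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical noise
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy check: identical sets give ~0; a shifted set gives a larger distance.
rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
b = rng.normal(loc=0.5, size=(500, 8))
print(frechet_distance(a, a))
print(frechet_distance(a, b))
```

How features are extracted and how many samples are drawn from each set are exactly the design choices the paper analyzes.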
|
|
Image Synthesis under Limited Data: A Survey and Taxonomy
Mengping Yang, Zhe Wang*
ArXiv 2023,
[PDF]
[Project]
[BibTeX]
We provide a comprehensive survey on image synthesis under limited data, covering data-efficient generative modeling, few-shot generative adaptation, and few-shot and one-shot image synthesis.
|
|
Improving Few-shot Image Generation by Structural Discrimination and Textural Modulation
Mengping Yang, Zhe Wang*, Wenyi Feng, Qian Zhang, Ting Xiao
ACM MM 2023,
[PDF]
[Project]
[BibTeX]
We propose textural modulation (TexMod) and a structural discriminator (StructD) to improve the performance of few-shot image generation.
|
|
Semantic-Aware Generator and Low-level Feature Augmentation for Few-shot Image Generation
Zhe Wang*, Jiaoyan Guan, Mengping Yang (Student Project Lead), Ting Xiao, Ziqiu Chi
ACM MM 2023,
[PDF]
[BibTeX]
We propose a semantic-aware generator (SAG) and low-level feature augmentation (LFA) to improve the performance of few-shot image generation.
|
|
ProtoGAN: Towards high diversity and fidelity image synthesis under limited data
Mengping Yang, Zhe Wang, Ziqiu Chi, Wenli Du
InS 2023,
[PDF]
[BibTeX]
We propose ProtoGAN, a GAN that incorporates a metric-learning-based prototype mechanism into adversarial learning by aligning the prototypes and features of the synthesized distribution with those of the real distribution.
|
|
DFSGAN: Introducing editable and representative attributes for few-shot image generation
Mengping Yang, Saisai Niu, Zhe Wang, Dongdong Li, Wenli Du
EAAI 2023,
[PDF]
[BibTeX]
We propose DFSGAN for few-shot image generation, which takes dynamic Gaussian mixture (DGM) latent codes as the generator’s input.
|
|
FreGAN: Exploiting Frequency Components for Training GANs under Limited Data
Mengping Yang, Zhe Wang, Ziqiu Chi, Yanbing Zhang
NeurIPS 2022,
[PDF]
[Project]
[BibTeX]
We propose a frequency-aware model for training GANs under limited data, facilitating high-quality few-shot image synthesis.
|
|
WaveGAN: Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation
Mengping Yang, Zhe Wang, Ziqiu Chi, Yanbing Zhang
ECCV 2022,
[PDF]
[Project]
[BibTeX]
We propose a frequency-aware model for few-shot image generation, enabling high-fidelity synthesis for downstream tasks.
|
|
Better Embedding and More Shots for Few-shot Learning
Ziqiu Chi, Zhe Wang, Mengping Yang, Wei Guo, Xinlei Xu
IJCAI 2022,
[PDF]
We develop Better Embedding and More Shots to address the distorted embedding of target data in few-shot learning.
|
|
Gravitation balanced multiple kernel learning for imbalanced classification
Mengping Yang, Zhe Wang, Yanqiong Li, Yangming Zhou, Dongdong Li, Wenli Du
NCAA 2022,
[PDF]
We propose the gravitation balanced multiple kernel learning (GBMKL) method for imbalanced classification.
|