Holistic Evaluation of Text-to-Image Models: Human evaluation procedure
13 Oct 2024
This article outlines the human evaluation process for AI-generated images using Amazon Mechanical Turk, including compliance with human-subjects research requirements.
A Deep Dive Into Stable Diffusion and Other Leading Text-to-Image Models
13 Oct 2024
Explore the latest advancements in text-to-image models like Stable Diffusion, DALL-E, and Dreamlike Diffusion.
From Birdwatching to Fairness in Image Generation Models
12 Oct 2024
Discover how various benchmarks, including MS-COCO and DrawBench, are used to evaluate AI image generation models.
Human vs. Machine: Evaluating AI-Generated Images Through Human and Automated Metrics
12 Oct 2024
Explore how human annotators assess AI-generated images based on alignment, quality, aesthetics, and originality.
Holistic Evaluation of Text-to-Image Models: Datasheet
12 Oct 2024
Discover the HEIM benchmark—a new tool designed to holistically evaluate text-to-image models across 12 crucial aspects like quality, bias, and more.
Holistic Evaluation of Text-to-Image Models: Author contributions, Acknowledgments and References
12 Oct 2024
Meet the authors whose contributions formed a holistic evaluation of text-to-image models.
Limitations in AI Model Evaluation: Bias, Efficiency, and Human Judgment
12 Oct 2024
Explore the limits of current AI model evaluations.
New Dimensions in Text-to-Image Model Evaluation
12 Oct 2024
Explore how a holistic evaluation of image generation models sets new benchmarks for quality, ethics, aesthetics, and societal impact.
Paving the Way for Better AI Models: Insights from HEIM’s 12-Aspect Benchmark
12 Oct 2024
Discover HEIM, the groundbreaking benchmark assessing text-to-image models across 12 key aspects like quality, fairness, originality, and more.