Welcome to another edition of WTF AI.
A blog post on early childhood readers called for an illustration of children’s blocks spelling out “hyperlexia.” First stop, give Adobe image generation a go.
Instead of “Children’s letter blocks spell out H Y P E R L E X I A,” it returned children with blocks spelling “Hyprlxic,” “Hyprexiic,” and “Hyprlzic.”
And there’s a funky reason why…



We know about the large language models (LLMs) that power text-generation AI. At Gonzaga University, projects with LLMs include training them to recognize emotion through word usage.
I assumed image models worked similarly, associating a text prompt with visual patterns, which is why the descriptions of the source images are so crucial. Turns out the image generator isn’t working from a big-picture plan. It’s pulling together patterns it’s been fed and building the image pixel by pixel. Are you old enough to have played with early computer Paint software? This is that same pixel-by-pixel drawing on steroids.
So even though I spelled out the word, there was no human to correct the text during the generation process. The model knows the general shapes of those letters, but not well enough to spell the word correctly.
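Here’s a toy way to picture that failure mode. This isn’t how a diffusion model actually works inside, just a sketch of the idea above: each block gets drawn on its own, with nothing checking that the finished row spells the word. The function name and the error rate are made up for illustration.

```python
import random

WORD = "HYPERLEXIA"

def generate_blocks(word, error_rate=0.2, seed=None):
    """Toy model: draw each letter block independently, with no
    global spell-check over the finished row of blocks."""
    rng = random.Random(seed)
    out = []
    for letter in word:
        roll = rng.random()
        if roll < error_rate / 2:
            continue                       # block gets skipped entirely
        elif roll < error_rate:
            out.append(rng.choice("ABCEIKLRXZ"))  # near-miss letter shape
        else:
            out.append(letter)             # block drawn correctly
    return "".join(out)

# A few independent "generations" of the same prompt
for s in range(3):
    print(generate_blocks(WORD, seed=s))
```

With a 20% per-block error rate, most runs come out almost right but not quite: letters dropped or swapped, exactly the “Hyprlxic”-style near misses above. A human proofreads the whole word at once; this process never does.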

No AI was harmed in the creation of this text.
