I of course mean "bullshit" in the highly technical sense defined by Frankfurt [1]. The defining feature that separates a bullshitter from a liar is that a liar knows and understands the truth and intentionally misrepresents the matters of fact to further their aims, whereas a bullshitter is wholly unconcerned with the truth of the matters they are discussing and is only interested in the social game aspect of the conversation. Bullshit is far more insidious than a lie, for bullshit can (and often does) turn out to be coincident with the truth. When that happens the bullshitter goes undetected and is free to infect our understanding with more bullshit made up on the spot.
DALL-E generates the images it thinks you want to see. It is wholly unconcerned with the actual objects rendered that are the ostensible focus of the prompt. In other words, it's bullshitting you. It was only trained to get your approval, not to understand the mechanics of the world it is drawing. In other words, we've trained a machine to have daddy issues.
A profoundly interesting question (to me) is whether there's a way to rig a system of "social game reasoning" into ordinary logical reasoning. Can we construct a Turing Tarpit out of a reasoning system with no true/false semantics, a system designed only to model people liking or disliking what you say? If the answer is yes, then maybe a system like DALL-E will unexpectedly gain real understanding of what it is drawing. If not, systems like DALL-E will always be Artificial Bullshit.
[1] http://www2.csudh.edu/ccauthen/576f12/frankfurt__harry_-_on_...