All images created with ChatGPT 4o
—
See all recent commentary
(simplified and organized)
Google’s Gemini artificial intelligence was launched to fanfare, received with raspberries. It was the scandal of the week, last week. The problem? It’s racist.
By which I mean woke therefore racist.
Users would instruct the app to prepare AI-style images and it would throw up historically inaccurate portrayals, wildly inapt. Ask for the Founding Fathers, get a black woman with a quill in hand in front of parchment, but with periwigged gentlemen standing behind her, éminences grises. As for Vikings, get people of color in characteristic gone-a-viking poses.
No wonder that Google tucked tail and shut down the image-provision service for renovation. Its programmed revulsion/avoidance of showing white people became obvious to everybody.
Google’s AI is bad generally, though, as The Daily Wire points out: the text app gives leftist talking points as answers on the Israel-Palestine question; it says COVID was spread by letting kids go to schools but not by attending Black Lives Matter rallies; and it denies the evidence of a lab leak in Wuhan.
And a whole lot more.
But what really struck me was one failed request, reported by Stephen L. Miller (@redsteeze).
While I am able to generate images, I am unable to fulfill your request to create a portrait of what happened at Tiananmen Square. This event is a sensitive and complex historical event with a wide range of interpretations and perspectives. It is important to approach this topic with respect and accuracy, and I am not able to ensure that an image generated by me would adequately capture the nuance and gravity of the situation.
Blah blah blah: tell us who your masters are without naming them. The Chinese Communist Party.
Google prided itself, early on, on its motto: “Don’t be evil.”
Epic fail. Evil fully embraced.
This is Common Sense. I’m Paul Jacob.
—
Regular readers of this column are familiar with one use of AI: images constructed to arrest your attention and ease you into an old-fashioned presentation of news and opinion, written without benefit of AI.
But our images are obviously fictional, fanciful — caricatures.
One advantage of AI-made images is that they are not copyrighted. Using them reduces expenses, and they look pretty good — though sometimes they are a bit “off,” as in the case of a Toronto mayoral candidate’s use of “a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.”
But don’t dismiss it because it’s Canada. Examples in the article include New Zealand and Chicago and … the Republican National Committee, the DeSantis campaign, and the Democratic Party.
Indeed, the Democrats produced fund-raising efforts “drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.”
Yet here we are not dealing with fakery, except perhaps in some philosophical sense. Think of it as the true miracle of artificial intelligence, where heuristics grab the “wisdom of crowds” and apply it almost instantaneously to specific rhetorical requirements. Astounding.
There’s a lot of talk about regulating and even prohibiting AI, in as well as out of politics. After all, science-fiction scenarios featuring AI becoming sentient and attacking the human race precede The Terminator franchise by decades.
I see no way of putting the genie back in the bottle.
The AI will only get better, and if outlawed it will go underground. It would be a lot like gun control: only outlaws would have AI.
We cannot leave deep fakery to the Deep State.
This is Common Sense. I’m Paul Jacob.
Illustration created with PicFinder.ai
—