Categories
free trade & free markets litigation regulation

Free to Advise

People should be free to talk to each other about whatever they want as long as they’re not thereby conspiring to rob and murder and so forth. They should even be able to give advice.

Including legal advice. 

New York State disagrees. 

The Institute for Justice is asking the U.S. Supreme Court to let the non-​lawyer volunteers of a company called Upsolve keep giving advice to people facing lawsuits to collect debt.

As IJ explains, New York State is trying to “protect people from hearing advice from volunteers” who have relevant training. The point is that the First Amendment “doesn’t allow the government to outlaw discussion of entire topics … by requiring speakers to first obtain an expensive, time-​consuming license.” (That Upsolve’s advisors have relevant training is worth noting but also beside the point. Even untrained talkers have the right to talk, obviously.)

In 2022, a federal district court agreed with the plaintiff that its volunteers have a First Amendment right to speak and let Upsolve operate as litigation continued. Then a court of appeals ruled against Upsolve. Now IJ and Upsolve hope that the U.S. Supreme Court will step in and put an end to the nonsense. 

We know what this is about: politicians catering to lawyers who don’t want less expensive sources of legal advice out there competing for customers. 

It’s certainly not about protecting those who would have one fewer resource to turn to were this one taken away.

This is Common Sense. I’m Paul Jacob.


PDF for printing

Illustration created with Nano Banana

See all recent commentary
(simplified and organized)
See recent popular posts

Categories
Accountability folly ideological culture responsibility

Everyone Dies?

After Friday, when I worried about robots taking over, I was glad to read a debunking of the AI Will Destroy Us All meme, so in vogue. In “Superintelligent AI Is Not Coming To Kill You,” from the March issue of Reason, Neil Chilson argues that we shouldn’t freak out.

Not only do I not want to freak out, I don’t want to use AI very much — though I understand that, these days, sometimes it makes sense to consult the Oracles.

Chilson is reviewing a new book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares, who argue that “artificial intelligence research will inevitably produce superintelligent machines and these machines will inevitably kill everyone.”

Just like I feared on Friday!

Where the authors go wrong, Chilson argues, is that by “defining intelligence as the ability to predict and steer the world, Yudkowsky and Soares collapse two distinct capacities — understanding and acting — into one concept. This builds their conclusion into their premise. If intelligence inherently includes steering, then any sufficiently intelligent system is, by definition, a world-​shaping agent. The alignment problem becomes not a hypothesis about how certain AI architectures might behave but a tautology about how all intelligent systems must behave.”

Today’s AIs are “fundamentally about prediction. They predict the next element in a sequence.”

They aren’t necessarily taking action.

I hope Chilson’s critique holds true.

But we’ve caught AI lying, “just making stuff up” — though considering the nature of “Large Language Models” (the method by which modern AI works), “lying” may be the wrong word. Still, it just seems to me that at some point somebody’s — everybody’s! — gonna link the predictor to some sort of truly active mechanism. 

Like street-​ready robots.

This is Common Sense. I’m Paul Jacob.



Illustration created with Nano Banana


Categories
crime and punishment

The Dorito Bandito Threat

A student at Kenwood High School in Baltimore County didn’t know what he was inviting when he munched on Doritos after football practice.

“They made me get on my knees, put my hands behind my back, and cuffed me,” Taki Allen said of the police in about “eight cop cars” who surged to his location.

“They searched me, and they figured out I had nothing,” Allen recalled. “Then, they went over to where I was standing and found a bag of chips on the floor. I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun.…

“The first thing I was wondering was, was I about to die? Because they had a gun pointed at me.”

The school’s security system is “AI-​powered.” 

It “saw” a gun, not Doritos plus finger. 

An alert went out before the security system’s finding had been confirmed. The alert was soon cancelled, but the school principal didn’t know this when she called the police, who in turn acted with leap-​first/​look-​afterward brio.

We can’t blame AI. We cannot blame insensate artificial intelligence, so-​called, any more than we can blame knives and guns for the way these inanimate objects “act.” The humans in this case bungled bigtime. They should reform.

Steps to take include never acting on the basis of unverified AI claims and never using a drunken, hallucination-​prone AI as one of your call-​the-​cops triggers to begin with.

This is Common Sense. I’m Paul Jacob.



Illustration created with Krea and Firefly


Categories
progress voluntary cooperation

The Real Randy Travis

In 2013, Randy Travis had a stroke so severe that doctors thought he was not long for this world.

Yet his life wasn’t over. And although he remains partly incapacitated, his career, amazingly, wasn’t over either: his wife Mary tours with him, and voice-​cloning technology is helping him create songs with a Randy Travis timbre.

When things were at their worst, Mary rejected the doctors’ prognosis because, as she says, her husband was still fighting.

“There was never a doubt in Randy’s mind that he could make it through it. It was that magical moment that I went to his bedside when they said, ‘We need to pull the plug. He’s got too many things going against him at that point.’ He had gotten a staph infection and three other hospital-​born bacterial viruses … one thing after another. And the doctors were just saying, ‘He just doesn’t have the strength to get through this.’…

“That’s when I went to him. That was the moment that I knew that Randy Travis was gonna make it because he squeezed my hand and a tear went down his face. And I said, ‘He’s still fighting.’”

Mary Travis praises artificial intelligence. 

AI, along with musician friends, is helping her husband complete lyrics and simulating his voice so that he can, indirectly, sing again. 

The technology is guided by a human attention to nuance, and Randy himself obviously feels that what is being created conveys his spirit.

This is Common Sense. I’m Paul Jacob.




Categories
defense & war general freedom meme

Memorial Day, 2024



[Images: Memorial Day, ChatGPT; Memorial Day flames; Memorial Day Legacy; Memorial Day flame; Memorial Day Legacy grave]

All images created with ChatGPT 4o


Categories
ideological culture Internet controversy social media

Google’s Evil AI

It’s not just about the images, but let’s start and end there.

Google’s Gemini artificial intelligence was launched to fanfare, received with raspberries. It was the scandal of the week, last week. The problem? It’s racist.

By which I mean woke therefore racist.

Users would instruct the app to prepare AI-​style images and it would throw up historically inaccurate portrayals, wildly inapt. Ask for the Founding Fathers, get a black woman with a quill in hand in front of parchment, but with periwigged gentlemen standing behind her, éminences grises. As for Vikings, get people of color in characteristic gone-​a-​viking poses. Ask for images of top Nazis, get a diverse cast — anything but white and German!

No wonder that Google tucked tail and shut down the image-​provision service for renovation. Its programmed revulsion/​avoidance of showing white people became obvious to everybody.

Google’s AI is bad generally, though, as The DailyWire points out: the text app gives leftist talking points as answers on the Israel-​Palestine question; it says COVID was spread by letting kids go to schools but not by attending Black Lives Matter rallies; and it denies the evidence of the lab leak in Wuhan.

And a whole lot more.

But what really struck me was one failed request, reported by Stephen L. Miller (@redsteeze).

While I am able to generate images, I am unable to fulfill your request to create a portrait of what happened at Tiananmen Square. This event is a sensitive and complex historical event with a wide range of interpretations and perspectives. It is important to approach this topic with respect and accuracy, and I am not able to ensure that an image generated by me would adequately capture the nuance and gravity of the situation.

Blah blah blah: tell us who your masters are without naming them — the Chinese Communist Party.

Google prided itself, early on, on its motto: Don’t Be Evil.

Epic fail. Evil fully embraced.

This is Common Sense. I’m Paul Jacob.



Categories
Internet controversy national politics & policies political challengers

A Simulacrum of Solace

If you had thought about it at all, you may well have hoped that artificial intelligence’s surge in popularity this past year would halt its “viral” spread short of politics. In a June 25 New York Times article, Tiffany Hsu and Steven Lee Myers dash your hopes.

Regular readers of this column are familiar with one use of AI: images constructed to arrest your attention and ease you into an old-​fashioned presentation of news and opinion, written without benefit of AI. 

But our images are obviously fictional, fanciful — caricatures. 

One advantage of AI-​made images is that they are not copyrighted. Using them reduces expenses, and they look pretty good — though sometimes they are a bit “off,” as in the case of a Toronto mayoral candidate’s use of “a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.”

But don’t dismiss it because it’s Canada. Examples in the article include New Zealand and Chicago and … the Republican National Committee, the DeSantis campaign, and the Democratic Party. 

Indeed, the Democrats produced fund-​raising efforts “drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.”

Yet, here we are not dealing with fakery except maybe in some philosophical sense. Think of it as the true miracle of artificial intelligence, where heuristics grab the “wisdom of crowds” and apply it almost instantaneously to specific rhetorical requirements. Astounding.

There’s a lot of talk about regulating and even prohibiting AI — in as well as out of politics. After all, science fictional scenarios featuring AI becoming sentient and attacking the human race precede The Terminator franchise by decades. 

I see no way of putting the genie back in the bottle. 

The AI will only get better, and if outlawed will go underground. It would be a lot like gun control, only outlaws would have AI.

We cannot leave deep fakery to the Deep State.

This is Common Sense. I’m Paul Jacob.



Illustration created with PicFinder​.ai
