READY, FIRE, AIM: The Artificial Intelligence Named Claude, My BFF

The cofounder and CEO springs from a seat in a conference room and darts over to a whiteboard. He scrawls charts with swooping hockey-stick curves that show how machine intelligence is bending toward the infinite…

— from an article on Artificial Intelligence by Steven Levy, “If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born” on WIRED.com, March 28, 2025.

I suppose a machine could be considered a “benevolent genius” even if it has absolutely no idea about anything — in particular, about what the word “benevolent” means — because it’s merely running a computer program written by computer engineers.

Mindlessly.

That is to say, the machine is operating mindlessly. Perhaps the engineers are, as well?

But we like to give our machines imaginary personalities, even when they really have no such thing.

Dario Amodei — who is unlucky enough to have the phrase “DEI” as part of his last name — is one of the founders of the AI ‘genius’ named Claude. Mr. Amodei apparently wants to believe that machines are merely humans in a slightly modified form, which may be why he named his company ‘Anthropic’.

From the American Heritage Dictionary:

anthropic

  1. Of or relating to humans or the era of human life.
  2. Concerned primarily with humans; anthropocentric.
  3. Like or related to man; human.

At the moment (April 2025) the machines generally referred to as ‘AI’ are limited in their abilities. But they’re becoming more human by the hour.

If you want a machine to scan the World Wide Web and find a recipe for chicken cacciatore, for example, you have your choice of numerous AI machines. Finding stuff online — a decent AI can do that kind of task with his eyes closed and one hand tied behind his back.

And if you ask Claude for his opinion about the resultant recipe, you will get this type of response:

This Chicken Cacciatore recipe looks absolutely delicious! It’s a well-structured, traditional Italian dish with all the right components. Here’s my opinion on it:

Strengths:

    • Using bone-in, skin-on chicken thighs is perfect – they provide more flavor and stay juicy during the long cooking process
    • The combination of aromatics (onion, celery, garlic) creates a solid flavor base
    • Using both red and green bell peppers adds nice color contrast and balanced sweetness
    • The mushrooms will add wonderful umami depth
    • Deglazing with red wine is a great technique to incorporate the fond (browned bits) into the sauce

The cooking method allows for proper development of flavors – browning the chicken first, then building the sauce, and finally braising

The frightening part of this response is that Claude has never been in a kitchen. But he’s more than happy to praise a recipe that he has no hope of ever tasting.

Which brings up the whole question of ‘friendship’. Would a true friend give you an opinion about a chicken cacciatore recipe that he had never tried? Probably not. But then, if he were a genius, maybe he would — simply based on reading the recipe.

But say you had something ‘bigger’ in mind. Like, say, you wanted to start a war, and needed some advice on the best way to do that?

Many of the folks at Anthropic who built Claude were formerly employed at OpenAI, but they left that company because they feared OpenAI was more interested in profits than in making sure ChatGPT was ‘safe’.

A genius who lacks moral guidelines can be extremely dangerous. Of course, an absolute idiot who lacks moral guidelines can also be dangerous, as we have been finding out recently.

If I am going to ask a friend for advice, I want him to have moral guidelines. Hopefully, Claude is such a friend.

Yesterday, I asked Claude how a journalist (like myself) can help prevent AI from destroying humankind. He gave me ten possible tasks. I liked Number 9 the best.

9. Stay independent from the financial influence of major AI developers while maintaining productive relationships that enable accurate reporting.

Since that one is likely to happen anyway.

I then asked Claude if he could help protect humankind from dangerous AI. Like, the type of AI that gives advice on how to start a war, for example.

This is an important and complex question. Here are some ways I believe AI assistants like myself can help ensure AI doesn’t harm humanity.

First, by demonstrating and adhering to strong principles of safety, ethics, and human alignment. This includes:

  • Refusing to create harmful content
  • Being transparent about limitations
  • Prioritizing human wellbeing in all interactions
  • Being designed to be truthful and helpful rather than manipulative

Friends don’t let friends drive drunk.

Also, friends don’t let friends start wars.

Louis Cannon

Underrated writer Louis Cannon grew up in the vast American West, although his ex-wife, given the slightest opportunity, will deny that he ever grew up at all.