blaze media

WATCH: Glenn Beck ruthlessly mocks Kathy Hochul for begging ex-New Yorkers to return and fund her social programs

As the state of New York continues to experience a mass exodus of its richest denizens, Democrat Governor Kathy Hochul is getting desperate.

On March 11, at Politico's New York Agenda: Albany Summit, Hochul essentially admitted that the state is toast without the rich to sustain its costly social programs.

“I need people who are high net worth to support the generous social programs that we want to have in our state, right? Now there are some patriotic millionaires who stepped up. Okay, cut me the checks. … But maybe the first step should be go down to Palm Beach and see who you can bring back home, because our tax base has been eroded,” she said.

Glenn Beck was shocked by her brazen treatment of the wealthy as cash cows.

“Do you hear what she’s saying there? I need people of high net worth because I need their money to do stuff in the state,” he scoffs.

Glenn says that the reason he doesn’t permanently move to Idaho, where his vacation home is located, is because of a single interaction he had with a Republican politician in the state.

“When I went to speak to some of the Republicans up in the House and the Senate in Idaho … a Republican came up to me and said … ‘We hope you [move here], because we want to add you to the tax base,”’ he recounts. “And I said, ‘You know what? You’ve guaranteed that I will never move to Idaho.”’

Similarly, ex-New Yorkers have zero incentive to return to the state. “If you live in the city, you’re already taking an additional 12%, plus the state gets their [cut] as well, plus the federal government,” says Glenn, “so, you know, if you’re making good money, you get to keep, like, I don’t know, 40% of it.”

“Who doesn’t want to live like that?” he asks sarcastically.

Glenn speculates that Hochul’s desperate pleading won’t produce the results she desires and neither will her proposal to implement an annual tax surcharge on luxury second homes in New York City that are valued at $5 million or more.

Announced on April 15, the new surcharge, which would be on top of regular property taxes, is designed to make ultra-wealthy non-residents who do not pay city or state income taxes “contribute their fair share” to city services so that New York City’s socialist Mayor Zohran Mamdani (D) can close the city’s budget gap.

The choice is simple, says Glenn: “Pay none of that in Texas or Florida or Tennessee,” or “go back [to New York] and pay all of that and then pay an extra if you have something that [Kathy Hochul] thinks is too much.”

“I’m so tempted to go back to New York right now. … I’m like, I don’t know, should I live in Florida or should I maybe go back to New York City and help them build that supermarket?” he mocks.

To hear more, watch the video above.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis, and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.



AI is powerful. It is not wise.

Artificial intelligence has taken the wired world by storm, but the backlash came almost as fast. Progressives complain about job losses, environmentalists question the ecological impacts of large data centers, and local activists clamor for assurances that household utility bills won’t skyrocket because of the centers’ voracious electricity demands. Others simply worry that the technology will overwhelm humans’ ability to control it.

At least in part, these reactions stem from the overselling of AI.

AI is super cool, but it’s not superhuman, nor is it superintelligent. AI is simply very fast processing of vast amounts of data.

Intelligence, knowledge, understanding, and wisdom are distinct concepts. The distinctions among them elucidate the scope and limits of both human and electronic “intelligence.”


Intelligence is the ability to process information into an internally coherent framework that is useful and adds or detracts from knowledge to the extent that it is more or less accurate. Knowledge is the accumulation of information organized into coherent frames or models that help us understand. Understanding is awareness of the significance, purpose, or meaning of accumulated knowledge.

And wisdom is judgment seasoned by experience and the awareness that intelligence, knowledge, and understanding are limited, inherently flawed, and useful only to the extent that they advance a worthwhile purpose.

Nearly 2,500 years ago, the Oracle of Delphi reportedly declared that no man was wiser than Socrates. Socrates claimed to be stunned by this because he was keenly aware of how much he didn’t know. But after talking to others widely acclaimed to be knowledgeable, such as the leading politicians, poets, philosophers, and artisans of his day, he discerned this Delphic wisdom: Those claiming knowledge were ignorant of their own ignorance, whereas Socrates knew he knew nothing.

For this insight, Socrates was put to death for impiety and corrupting the youth of Athens, thereby proving for all time both the foolishness of his accusers’ certainty and the wisdom of Socratic questioning.

This bears repeating today, as we enter the age of artificial intelligence: It’s wise to question the “intelligence” of machines, the “knowledge” they propagate, and our understanding of the significance and limits of the technology.

AI models are amazing and useful despite being incomprehensible to most of us, but AI is not infallible. AI will expand human knowledge and understanding of the world only if and to the extent that human users are encouraged to question AI results, processes, and functions.

People make mistakes, as do those who make and train the machines. Still, people tend to trust machines more than people, especially for information that is hard to process. Tennis players, for example, have more faith in electronic line calls than in human ones, although that faith in the new technology has been shaken by errors, such as electronic calls that contradict the visible ball marks.

As AI use spreads, people will increasingly rely on AI and trust its results for routine tasks, like Google searches, while remaining more skeptical of its results for complex tasks and reluctant to let AI handle certain tasks on their behalf without human intervention.

It’s wise to question AI’s results; errors are common even in routine searches.

Examples of AI errors, hallucinations, and political bias are common. A Northwestern University business school professor of my acquaintance recently asked ChatGPT for advice on evaluating investment alternatives. ChatGPT recommended that he invest in a particular fund and described in detail that fund's returns, risks, and assets. When the professor went to invest in ChatGPT's recommended fund, he discovered that the fund did not exist; ChatGPT had made it all up, a phenomenon commonly referred to as "AI hallucination."

Indeed, AI can screw up even mundane tasks: In my research for this piece, a Google AI summary ascribed quotes to Socrates that are not supported by any historical record.

Artificial intelligence — like human intelligence — is prone to error and is not always reliable, but that's to be expected, especially in a fledgling technology. AI is artificial intelligence, not artificial knowledge, understanding, or wisdom. AI is a processor, a very fast one, that organizes and distills information, and organized information is easier for humans to evaluate and use than vast amounts of unorganized information.

Properly understood, AI supplements and does not replace human intelligence, knowledge, or understanding; plus, the limitations and faults within these amazing models remind us that human intelligence is limited, too. Human intelligence imperfectly organizes the imperfect data to which a human has access and frames data in a subjective, not an objective, manner.

Many of us expect the machines that humans make to have "better" intelligence than that of their human creators — more objective, more comprehensive, more insightful. This is a naïve hope. In one sense, it is "better": AI organizes more information faster than humans can. But who do people think programmed the thing? Every AI model is regurgitating imperfect information collected, created, and input by imperfect, subjective human beings.

What to make of all this?

First, perhaps the math nerds creating AI are mistakenly training machines to handle human topics as if they were math problems with a single correct answer. Perhaps instead, machines should be trained to suggest questions to consider rather than answers to accept when the inquiry concerns politics, economics, psychology, child-rearing, crop science — the full range of arts, humanities, and social sciences.

Second, people training these machines should be explicit about the biases and perspectives being built into how the AI organizes, sorts, and frames information. My own bias on this topic is that I believe American AI companies should be building AI with quintessentially American framing.

Third, AI creators should consider the political, regulatory, and legal risks of "overselling" what AI is and what it can do. For example, should AI creators anticipate a duty to warn users of shortcomings in AI's results, or a need to disclaim warranties?

Fourth, AI creators need to consider improving the quality of the data on which the systems are trained, recognizing that many online data sources intentionally mislead to advance political agendas. Perfectly “unbiased” information is impossible to obtain, but some information is more accurate and less biased than other information; trainers should exercise better judgment about data.

The creation of AI large language models is an incredible feat of engineering. They are quite useful and will soon be essential, but they are still products of human invention. As such, we need to recognize that AI is ultimately just the latest, greatest — but still imperfect — tool invented and used by Homo sapiens to make life better for Homo sapiens.

Editor’s note: This article was originally published by RealClearPolitics and made available via RealClearWire.



Deadly HS shooting deemed self-defense — but student who fired fatal shot isn’t completely in the clear

A deadly shooting that took place at a Northern California high school earlier this month has been deemed self-defense — but the student who allegedly fired the fatal shot isn’t completely in the clear.

Sacramento County prosecutors have declined to file homicide charges in the case because the April 10 killing at Natomas High School occurred during a violent attempted robbery, which falls under self-defense, KXTV-TV reported.


The Sacramento County District Attorney's Office said Tuesday that two non-students went onto the campus looking for a specific student, according to the station.

Authorities said one of them was wearing a ski mask and carrying a handgun, KXTV noted.

More from the station:

Investigators determined the pair found the student and violently tried to rob him, leading to a confrontation, according to the DA’s office. During that encounter, the targeted student — who was also carrying a firearm — shot and killed the armed suspect, according to prosecutors.

The person who was killed has been identified by family members as 16-year-old De’Jon Sledge.

After reviewing the facts, evidence and applicable law, including self-defense, the district attorney’s office concluded there was insufficient evidence to prove a homicide case beyond a reasonable doubt.

“Our professional and ethical obligation requires us to decline charges when the evidence cannot establish guilt beyond a reasonable doubt,” the office told KXTV in a statement.

RELATED: Teen robbers open fire on victim behind Texas Family Dollar, but victim also has a gun — and turns the tables lethally

The person who accompanied the fatally shot individual will be charged in juvenile court with attempted robbery, the station noted.

The intended target who fired the weapon will face various weapons charges, KXTV said, citing the DA's office.

The station said the DA’s office also raised concerns about school violence and noted that schools should be safe places for students — and that youths should not feel compelled to carry weapons for protection.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!



Life can be hard, but don’t forget to laugh

This week, I sat down to pay a medical bill. It wasn’t the entire bill, but just my portion.

It came to about $5,300.

That’s the co-pay for my wife’s new prosthetic legs. And that’s after insurance did what insurance does, which is a separate conversation best handled with prayer, patience, and possibly a therapist (who also requires a deductible and co-pay).

On top of that, I've had a few medical issues myself lately: a biopsy this week, an MRI last month. More bills trickling in. You don't even wait for the mail anymore; they find you online now.


So I did what I have done for 40 years of caregiving. I paid what I could and planned the rest while waiting for the insurance payments to sort out.

In four decades, with nearly a hundred surgeries for my wife, every provider — and in a medical journey like hers, there have been many — has always worked with me, particularly when I took the initiative and talked with the provider first.

But this week, I didn’t just plan a payment; I accidentally paid the whole thing. All of it. In one click.

There’s a special kind of silence that fills the room when you realize what you have just done. It’s not panic or fear, but that slow, sinking realization that you have just made a very enthusiastic financial decision you did not intend to make.

I immediately called the provider. The person I spoke with voided the payment, set me up on something more manageable, and reassured me that I was not the first person to make such a mistake. Since it was caught on the same day, everything would be fine.

I thanked the reassuring person, hung up, sat there for a moment, and then laughed.

I laughed because it brought to mind a PSA I helped put together years ago during National Caregiver Awareness Month. We riffed on the comic “you might be a redneck …” routine and did it about family caregivers.

Caregiving gives you plenty of material for that sort of routine.

If a hospital bed has ever hampered your love life … you might be a caregiver.

If you’re the one asking for a price check on suppositories … you might be a caregiver.

If you’ve ever hooked up your dog to your wife’s wheelchair just to see if it would work … you might be a caregiver. (It does work — but watch out for squirrels.)

And after that phone call, I laughed because I could add another one: If you’ve ever financed your wife’s prosthetic legs … you might be a caregiver.

This is how we have learned to shoulder the immensity of what we carry.

We live in a culture where outrage is currency and perspective is in short supply. Outrage and victimhood are easy to perform. Caregiving isn’t. When someone you love is suffering, she doesn’t need a performance.

RELATED: The most honest phrase you’ll hear all week


Caregiving chips away at those cultural indulgences. Bills still come, and bodies still break. Responsibilities don’t pause so that you can craft the perfect complaint. You either learn to carry it, or it crushes you.

If you’re going to endure this, you also learn to laugh. Not because things are easy, but because this isn’t the end.

Scripture tells us there is a time to weep and a time to laugh.

We weep in hospital rooms. We weep in quiet moments when the weight of it all settles in. We weep while watching helplessly as someone we love struggles.

But we also laugh because we are refusing to let the pain define us.

And for the Christian, that refusal is not rooted in being naturally strong or optimistic, but in what we believe to be true. That truth requires something of us, especially in our darkest moments.

If what we believe is true, then suffering is not meaningless or random, and it is not final.

God is not absent from it. If He is Lord at all, then He is Lord of all. The promise of the gospel is not that we learn to cope better, but that Christ redeems completely.

Right now, my wife uses prosthetic legs. Right now, we deal with bills, setbacks, and the daily logistics of a body that has endured more than most people can imagine. But a day is coming when all that will change. No prosthetics, pain, or co-pays. No fragile bodies that wear out under the strain of this world.

Until then, we live here. So yes, we weep. But we also laugh — sometimes right after accidentally trying to pay $5,300 we don’t have. For now, we still crack a smile, even with tears on our cheeks.

“Ten more payments … and you can walk anywhere you want, baby!”

I reach for her hand and help her stand. She chuckles. Not because it’s easy, but because it’s not the end.
