
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Author Archives: Bill Rider

No One Knows What’s Going to Happen

28 Monday Oct 2024

Posted by Bill Rider in Uncategorized


Tags

donald-trump, fascism, history, politics, trump

“‘You’re overthinking it.’ ‘I have anxiety. I have no other type of thinking available.’” ― Matt Haig, The Midnight Library

tl;dr

The anxiety is sky-high everywhere you look. The uncertainty is huge and palpable. The upcoming election feels like either doom or massive relief, no matter who you support. In the meantime, everything seems perpetually frozen. We are all waiting to see what kind of world will greet us in 2025. Will we have hope, or enter an apocalyptic hellscape? The likely outcome is something in between, no matter who wins.

“Maturity, one discovers, has everything to do with the acceptance of ‘not knowing.’” ― Mark Z. Danielewski, House of Leaves

What Do I See?

The Nation seems to be in purgatory. It matters little whom you support; both sides are running on fear of the other. The stakes of the election seem impossibly high, and this is paralyzing everyone. All decisions and actions stemming from our governance have ground to a halt. I see it at work, where nothing is happening. Everything seems frozen in place, waiting for the resolution. That resolution could be swift on November 5th, and that would be kind and merciful. Or it could drag all the way into December and even to January 6th. That would be brutal, and the freeze would only deepen.

“Our anxiety does not empty tomorrow of its sorrows, but only empties today of its strengths.” ― C. H. Spurgeon

On the one hand, the society we know is being described in terms of doom and horror. For some people this feels true, and they crave change. It seems to me that they simply want to elect a destroyer who will sweep aside a reality that isn’t working for them, and they care little about the nature of the destruction. This is not entirely true of everyone, of course. Others (Elon Musk, Peter Thiel, …) see a system that stands in the way of their greed and domination. In Trump, they see their savior, or their ally, or their dupe, and the path toward annihilation of society’s order. I see the problems too, but want someone to fix them.

On the other side, we have normalcy. Ironically, this normalcy is both the problem and the strength. Part of the normal is the multiple factions comprising the Democratic party. There are many entrenched interests. We have the people who want social progress and acceptance for women and LGBTQ people. The biggest bloc is the educated and successful part of America. These people are generally okay and doing alright in the current system, and they see tearing it apart as dangerous. They don’t love the system and often see its imperfections, but they don’t want to destroy it; they would get on board to fix it. The key is that many people benefit from the current system.

“The only thing that makes life possible is permanent, intolerable uncertainty: not knowing what comes next.” ― Ursula K. Le Guin, The Left Hand of Darkness

What is the reality?

Somewhere between the nihilism of the Trump faction and the normies lies the truth. Our system is a fucking mess. We have profound inequality in society. We see those losing out, the poor and blue-collar folks, teaming up with the ultra-rich to take on the educated and reasonably well-off. Social and work life is incredibly uncomfortable, thanks to political, social, and sexual dynamics that form a powderkeg. We all walk around on eggshells almost everywhere. The homeless population is exploding; they are the sign that many are falling off the edge of society. We are not taking care of our citizens; we are throwing them to the wolves. The government over-regulates and is incredibly inefficient. Everything is getting worse and nothing is getting fixed (systems, roads, etc.). From where I sit, I can see multiple national security programs floundering under the weight of all of this.

At the forefront of our woes as a society are young men. Current society is not working for them. I see it in the young men I know personally and at work. Many of them are flocking toward Trump. His fake masculinity and toughness appeal to them. He puts on an MMA/WWE version of masculinity that is cartoonish. The problem for the Democrats is a lack of response. Tim Walz is part of that response; he represents a better, more modern form of masculinity, but his impact is dimming. The whole thing has taken gender politics to new dysfunctional highs. Women are under siege from the right, and the type of men the right promotes is truly toxic. The problem is that the Democrats do not offer something in return; they support movements that seemingly oppose men. This may cost them the election.

“Be the change that you wish to see in the world.” ― Mahatma Gandhi

What Do I Fear?

So you, reader, might be wondering why, with all the problems I see, I would support the normie point of view. I really don’t. The issue is that the Trump-MAGA movement won’t fix any problems. It only destroys and only works to make our problems worse. Trump will surely make the inequality worse and do nothing for the common man. He will give his followers “red meat” by attacking their enemies and doing various cruel things. At the same time, he will enable people like Elon Musk to get even richer. They will continue to exist in a world that 99.99% of Americans can’t fathom. Trump won’t eliminate political corruption. He will weaponize it for himself and bend it to his benefit. Putting a criminal and corrupt man in charge will only supercharge the problem.

“I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.” ― Frank Herbert

My real fear is that none of our problems as a Nation will be addressed for many more years. All the problems will just get worse while Americans continue to be divided into warring tribes. I fear bigotry and hatred will be legitimized and supercharged. Progress for women and LGBTQ people will be erased. The American evangelical movement will rule like an American Taliban, imposing their morality on everyone. The homeless population will grow and be criminalized rather than helped. Regulation will be destroyed, and greed will be pursued absent any morality or ethics. Those with money will escape law, morality, and justice with even greater ease. If you support Trump you are not necessarily a bigot, but you are okay with being ruled by one.

The worst thing I can imagine is dying with the epitaph: “born into a democracy; died under a dictator.” America will be swept aside and cast into the dustbin. Worse yet, we could descend into war or simply be auctioned off. If the level of incompetence is allowed to continue unabated, our Nation cannot survive. We will fall into incompetence and corruption fueled purely by greed and malice.

Were it the malice of a foreign invader, it wouldn’t hurt so much. That is the worst case, but there is little doubt that Trump 2.0 would be a giant shit show. Trump 1.0 was a shit show, but at least some adults with actual ethics were there to limit it. The adults are all gone now.

“An abnormal reaction to an abnormal situation is normal behavior.” ― Viktor Frankl

How to Cope?

The thing to remember most of all is that none of us can control what is going to happen. This is the result of forces and events beyond any of our control. We are taking part, but only in the smallest way. We are, for the most part, observers. We will react to the events, and our lives will be shaped by them. The shape of the future will be drawn by what is about to occur. This is a big deal. Not knowing what this future holds is the source of the anxiety.

I think the first thing to put your arms around is that things will be bad no matter what. It is a matter of degree. History, in the long run, is on the side of all of this shit working out. The USA has survived many horrible events and eras. We have continued to exist and even thrive through it all. We will most likely muddle our way through this disaster. In a sense, this is the answer of tragic optimism. Nonetheless, this is a moment of peril for the USA not experienced since the Civil War. Even the fascism of World War II didn’t pose this level of danger, because now the fascist threat is inside our Nation. About half the voters seem okay with being led by that fascist.

“Forces beyond your control can take away everything you possess except one thing, your freedom to choose how you will respond to the situation.” ― Viktor Frankl

Nonetheless, we should probably be OK, eventually. We’ve been alright before and weathered storms. The biggest question in that statement is how much blood will be shed to get us there. Can we navigate this crisis without killing a lot of our fellow citizens? Can we break the fever and start solving our very real problems in a rational, constructive way? The alternative is a rampant destruction of our institutions and governance followed by a reconstruction. At best, this will be a near-death experience. It will be a truly shitty way to exist.

Americans are fond of saying they hate the government. The thing is that our government is us, not some separate entity. The lesson is that we hate ourselves. The choice is ours, but I’m not confident we have the wisdom. It would be far better to rectify our problems and create a government we can love and be proud of, a government that reflects the best of our people and our legacy. In about a week, the future will begin to show itself.

“Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.” ― Albert Einstein

How to make a hydrocode robust

22 Tuesday Oct 2024

Posted by Bill Rider in Uncategorized


Tags

education, mathematics, philosophy, physics, science

tl;dr

Code users want answers no matter what. The way a code gets there matters a lot. There are useful but utterly unprincipled ways to achieve this goal. Worse yet, they are popular with users. It is far better to achieve this goal while adhering to basic principles that assure solution credibility. Here we discuss both approaches with an emphasis on choosing the principled way. The fundamental principle is to use consistency with the governing equations as bedrock. The other basic principles are adherence to conservation and the use of dissipation to produce entropy.

Robust is when you care more about the few who like your work than the multitude who dislike it (artists); fragile when you care more about the few who dislike your work than the multitude who like it (politicians).― Nassim Nicholas Taleb

The Need for Robustness

Recently I was in a high-level, high-stress meeting about the future of hydro codes at work. There were a large number of issues to deal with, but one issue lingered in the background. The meeting was pretty low on technical content, so the discussion focused on management stuff. It was all project plans, timelines, and human resources. All the technical stuff was very high level. Still, one issue is looming: the users demand a code that always gets answers. The incumbent legacy code is very good at this. The problem is that this is achieved in an appalling way. I’ll get to that.

I agree with robustness as a goal and the need to give users a code that always produces answers. I also think this should be done in a way that the answers aren’t suspect as bullshit. These codes are used to tackle all sorts of important problems by important people for important reasons. I get that. This points to a level of responsibility in assuring that the answers are defensible; that is, they aren’t bullshit. This requires that we adhere to some fundamental principles. Results that violate fundamental principles should be unacceptable.

Difficulty is what wakes up the genius― Nassim Nicholas Taleb

How to Make a Hydrocode Robust Incorrectly

This could have been titled “how to get bullshit results” with a hydrocode. Is anyone catching a theme here? I will freely admit that the practical solution of important problems often pushes code developers to do something awful. The biggest culprit is hydrodynamic turbulence. Since turbulence is not understood, people get away with this shit. Turbulence is fundamentally dissipative, too; this means that if you declare a flow turbulent, any extra dissipation seems justified. As one friend quipped, “If the ocean was as viscous as we make it, you could drive to Europe.” Nonetheless, there are bullshit ways to introduce turbulent dissipation, and codes do it. There are also legitimate ways to introduce dissipation, but they require more thought.

Unfortunately, we aren’t dealing with this sort of issue. We are dealing with something far less defensible. I think the root of it goes to a technique popular in finite element analysis. Its popularity is in no way based on correctness. This technique is called “element death.” Basically, if a finite element starts to become a problem, it is eliminated. The trigger could be a difficult shape (unphysical distortion or tiny edge lengths). It could be a difficult condition like a super high temperature or pressure, or a negative pressure or temperature. The approach is sort of cowardly: it treats only the symptoms of the problem and does jack shit about the cause. It is worse than that. It is absolutely destructive to any credibility, as I will elaborate shortly.

The hydrocode(s) decided to mimic this functionality. If a material in a calculation becomes difficult, it is deleted. These are multimaterial hydrocodes that solve complex problems with many materials. These codes routinely solve problems that encounter extreme conditions. These problems are intrinsically difficult. Sometimes the material gets completely out of line with reasonable physical conditions, achieving states that are implausible even in extreme situations. These conditions wreak havoc with a calculation. One fatal mechanism is causing the timestep size to plummet, making the calculation impossibly expensive.
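The timestep collapse follows directly from the CFL stability bound that any explicit hydrocode must respect. A minimal sketch, with illustrative numbers of my own choosing (not drawn from any particular code):

```python
# CFL stability bound for an explicit scheme: dt <= CFL * dx / (|u| + c).
# The global timestep is set by the worst cell in the mesh, so a single
# cell driven to an absurd sound speed stalls the entire calculation.
dx, cfl = 1.0e-3, 0.5                  # 1 mm cells, half-CFL (illustrative)
c_normal = 3.4e2                       # air-like sound speed, m/s
c_extreme = 3.4e8                      # implausible state in one bad cell
dt_normal = cfl * dx / c_normal        # ~1.5e-6 s: the run finishes
dt_extreme = cfl * dx / c_extreme      # ~1.5e-12 s: a million times smaller
```

One bad cell makes every cell in the mesh a million times more expensive to advance, which is exactly the pressure that tempts developers to delete the cell instead.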

The methodology simply decides to throw the material away if some limits are exceeded. This is protective and gets the code to run to the end. It gets the answer. It can completely annihilate the calculation’s credibility too. The reasons are numerous and relatively simple. In a nutshell, the foundations of computational modeling are being disregarded.
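As a caricature (my own toy example, not any production algorithm), material discard looks like this in one dimension, and the bookkeeping shows exactly what is lost:

```python
import numpy as np

# "Element death" in caricature: zero out any cell whose state exceeds a
# limit. The run continues, but total mass silently changes, so the
# discrete solution is no longer consistent with the conservation law.
dx = 0.1
rho = np.array([1.0, 1.0, 50.0, 1.0, 1.0])    # one cell in an "extreme" state
mass_before = rho.sum() * dx                   # 5.4

rho_dead = np.where(rho > 10.0, 0.0, rho)      # delete the troublesome cell
mass_after = rho_dead.sum() * dx               # 0.4: most of the mass is gone
```

No warning, no compensating flux anywhere else; the mass simply vanishes from the problem.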

All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.— Douglas Adams

How to Make Hydrocode Robust in a Principled Way

It is important to acknowledge the foundational principles of computational modeling. The fundamental theorem was discovered by Peter Lax in the early 1950s. The first requirement is that the numerical approximation reproduces the governing equations for the system plus some approximation error, often called a truncation error; the technical term for this is consistency. The second requirement is that the approximation is stable. Of all the governing equations, the conservation of mass is the most primal. Conservation of mass is unassailable and never the subject of debate. When it is disregarded, the entire system of governing equations goes with it. Without conservation of mass, the approximation is not consistent. The theorem being violated is absolute and cannot be quibbled with.
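The payoff of consistency plus stability is convergence (the Lax equivalence theorem). A small sketch for linear advection, with a scheme and test problem of my own choosing, shows the refinement behavior the theorem promises:

```python
import numpy as np

# Consistency + stability -> convergence, illustrated on u_t + a u_x = 0
# with first-order upwind differencing at CFL <= 1 and periodic BCs.

def upwind_error(nx, a=1.0, cfl=0.8, t_final=0.5):
    dx = 1.0 / nx
    dt = cfl * dx / a
    x = (np.arange(nx) + 0.5) * dx
    u = np.sin(2 * np.pi * x)                 # smooth initial data
    t = 0.0
    while t < t_final - 1e-12:
        step = min(dt, t_final - t)           # land exactly on t_final
        nu = a * step / dx
        u = u - nu * (u - np.roll(u, 1))      # upwind difference
        t += step
    exact = np.sin(2 * np.pi * (x - a * t_final))
    return np.max(np.abs(u - exact))

coarse, fine = upwind_error(50), upwind_error(100)
# Halving dx roughly halves the error: first-order convergence.
```

Break consistency (say, by deleting cells) and no amount of refinement brings the computed answer back to the true solution.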

Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.— Nassim Nicholas Taleb

Hydrocodes solve what mathematicians call hyperbolic conservation laws, usually for mass, momentum, and energy, which are interrelated. When mass is disregarded, momentum and energy go with it. Mathematically, conservation laws are solved by what are known as weak solutions. A weak solution can be discontinuous rather than smooth, and supports structures like shock waves. Capturing weak solutions requires the scheme to be conservative and written in conservation form (more work by Lax). Conservation form conserves these quantities in a calculation by construction. Conservation can be arranged by other means, but those approaches don’t provide assurance of weak solutions.
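A sketch of why conservation form works (a toy advection example of my own, not any production code): every face flux is added to one cell and subtracted from its neighbor, so the interior fluxes telescope and the total is preserved to roundoff regardless of what flux function is used.

```python
import numpy as np

# Conservation form: u_j^{n+1} = u_j^n - (dt/dx) * (F_{j+1/2} - F_{j-1/2}).
# The flux differences telescope over the (periodic) mesh, so the total
# of u cannot drift, by construction.
nx, dx, dt = 100, 0.01, 0.005
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)            # smooth bump, periodic domain
total0 = u.sum() * dx

for _ in range(200):
    F = u                                      # upwind flux for f(u) = u (a = 1)
    u = u - (dt / dx) * (F - np.roll(F, 1))    # flux differencing

drift = abs(u.sum() * dx - total0)             # zero to roundoff
```

Deleting a cell breaks exactly this telescoping: the neighbor's flux no longer has a partner, and the total walks away.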

You should never be surprised by or feel the need to explain why any physical system is in a high entropy state.― Brian Greene

The issue is that weak solutions are not unique. The way to pick out weak solutions that are correct and unique is dissipation. All of these conditions were derived by Peter Lax and various collaborators, and it is in these theorems that the principled answer to robustness can be found. The first principle is to regard conservation as essential. The second principle is to use dissipation as the response to problems with the solution, even though dissipation usually costs accuracy.
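As a concrete sketch (my own minimal example), the Rusanov, or local Lax-Friedrichs, flux adds dissipation proportional to the local wave speed; for Burgers' equation it produces the entropy-satisfying shock while conserving the total exactly:

```python
import numpy as np

# Burgers' equation u_t + (u^2/2)_x = 0 with the Rusanov flux:
#   F = 0.5*(f(uL) + f(uR)) - 0.5*max(|uL|, |uR|)*(uR - uL)
# The second term is explicit dissipation; it damps oscillations and
# selects the entropy-satisfying weak solution.

def rusanov_step(u, dt, dx):
    uL, uR = u, np.roll(u, -1)                 # states across each right face
    f = lambda v: 0.5 * v * v
    s = np.maximum(np.abs(uL), np.abs(uR))     # local wave-speed bound
    F = 0.5 * (f(uL) + f(uR)) - 0.5 * s * (uR - uL)
    return u - (dt / dx) * (F - np.roll(F, 1))

nx = 200
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)                # shock initial data
total0 = u.sum()
for _ in range(100):
    u = rusanov_step(u, dt=0.4 * dx, dx=dx)    # CFL = 0.4

# The solution stays bounded in [0, 1] and the total is conserved exactly.
```

The dissipation smears the shock over a few cells, which is the accuracy price, but the solution remains physical and conservative.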

You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.– John von Neumann

This is the key thought: if you would consider removing mass from a calculation, reduce accuracy instead. Once the material becomes compromised, accuracy can be disregarded. The way to deal with these materials is to dissipate the issue, spreading the problem out and diffusing it without sacrificing credibility. We can do this in a graded way. The key is to trade accuracy for dissipation. The underlying principle is that dissipation is physical; it is the mechanism of the second law of thermodynamics. The application of dissipation keeps solutions physical and credible. It just reduces accuracy, and a loss of accuracy is vastly superior to a loss of consistency. Moreover, dissipation usually improves stability too. You still end up with a physical solution that can be considered credible.
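A hedged sketch of what "dissipate, don't delete" could look like in one dimension (the threshold, coefficient, and trigger here are illustrative inventions of mine, not any code's actual algorithm): the extreme cell is smeared into its neighbors with flux-form diffusion, so total mass is untouched while the offending state relaxes.

```python
import numpy as np

# Instead of deleting a cell in an extreme state, apply conservative
# flux-form diffusion locally until the state relaxes below a limit.
# Accuracy drops where the diffusion acts, but mass is preserved exactly
# because every diffusive face flux is added to one cell and subtracted
# from its neighbor.

def dissipate_extremes(rho, limit=10.0, nu=0.25, max_passes=50):
    rho = rho.copy()
    for _ in range(max_passes):
        if rho.max() <= limit:
            break
        flux = nu * (np.roll(rho, -1) - rho)   # diffusive flux at right faces
        rho = rho + flux - np.roll(flux, 1)    # flux-form update (periodic)
    return rho

rho = np.array([1.0, 1.0, 50.0, 1.0, 1.0, 1.0, 1.0, 1.0])
out = dissipate_extremes(rho)
# out.sum() equals rho.sum() to roundoff; the spike is smeared, not deleted.
```

Contrast this with element death applied to the same array: deletion drops most of the mass on the floor, while the diffusive treatment merely redistributes it and lowers the local accuracy.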

The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. … if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.― Arthur Stanley Eddington

To Sum up the Argument

Disregarding the conservation of mass is not defensible. It is like employing amputation to treat infections when antibiotics are available. During the Civil War, infections led to huge numbers of amputations to save patients; the practice ended once antibiotics arrived. While discard and element death are not as barbaric, they are just as unnecessary today. Moreover, we know the harm they do. They shred credibility as the results lose consistency with the most fundamental law of physics, the conservation of mass. Instead, adhere to the fundamentals and get robustness by utilizing the wisdom of the basic math.

Entropy is just a fancy way of saying: things fall apart.― Dan Brown

References

Lax, Peter D. Hyperbolic systems of conservation laws and the mathematical theory of shock waves. Society for Industrial and Applied Mathematics, 1973.

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” In Selected Papers Volume I, pp. 125-151. Springer, New York, NY, 2005. (reprint of the seminal 1956 paper in Communications on Pure and Applied Mathematics)

Lax, Peter, and Burton Wendroff. “Systems of conservation laws.” In Selected Papers Volume I, pp. 263-283. Springer, New York, NY, 2005. (reprint of the seminal 1960 paper in Communications on Pure and Applied Mathematics)

Harten, Amiram, James M. Hyman, Peter D. Lax, and Barbara Keyfitz. “On finite‐difference approximations and entropy conditions for shocks.” Communications on pure and applied mathematics 29, no. 3 (1976): 297-322.

Previous Writing On this Topic

https://williamjrider.wordpress.com/2017/06/30/tricks-of-the-trade-making-a-method-robust/
https://williamjrider.wordpress.com/2016/07/25/a-more-robust-less-fragile-stability-for-numerical-methods/
https://williamjrider.wordpress.com/2015/07/10/cfd-codes-should-improve-but-wont-why/
https://williamjrider.wordpress.com/2014/12/03/robustness-is-stability-stability-is-robustness-almost/
https://williamjrider.wordpress.com/2014/11/21/robust-physical-flexible-accurate-and-efficient/

How is This Election So Damn Close?

05 Saturday Oct 2024

Posted by Bill Rider in Uncategorized


Tags

donald-trump, maga, news, politics, trump

tl;dr

By any account, the upcoming November election should not be close, yet it is. On one side stands an incompetent, criminal grifter without a hint of personal integrity. On the other, the sitting vice president with a record of achievement. Yet we teeter on the knife’s edge of re-electing the person widely regarded as the worst president to ever serve. His track record alone should be disqualifying. Something more profound must be at stake to enable this paradox. The answer lies in a deep, long-standing sense of broad-based dysfunction permeating society. The country feels in crisis and in desperate need of a new direction. Americans are poised to change course, even if that change proves suicidal. It is essential to chart a new path that leads to a better future.

“There comes a time when one must take a position that is neither safe, nor politic, nor popular, but he must take it because conscience tells him it is right.” ― Martin Luther King Jr.

Why the Question?

Every morning for the past couple of months, I’ve awakened to the genuine terror that Donald Trump might be re-elected president. Trump was an atrocious president before, judged by many historians as the worst in American history. He is a man devoid of morality, capable of constant lies and criminal conduct. His money and political power have been the only barriers between him and prison. Still, he has been adjudged a felon for covering up his misdeeds, lying to avoid taxes and secure credit, and committing sexual assault. He is the very definition of a grifter. In addition to his mendacity, he is a committed anti-intellectual.

“There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.'” ― Isaac Asimov

If the prospect of this king of morons being president wasn’t enough, we have Project 2025 to terrify us further. Trump is allied with people who envision a future America as a neofascist state ruled theocratically. All of this should make Trump unelectable. Choosing Trump as president is tantamount to societal suicide. Indeed, many of his followers see Trump as the man who crashes the plane, like Flight 93 on September 11, 2001. They literally view Trump as the figure who will destroy the system of government. To me, this sounds like treason. Trump has even acted treasonously, fawning over dictators like Putin. Again, he should be unelectable; yet he is not, and he may very well win.

With all of this said, there is even more to oppose Trump. His entire movement is predicated on minority rule. It goes beyond the structural elements built into the Constitution (electoral college, Senate, gerrymandering, etc.). The movement is devoted to voter suppression and denying people access to the ballot. To top all of this off, Trump’s direct actions exceed these factors. We have January 6th and the attempt to overturn the result of an already minority-skewed election. It was criminal and treasonous. All the while, half the electorate ignores this. Trump’s disregard for protecting national secrets further damns him. It is criminal and irresponsible. As someone with more than 30 years of experience working with national secrets, I am disgusted. If I had done a tenth of what Trump did, I would expect to be in prison for most of my remaining life. Yet he receives no punishment from the law or the electorate.

On the topic of the law, one can look at the courts. Chiefly, you have the Supreme Court stacked by the GOP. They are corrupt and horribly out of touch with the public and any rational reading of the Constitution. They started by flooding the political environment with money through the appalling Citizens United ruling. Lately, the awful decisions have continued with multiple murderous rulings allowing guns to permeate society. Citizens and children are slaughtered in the wake. The Dobbs decision removed the right to abortion from women and returned the matter to the states. It is notable that states’ rights are only used by the GOP to deny people’s rights, never to expand them. People should shudder at the thought of what comes next. Finally, we have the presidential immunity decision. This may well be the worst Supreme Court ruling in more than a century. Trumpism is marked by an incompetent, corrupt, and out-of-control judiciary.

This is a phenomenon that must be understood.

How Can Anyone Vote for This Monster?

It’s crucial to comprehend what motivates Trump voters. The initial reaction is one of disbelief: How can anyone vote for such a horrible man? Even if you’re a hardcore conservative, it’s obvious that he is a vile, despicable person. Yet somehow, he exudes a charisma that charms these people. There is a core of Trump voters who are themselves despicable racists, sexists, and violent, awful people. This group comprises something like 20-30% of the population. They are the true deplorables Hillary mentioned.

The real key is understanding what animates the rest of his support. I strongly believe this stems from a core of anti-establishment sentiment. My neighbor, a rabid Trump supporter, seems to be a genuinely good person. Thus, it feels curious that he can support someone so atrocious. In conversation, a clue emerged: a deep hatred of the establishment. This idea is grounded in a lot of reality. The issue is that Trump won’t fix the establishment; he will just destroy it. It is this anti-establishment sentiment that the left needs to harness.

One of the real problems is the force that animates politics. In the last five elections, two people have defined the outcomes. In 2008 and 2012, Obama dominated the election. He was charismatic and a once-in-a-generation talent. Being a biracial Black man drove insanity on the right against him. That racism is the same force that pulled Trump into politics. His birtherism lie was founded in racism and began his voyage to the center of politics. Once you strip away the political talent and identity, Obama becomes quite ordinary. He was a center-right president who accomplished a fair amount. In the view of the right, he was a Black man and, as such, a socialist. Seeing him as anything other than middle-of-the-road and a force of the establishment is fiction.

“The rights of every man are diminished when the rights of one man are threatened.” ― John F. Kennedy


The last three Democratic candidates have not been inspiring. The energy in voting for all of them is largely grounded in fear and revulsion of Donald Trump. The Democratic message is not compelling. It is largely pro-establishment. So we have had three elections driven by the character of Trump. There are the people who are Trump fans, who are mostly hopeless, lost, angry people. There are those attracted to his anti-establishment message. Then there are the hardcore Democrats combined with those disgusted by Trump.

The Bernie Sanders movement was the Democrats’ chance to grasp an anti-establishment message. Sanders lost to Hillary in 2016, and the Democrats became the establishment party. This became the moment when the stasis of American politics ossified. The ability to end Trump and MAGA’s stranglehold on reality could have been found if the Democrats had embraced some anti-establishment message. This is the path forward for a better future. My core belief is that the right wing cannot fix our problems. Their solutions are grounded in accelerating the forces undergirding our dysfunction. Many of these are associated with the way business and corporate governance are oriented.

The View from My Life

I am someone who sees huge problems within the nation. Our institutions are in distress. The establishment is failing the nation. The issue with Trump is that he won’t fix any of this; he will only make things worse. Trump will trash institutions and destroy or enable many of the forces that are already creating havoc. For example, Trump will only exacerbate the inequality in the nation. He will institute cruelty and hate as vehicles for change. He will enable the worst elements in society to find new depths of depravity. This will not make America great; it will only diminish the Nation.

I have worked for leading scientific institutions my entire professional life. Over the course of my career, these institutions have steadily declined into shells of their former glory. I have watched the edge the USA holds in science and technology fade away. Today, it is arguable that we have lost our advantage; if we haven’t lost it, we will very soon. Our government and leaders have been the vehicles of this destruction, and the destruction is quite bipartisan. In different ways, the trust my labs were granted by the nation is gone. Part of it is lack of funding and general suspicion of science from the right. A continual stream of investigations makes the labs risk-averse and incapable of the failures needed for progress. Both the left and the right have contributed to this. From the left, we have heavy regulation and a focus on things unrelated to science, along with their own suspicions of certain areas of science.


The end result is the hollowing out of competence and the destruction of science. This has become a huge threat to our national security. We have also seen a rather perverse belief that the labs should be run like businesses. The right has been quite eager to do this, and the left has assisted. This is patently absurd. The principles that work for business are absolutely not the way to run a research lab. The new corporate governance has been a catastrophic failure. All it has done is accelerate the decline of the labs. As I will note later, the corporate approach has other issues too. These have also led to the labs’ decline.

“You’re not to be so blind with patriotism that you can’t face reality. Wrong is wrong, no matter who does it or says it.” ― Malcolm X

We Have Big Problems

The key to capturing the anti-establishment vote is to appeal to the desire to fix things. In the absence of a fix, people move toward destruction. These days, the prospects for fixing anything seem remote. This is especially true given our divided government and structural blocks. The start toward solutions begins with admission that the problems exist. Virtually every American sees the problems as obvious and profound. Most of the problems are not amenable to half measures.

The key difference is whether one sees Trump as a solution or simply as making everything worse. In my view, he will make things much worse. The foundation of our problems is our corporate environment and vast inequality. We are approaching a level of disparity close to that of the Gilded Age, which is socially unsustainable. Trump will exacerbate this with tax cuts and policies that increase corporate greed. Our regulatory overreach is a direct result of the lack of corporate responsibility. A corporation will poison its own children to make a buck. They are regulated because they have no morals or ethics. Fixing this imbalance will not happen under Trump. He is the definition of greed and corporate graft. Corporations will be unleashed to gut the Nation and abuse the population.

Another cornerstone of our societal issues is the lack of trust. How is someone who lies reflexively, is selfish, and is a career criminal going to improve that? He won't! The acceptance of Trump is based on the lack of trust and only amplifies it. His voters simply accept his rampant corruption as the norm. Trump has normalized a whole raft of behaviors that used to destroy any politician. He is a misogynist who generally treats women as sex objects. This includes his own daughter! He has been judged a rapist. Many other women have accused him of sexual assault. He even admitted on tape to touching women without consent. Trump is the destruction of trust where the country needs repair.

“A paranoid is someone who knows a little of what’s going on.” ― William S. Burroughs

Finally, the country needs to recover competence. Trump was an incompetent President. He was constantly embarrassing. He sucked up to dictators, groveling in Putin’s presence. He does not read and has a minimal attention span. He utterly and completely lacks curiosity. All of these things are the hallmarks of incompetence as an executive, much less the top executive. Someone like this will not instill the competence in governance that the country badly needs. He will only further accelerate our decline into a shell of our former glory.

The Republicans are Trying to Hold onto a Past That is Gone

“I’m completely in favor of the separation of Church and State. … These two institutions screw us up enough on their own, so both of them together is certain death.” ― George Carlin

The greatest argument against Trump is found in the MAGA slogan “Make America Great Again.” Its adherents are looking to a past where America was the leading light of the world. That greatness was largely founded on our intact industrial base in a World destroyed by World War 2. The USA ruled because the rest of the World was in ruin. The MAGA people also fail to note that this era was racist, with Jim Crow alive and well. Women had a shadow of their current rights. LGBTQ people were all in the closet. The greatness was largely enjoyed by white men, while the rest of the population was discriminated against. Thus the greatness was quite tarnished.

The key thing to realize is that there is truth in the USA’s decline. We are less than we were. I can see the decline clearly where I work. I’ve worked at two of the USA’s greatest government labs: Los Alamos and Sandia. Both labs are shadows of their former greatness. I have seen this decline throughout my entire career. It is also clear that the decline started back in the late 1970s. If you look at what triggered the American decline, one person stands out: Ronald Reagan.

“Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron.” ― Dwight D. Eisenhower

The “Reagan revolution” was the start of America’s decline. It was largely a reaction to the changes of the 1960s and the embrace of the evangelical movement by the right. In addition, we saw the embrace of the current corporate culture combined with animosity toward workers. Reagan accelerated the attack on the worker and laid the foundation of the homeless crisis. The neoliberal view of corporate greed took hold, and the engine of vast inequality went into overdrive. Reagan also marked the beginnings of a racist backlash and the culture wars. All of these elements have metastasized in MAGA. Trump is greed, culture wars, and racism personified. Reagan didn’t make America great; he began to dismantle greatness.

“Absolute power does not corrupt absolutely, absolute power attracts the corruptible.” ― Frank Herbert

The path to greatness lies in moving forward. MAGA simply looks backward. It also looks backward through a lens that is quite distorted. Technology isn’t going away. We need to learn to deal with it. We need a corporate culture that cares about the impact on society as much as profits. We need a corporate culture that cares about the well-being of its employees. The harms of unbridled greed are everywhere in society. We need a more equal society where care and compassion rule. Today we have an unequal society where cruelty is tolerated. Reagan started the march toward inequality and the acceptance of cruelty. Trump simply takes this to a new level. It is time for a different trend to take root.

The Democrats Aren’t Trying to Solve Things

“Remember, remember always, that all of us, and you and I especially, are descended from immigrants and revolutionists.” ― Franklin D. Roosevelt

How do we get out of the electoral impasse?

The Democrats (or more properly the liberal progressives) need to start looking to solve the obvious national problems. This means they need to stop simply being the anti-MAGA, anti-Trump party. They need to stop supporting our failing institutions and approach to governance. If you are liberal, you believe that we need to be governed. We also need to be governed well. Being governed well means an efficient government. It means a competent government that is a good value for the money. Today we do not have that.

A big part of our inefficient government is the regulatory state. As noted above, the foundation of regulation is the overreach of corporate greed. Rather than run businesses in a way that is good for society, corporations only look to profit. They will do all manner of damage to society for profit. It is in how they are managed; they are only about maximizing shareholder value. Regulation is the societal response to this. Instead, we need corporations that regulate themselves for the good of society.

This requires new laws and new governance. We need to change the accounting of corporations to make them responsible to their communities and workers too. Take the way Meta’s products harm society and especially children. In the name of maximizing value, they have created platforms that harm politics, children, and society. Nothing except bad press stands in the way. Instead of holding back a little for the good of society, they maximize profit and damage. Right now, regulation is the only answer. Regulation is horribly inefficient. Efficiency comes from the corporation doing the right thing and removing the need for regulation.

I will just note that the GOP's answer is simply removing regulation from the picture. This will enable profit. It will also enable corporations to further damage society. They will be allowed to pollute. They will be allowed to treat workers poorly. It will just fuel more inequality and all the problems we already have. The Democrats need to offer something better than ex post facto regulation: a legal framework that rewards and encourages corporate social and societal responsibility.

We need to build trust across society. Empathy and compassion are key building blocks for trust. We need to tear down inequality. Trust comes from people relating to each other, and inequality makes people desperate and divided. We need to bring people together. GOP policies generate cruelty and meanness that undermine trust. Their tax and corporate policies generate more inequality, which in turn erodes trust. As long as we can buy our way out of justice, people will not trust the law. Trump has used money to buy his way out of justice and accountability his whole life. He is an unrepentant criminal as a result. He epitomizes the reasons no one trusts the system.

“Loyalty to country ALWAYS. Loyalty to government, when it deserves it.” ― Mark Twain

Finally, the Democrats need to fully embrace competence. This means turning away from bureaucracy. Nothing is less competent than a bureaucrat. We need to reinvigorate science and education. Trump is fueled by the lack of education. His popularity is a continual indictment of our educational system. This means adding robust vocational education. Colleges and universities need to become affordable. Right now, higher education is simply a vehicle for debt and a way for corporations to prey on people.

We need great laboratories. Over the past 40 years, we have allowed our great science labs to be destroyed. First the DoD labs, then NASA, and now the DOE labs are falling. They need to be built up. Bureaucracy needs to be removed. The current incompetent corporate-minded governance of the labs needs to go. It needs to be replaced by excellence in science. We need to empower scientists to fail, learn, and innovate. Today, none of these are really allowed. We have lots of rhetoric about these things, but the management really does the opposite.

“If by a “Liberal” they mean someone who looks ahead and not behind, someone who welcomes new ideas without rigid reactions, someone who cares about the welfare of the people-their health, their housing, their schools, their jobs, their civil rights and their civil liberties-someone who believes we can break through the stalemate and suspicions that grip us in our policies abroad, if that is what they mean by a “Liberal”, then I’m proud to say I’m a “Liberal.”

― John F. Kennedy

Code Verification Needs a Refresh

21 Saturday Sep 2024

Posted by Bill Rider in Uncategorized

≈ 2 Comments

tl;dr.

The practice of code verification has focused on finding bugs in codes. This is grounded in proving that a code correctly implements a method. While this is useful and important, it does not inspire. Code verification can be used for far more. It can be a partner to method development and to validation or application assessment. It can also provide expectations for code behavior and mesh requirements for applications. Together these steps can make code verification more relevant and inspiring, connecting it to important scientific and engineering work and pulling it away from pure computer science.

When you can measure what you are speaking about, and express it in numbers, you know something about it.

– Lord Kelvin

My Connection to Code Verification

Writing about code verification might seem like a scheme to reduce my already barren readership. All kidding aside, code verification is not the most compelling topic for most people, including those who make their living writing computational solvers. For me, it is a topic of much greater gravity. The objective here is to widen its scope and importance. Critically, I have noticed the problems getting worse as code verification fades from serious attention. All of this points to a need to think deeply about the topic. It is time to change how code verification is talked about.

“If you can not measure it, you can not improve it.”

– Lord Kelvin

My starting point for thinking about code verification is to look at myself. Code verification is something I’ve been doing for more than 30 years. I did it before I knew it was called “code verification.” Originally, I did it to assist my development of improved methods in codes I worked on. I also used it to assure that my code was correct, but this was secondary. This work utilized test problems to measure the correctness and more importantly the quality of methods. As I continued to mature and grow in my scientific career I sought to enhance my craft. The key aspect of growth was utilizing verification to exactly measure method character and quality. It was through verification that I understood if a method passed muster.

“If failure is not an option, then neither is success.”

― Seth Godin

Eventually, I developed new problems to more acutely measure methods. I also developed problems to break methods and codes. When you break a method you help define its limitations. Over time I saw the power of code verification as I practiced it. This contrasted to how it was described by V&V experts. The huge advantage and utility of code verification I found in method development was absent. Code verification was relegated to correctness through code bug detection. In this mode code verification is a spectator to the real work of science. I know it can be so very much more.

“I have been struck again and again by how important measurement is to improving the human condition.”

– Bill Gates

The Problem with Code Verification

In the past year, I've reviewed many different proposals in computational science. Almost all of them should be utilizing code verification integrally in their work. Almost all of them failed to do so. At best, code verification is given lip service because of proposal expectations. At worst, it is completely ignored. The reason is that code verification does not present itself as a serious activity for scientific work. It is viewed as a trivial activity beneath mention in a research proposal. The fault lies with the V&V community's narrative about it. (I've written before on the topic generally: https://williamjrider.wordpress.com/2024/08/14/algorithms-are-the-best-way-to-improve-computing-power/)

“Program testing can be used to show the presence of bugs, but never to show their absence!”

― Edsger W. Dijkstra

Let's look at the narrative chosen for code verification more closely. Code verification is discussed primarily as a means of detecting bugs in the code. A bug is detected when the code does not behave as a consistent solution of the governing equations: convergence toward the exact solution of those equations fails to match the order of accuracy designed into the method. This framing places code verification within software development and quality. That is an important topic, but far from a captivating one. At the same time, it distances code verification from the math, physics, and application-space engineering. Thus, code verification does not feel like science.

This is the disconnect. To earn focus in proposals and work, code verification needs to be part of a scientific activity. It simply is not one right now. Of all the parts of V&V, it is the most distant from what the researcher cares about. More importantly, this is completely and utterly unnecessary. Code verification can be a much more holistic and integrated part of the scientific investigation. It can span all the way from software correctness to physics and application science. If the work involves developing better solution methodology, it can be the engine of measurement. Without measurement, "better" cannot be determined and is left to bullshit and bluster.

“Change almost never fails because it’s too early. It almost always fails because it’s too late.”

― Seth Godin

What to do about it?

The way forward is to expand code verification to include activities that are more consequential. To discuss the problem constructively, the first thing to recognize is that V&V is the scientific method for computational science. It is essential to have correct code, so the software correctness and quality aspects of code verification remain important. But if one is doing science with simulation, the errors made in the simulation are more important. Code verification needs to contribute to error analysis and minimization. Another key part of simulation is the choice of methods used; code verification can be harnessed to serve better methods. The key point is that these additional tasks are absent from the standard account of what code verification is. This is an outright oversight.

Appreciate when things go awry. It makes for a better story to share later.

― Simon Sinek

Let's discuss each of these elements in turn, but first we should cover some technical details of code verification practice. The fundamental tool in code verification is using exact solutions to determine the rate of convergence of a method in a code. The objective is to show that the code implementation produces the theoretical order of accuracy. This is usually accomplished by computing errors on a sequence of different meshes.

The order of accuracy comes from numerical analysis of the truncation errors of a method. It usually takes the form of a power of the mesh size. For a first-order method, the error is proportional to the mesh size; for a second-order method, it depends on the square of the mesh size. This all follows from the analysis, with the error vanishing as the mesh size goes to zero (see Oberkampf and Roy 2010).
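As a toy illustration, the observed order can be extracted from errors measured on two meshes. The error values here are invented purely to show the arithmetic:

```python
import math

# Hypothetical errors measured on two meshes, the finer one with half the
# spacing of the coarser (the usual factor-of-two refinement).
e_coarse, e_fine = 4.0e-3, 1.0e-3
refinement_ratio = 2.0

# Observed order of accuracy: p = log(E_coarse / E_fine) / log(r)
p = math.log(e_coarse / e_fine) / math.log(refinement_ratio)
print(p)  # halving the mesh cut the error by 4x, so p = 2: second order
```

If the observed order falls short of the method's designed order, either the implementation has a bug or the solution is not smooth enough for the asymptotic analysis to apply.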

The grounding of code verification is found in the work of Peter Lax. He discovered the fundamental theorem of numerical analysis (Lax and Richtmyer 1956). This theorem says that a method is a convergent approximation of the partial differential equation if it is consistent and stable. Stability means getting an answer that does not fall apart into numerical garbage; practically speaking, stability is assumed when the code produces a credible answer to problems. Consistency means the method reproduces the differential equation plus an ordered remainder. The trick of verification is that you invert this: you use a convergent sequence of solutions to infer consistency. This is a bit of a leap of faith.

“Look for what you notice but no one else sees.”

― Rick Rubin

The additional elements for verification

The most important aspect to add to code verification is a stronger connection to validation. Numerical error is an important element in validation and application results, yet currently code verification is divorced from validation. This makes it ignorable in the scientific enterprise. To connect better, the errors measured in verification work need to be used to understand mesh requirements for solution features. This means the exact solutions used need to reflect true aspects of the validation problem.

Current verification practice pushes this activity into the background of validation. For "bug hunting" code verification, the method of manufactured solutions (MMS) is invaluable. The problem is that MMS solutions usually bear no resemblance to validation problems. For people concerned with real problems, MMS problems hold no interest and offer no guidance. Instead, verification problems should be chosen that feature phenomena and structures like those being validated. Then error expectations and mesh requirements can be determined, and code verification can be used as simple pre-simulation work before validation-ready calculations are done. Ultimately this will require the development of new verification problems. This is deep physics and mathematical work, and today it is rarely done.
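To make the manufactured-solution workflow concrete, here is a minimal, self-contained sketch of my own devising (a model Poisson problem, not a production verification suite): manufacture u(x) = sin(πx) for -u'' = f on [0, 1], derive the source f(x) = π² sin(πx) by substitution, solve with a second-order scheme, and confirm the observed order matches the theory.

```python
import math

def solve_mms(n):
    # Manufactured solution u(x) = sin(pi x) for -u'' = f on [0, 1] with
    # u(0) = u(1) = 0; plugging u in gives the source f(x) = pi^2 sin(pi x).
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    # Second-order central differences give the tridiagonal system
    #   -u[i-1] + 2*u[i] - u[i+1] = h^2 * f(x[i]),  i = 1 .. n-1,
    # solved here with the Thomas algorithm.
    a = [-1.0] * (n - 1)                      # sub-diagonal
    b = [2.0] * (n - 1)                       # diagonal
    c = [-1.0] * (n - 1)                      # super-diagonal
    d = [h * h * math.pi**2 * math.sin(math.pi * x[i]) for i in range(1, n)]
    for i in range(1, n - 1):                 # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]                     # back substitution
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    # Max-norm error against the exact manufactured solution
    return max(abs(u[i - 1] - math.sin(math.pi * x[i])) for i in range(1, n))

e16, e32 = solve_mms(16), solve_mms(32)
order = math.log(e16 / e32) / math.log(2.0)
print(f"observed order = {order:.2f}")  # should sit near the designed order, 2
```

This is the classic bug-hunting use of MMS: had the stencil or the source term been coded wrong, the observed order would degrade and flag the defect.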

The next big change is connecting code verification more actively to method and algorithm research. Code verification can be used to measure the error of a method directly. Again, this requires a focus on error instead of convergence rate. The convergence rate is still relevant and needs to be verified, but methods with the same convergence rate can have greatly different error magnitudes. For more realistic problems, the order of accuracy does not determine the error. It has been shown that low-order methods can outperform higher-order methods in terms of error (see Greenough and Rider 2004).

“There is no such thing as a perfect method. Methods always can be improved upon.”

–  Walter Daiber

Code verification is useful in all aspects of developing a method. The base activity of making sure the implementation is correct remains. What I am suggesting in addition is the ability to assess the method dynamically, on a wide range of problems biased toward application- and validation-inspired problems. In terms of winning support from those doing science, the application- and validation-inspired problems are essential. This is also where code verification fails most miserably. The best example of this failure can be found in shock wave calculations.

“If you can’t measure it, you can’t change it.”

– Peter Drucker

Let's take a brief digression into how verification is currently practiced for shock wave methods. Invariably, the only time you see detailed quantitative error analysis is on a smooth, differentiable problem. Such a problem has no shocks and can be used to show a method has the "right" order of accuracy. This is expected and common. The only value is the demonstration that an nth-order method is indeed nth order. It has no practical value for the use of the codes.

“Measure what is measurable, and make measurable what is not so.”

– Galileo Galilei

Once a problem has a shock in it, the error analysis and convergence rates disappear from the work. Problems are only compared in the "eyeball norm" to an analytic or high-resolution solution. The excuse is that the convergence rate with a discontinuity is one or less. The reality being ignored is that the error magnitude can be very different (see Greenough and Rider 2004). When I tried to publish a paper that used errors and convergence rates to assess methods with shocks, the material had to be deleted. As the associate editor told me bluntly, "if you want to publish this paper get that shit out of the paper!" (see Rider, Greenough and Kamm 2007)

Experts are the ones who think they know everything. Geniuses are the ones who know they don’t

― Simon Sinek

Why is this true? Part of the reason is the belief that accuracy no longer matters once a shock is present. The failure is to recognize how different the errors can be. This has become accepted practice. Gary Sod introduced the canonical shock tube problem that bears his name; it has been called the "Hello World" problem for shock waves. In Sod's 1978 paper, the run times of different methods were given, but errors were never shown. The comparison with the analytical solution was qualitative, the eyeball norm. Subsequently, this became the accepted practice. Almost no one computes the error or convergence rate for Sod's problem or any other shocked problem.
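Quantitative verification of a discontinuous problem is not hard to do. Here is a sketch on the simplest possible surrogate I could pick (linear advection of a square pulse with first-order upwind; the setup is mine, chosen so the exact solution is a trivial translation rather than a full Riemann solution):

```python
import math

def upwind_pulse_error(n):
    # Linear advection u_t + u_x = 0 on a periodic [0, 1] mesh of n cells,
    # first-order upwind, CFL = 0.5. The initial square pulse travels a
    # distance of 0.25, so the exact answer is just the translated pulse.
    h, cfl = 1.0 / n, 0.5
    steps = int(round(0.25 / (cfl * h)))
    u = [1.0 if 0.25 < (i + 0.5) * h < 0.5 else 0.0 for i in range(n)]
    for _ in range(steps):
        # u[i - 1] wraps to the last cell at i = 0: periodic boundary
        u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n)]
    exact = [1.0 if 0.5 < (i + 0.5) * h < 0.75 else 0.0 for i in range(n)]
    return sum(abs(a - b) for a, b in zip(u, exact)) * h   # discrete L1 error

e1, e2 = upwind_pulse_error(200), upwind_pulse_error(400)
rate = math.log(e1 / e2) / math.log(2.0)
print(f"L1 errors {e1:.4f}, {e2:.4f}; observed rate = {rate:.2f}")
```

The observed L1 rate lands near 1/2, well below the smooth-problem order of one. That degraded rate is exactly why reporting the error magnitude, and not just the rate, matters for discontinuous problems.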

“One accurate measurement is worth a thousand expert opinions.”

– Grace Hopper

As I have written and shown recently, this is a rather profound oversight. The error level for a given method matters far more when the convergence rate is low: the lower the convergence rate, the more important the error is. Thus we are not displaying the errors created by methods in exactly the conditions where they matter most. This is a huge flaw in the accepted practice and a massive gap in the practice of code verification. It needs to change.

“The Cul-de-Sac ( French for “dead end” ) … is a situation where you work and work and work and nothing much changes”

― Seth Godin

My own practical experience speaks volumes about the need for this. Virtually every practical application problem I have solved or been associated with converges at low order (first order or less). The accuracy of methods under these circumstances means the most to the practical use of simulation. Because of how we currently practice code verification, applied work is not impacted. There is a tremendous opportunity to improve calculations using code verification. As I noted a couple of blog posts ago, the lower the convergence rate, the more important the error is (https://williamjrider.wordpress.com/2024/08/14/algorithms-are-the-best-way-to-improve-computing-power/). A low-error method can end up being orders of magnitude more efficient. This can only be achieved if the practice and scope of code verification expand. That will also draw it together with the full set of application and validation work.
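A back-of-the-envelope calculation shows the payoff (the error constants below are invented for illustration): under the error model E ≈ C·hᵖ, the mesh needed to hit a target error, and hence the cost, depends strongly on the constant C when p is low.

```python
# Mesh spacing needed to reach a target error under the model E = C * h**p.
def mesh_for_target(C, p, target):
    return (target / C) ** (1.0 / p)

# Two hypothetical first-order methods (p = 1) that differ only in their
# error constant; both look identical to rate-only code verification.
h_a = mesh_for_target(1.0, 1.0, 1.0e-3)   # error constant C = 1
h_b = mesh_for_target(0.1, 1.0, 1.0e-3)   # error constant 10x smaller

# In a 3D transient calculation the cost scales roughly like h**-4 (three
# space dimensions plus a time step tied to the mesh), so the 10x coarser
# mesh that method B can afford makes method A about 10**4 times costlier.
cost_ratio = (h_a / h_b) ** -4
print(cost_ratio)
```

At first order, a tenfold smaller error constant buys a tenfold coarser mesh, and the cost scaling turns that into roughly four orders of magnitude. Rate-only verification is blind to this entire effect.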

More related content (https://williamjrider.wordpress.com/2017/12/01/is-the-code-part-of-the-model/, https://williamjrider.wordpress.com/2017/10/27/verification-and-numerical-analysis-are-inseparable/, https://williamjrider.wordpress.com/2015/01/29/verification-youre-doing-it-wrong/, https://williamjrider.wordpress.com/2014/05/14/important-details-about-verification-that-most-people-miss/, https://williamjrider.wordpress.com/2014/01/31/whats-wrong-with-how-we-talk-about-verification/)

“If it scares you, it might be a good thing to try.”

– Seth Godin

Roache, Patrick J. Verification and validation in computational science and engineering. Vol. 895. Albuquerque, NM: Hermosa, 1998.

Oberkampf, William L., and Christopher J. Roy. Verification and validation in scientific computing. Cambridge University Press, 2010.

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” In Selected Papers Volume I, pp. 125-151. Springer, New York, NY, 2005.

Roache, Patrick J. “Code verification by the method of manufactured solutions.” J. Fluids Eng. 124, no. 1 (2002): 4-10.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Sod, Gary A. “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws.” Journal of Computational Physics 27, no. 1 (1978): 1-31.

Algorithms Advance in Quantum Leaps

14 Saturday Sep 2024

Posted by Bill Rider in Uncategorized

≈ Leave a comment

tl;dr. Algorithms shape our world today. When a new algorithm is created, it can transform a computational landscape. These changes happen in enormous leaps that take us by surprise. The latest changes in artificial intelligence are the result of such a breakthrough. It is unlikely to be followed by another breakthrough soon, which will reduce the apparent pace of change. For this reason, the threats of doom and promises of vast wealth are overblown. If we want more progress, it is essential to understand how such breakthroughs happen and what their limits are.

“The purpose of computing is insight, not numbers.”

– Richard Hamming

We live in the age of the algorithm. In the past ten years, this has leapt to the front of mind with social media and the online world, but it has actually been true ever since the computer took hold of society. This began in the 1940s with the first serious computers and numerical mathematics. A new, improved algorithm drives the use of the computer forward as much as hardware does. What people do not realize is that the improvements that get noticed come in practically quantum leaps. These are the algorithms that get our attention.

“I am worried that algorithms are getting too prominent in the world. It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”

– Donald Knuth

Now that the internet has become central to our lives, we need to understand this. One reason is understanding how algorithms create value for businesses and stock market valuations. How do these sorts of advances fool people about the pace of change? We should also know how these breakthroughs are made. How likely are we to see further progress? How can we create an environment where advances are possible? And how does the way we fund and manage work destroy the ability to continue progress?

“You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances.”

– Hannah Fry

Two examples from recent years come to mind to illustrate these points. The first is the Google search algorithm, PageRank. The second is the transformer, which elevated large language models to the forefront of the public's mind in the last two years. What both of these algorithms illustrate clearly is the pattern for algorithmic improvement: a quantum leap in performance and behavior followed by incremental changes. These incremental changes are basically fine tuning and optimization. They are welcome, but they do not change the World. The key is realizing the impact of the quantum leap from an algorithm and putting it into proper perspective.

Google is an archetype

Google transformed search and the internet and ushered algorithms into the public eye. Finding things online used to be torture as early services tried to produce a "phone book" for the internet. I used Alta Vista, but Yahoo was another example. Then Google appeared and we never went back. Once you used Google, the old indexes of the internet were done. It was like walking through a door: you shut it and never looked back. This algorithm turned the company Google into a verb, a household name, and one of the most powerful forces on Earth. Behind it was an algorithm that blended graph theory and linear algebra into an engine of discovery. Today's online world and its software are built on the foundation of Google.
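The blend of graph theory and linear algebra is simple enough to sketch in a few lines. This toy version follows the standard textbook formulation of PageRank (power iteration with a damping factor of 0.85); the four-page "web" is made up for illustration:

```python
# Power iteration for PageRank on a made-up four-page web. links[p] lists
# the pages that page p points to; d is the usual damping factor.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85

rank = [1.0 / n] * n                  # start from a uniform distribution
for _ in range(100):                  # iterate toward the stationary ranking
    new = [(1.0 - d) / n] * n         # "teleportation" term for a random surfer
    for page, outs in links.items():
        share = d * rank[page] / len(outs)
        for dest in outs:             # each page splits its rank over outlinks
            new[dest] += share
    rank = new

best = max(range(n), key=lambda i: rank[i])
print(best)  # page 2, which collects links from pages 0, 1, and 3
```

A page is important if important pages link to it: that circular definition is exactly an eigenvector problem on the link graph, which the power iteration above solves.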

“The Google algorithm was a significant development. I’ve had thank-you emails from people whose lives have been saved by information on a medical website or who have found the love of their life on a dating website.”

– Tim Berners-Lee

Google changed the internet, transformed search, and demonstrated the power of information. All of a sudden, information was unveiled and shown to be power. Google unleashed the internet as a transformative engine for business, and for society as well. The online world we know today owes its existence to Google. We also need to acknowledge that Google today is a shadow of the algorithm of the past. Google has become a predatory monopoly and the epitome of the "enshittification" of the internet, the process of getting worse over time. This is because Google now pursues profits over performance. Instead of giving us the best results, it sells spaces for money. This process is repeated across the internet, undermining the power of the algorithms that created it.

“In comparison, Google is brilliant because it uses an algorithm that ranks Web pages by the number of links to them, with those links themselves valued by the number of links to their page of origin.”

– Michael Shermer

The Transformer and LLMs

The next glorious example of algorithmic power also comes from Google (Brain): the transformer. Invented at Google in 2017, this algorithm has changed the world again. With a few tweaks and practical implementations, OpenAI unleashed ChatGPT. This was a large language model (LLM) trained by ingesting large swaths of the internet. The LLM could then produce results that were absolutely awe-inspiring, especially in comparison to what came before; suddenly an LLM could produce almost human-like responses. Granted, this is true if that human was a corporate dolt. Try to get ChatGPT to talk like a real fucking person! (just proved a person wrote this!)

“An algorithm must be seen to be believed.”

– Donald Knuth

These results were great even after OpenAI lobotomized ChatGPT with reinforcement learning that keeps it from being politically incorrect. The LLMs won’t curse or say racist or sexist stuff either. In the process the LLM becomes as lame as a conversation with your dullest coworker. The unrestrained ChatGPT was almost human in its creativity, but also prone to sexist, racist, and hateful speech (like people). It is amazing how much creativity was sacrificed to make it corporately acceptable. It is worth thinking about how this reflects on people. Does the wonder of creativity depend upon accepting our flaws?

Under the covers of the foundation models at the core of ChatGPT is the transformer. The transformer has a couple of key elements. One is the ability to devour data in huge chunks, perfectly fitting modern GPU chips. This has allowed far more data to be used and transformed NVIDIA into a multi-trillion-dollar company almost overnight. This efficiency is only one of the two bits of magic. The real magic is the attention mechanism, which determines how the model weighs its context when producing results. The transformer allows longer, more complex instructions to be given, and multiple instructions to guide its output. The attention mechanism has led to fundamentally different behavior from the LLMs. Together these elements demonstrate the power of algorithms.
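The core of the attention mechanism is small enough to sketch. This is the scaled dot-product attention of Vaswani et al. (2017) in its barest form, without the multi-head machinery, masking, or learned projections of a real implementation; the sequence length and dimensions are arbitrary choices for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; a softmax over the scores
    turns them into weights that mix the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
Q = rng.standard_normal((seq_len, d_model))
K = rng.standard_normal((seq_len, d_model))
V = rng.standard_normal((seq_len, d_model))
out = scaled_dot_product_attention(Q, K, V)          # shape (4, 8)
```

The key property is that every position can draw on every other position at once, in one batch of matrix multiplies, which is exactly what GPUs are built for.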

“Science is what we understand well enough to explain to a computer. Art is everything else we do.”

– Donald Knuth

The real key to LLMs is NOT the computing available. A lot of capable computing helps and makes things easier. The real key to the huge leap in performance is the attention mechanism that changed the algorithm. This produced the qualitative change in how LLMs functioned and the sort of results that made the difference. It was not the computers; it was the algorithms!

The world collectively lost its shit when ChatGPT went live. People everywhere freaked the fuck out! As noted above, the impact could easily have been more profound without the restraint imposed by reinforcement learning. Nonetheless, feelings were unleashed that made it seem we were on the cusp of exponential change. We are not. The reason is key to understanding the change: the real difference with these new LLMs was predicated on the character of the transformer algorithm. Unless the breakthrough of the transformer is repeated with new ideas, the exponential growth will not happen. Another change will come, but likely not for a number of years.

A look at the history of computational science reveals that such changes happen more slowly. One cannot count on algorithmic breakthroughs. They happen episodically, with sudden leaps followed by fallow periods. The fallow periods are spent optimizing the breakthrough and making incremental change. As 2024 plays out, I have become convinced that LLMs are like this. There will be no exponential growth into the general AI that people fear. The transformer was the breakthrough, and without another breakthrough we are on a plateau of performance. Nonetheless, like Google search, ChatGPT was a world-changing algorithm. Until a new algorithm is discovered, we will be on a slow path of change.

“So my favorite online dating website is OkCupid, not least because it was started by a group of mathematicians.”

– Hannah Fry

Computational Science and Quantum Leaps from Algorithms

To examine this sort of algorithmic growth in performance, we can look at examples from classical computational science. Linear algebra is an archetype of this sort of growth. Over the span of years from 1947 to 1985, algorithmic improvements matched the performance gains from hardware. This meant that Moore’s law for hardware was amplified by better algorithms. Moore’s law itself is the result of multiple technologies working together to create exponential growth.

In the 1940s, linear algebra was done with dense matrix algorithms that scale cubically with problem size. As it turned out, most computational science applications produce sparse, structured matrices. These can be solved more efficiently, with quadratic scaling. This was a huge difference: for a system of 1000 equations it is the difference between a million and a billion in terms of the work done and the storage taken on the computer. Further advances came with Krylov methods and ultimately multigrid, where the scaling is linear (1000 in the above example). These are all huge speedups and advances. A key point is that these changes occurred over a span of 40 years.
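The scaling arithmetic is easy to check with a quick sketch. The exponents are the textbook scalings for each generation of solver; constant factors are ignored.

```python
# Work estimates for solving a system of n equations, one per algorithmic era.
n = 1000
dense_work = n ** 3      # 1940s: Gaussian elimination on a dense matrix
sparse_work = n ** 2     # banded/sparse direct solvers for structured matrices
multigrid_work = n       # multigrid: optimal, linear scaling

print(dense_work, sparse_work, multigrid_work)
# 1000000000 1000000 1000 -- a billion vs. a million vs. a thousand
```

Each leap buys three orders of magnitude at this problem size, and the gap widens as n grows.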

These changes are quantum in nature: the performance of the new algorithm leaps by orders of magnitude. The new algorithm allows new problems to be solved and is efficient in ways the old algorithm is incapable of. This is exactly what happened with the transformer. In between these advances, the current algorithm is optimized and gets better, but the fundamental performance does not change. Nothing amazing happens until something is introduced that acts fundamentally differently. This is why there is a giant AI bubble. Unless another algorithmic advance is made, the LLM world will not change dramatically. The power of, and fears around, AI are overblown. People do not understand that this moment is largely algorithmically driven.

These leaps in performance are not limited to linear algebra. In optimization, a 1988 study showed a 43,000,000-fold improvement in performance over a 15-year period. Of this, a factor of 1,000 was due to faster computers, while a factor of 43,000 was due to better algorithms. Another example is the profound change in hydrodynamic algorithms based on transport methods. The introduction of “limiters” in the early 1970s allowed second-order methods to be used on the most difficult problems. Before limiters, second-order methods produced oscillations that resulted in unphysical results. The difference was transformative. I have recently shown that the leap in performance is about a factor of 50 in three dimensions. Moreover, the results also comport with basic physical laws in ways the first-order methods cannot reproduce.
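The limiter idea can be illustrated with a minimal sketch. Minmod is one of the simplest members of the limiter family; this toy 1D slope computation is my own illustration of the principle, not the flux-corrected transport scheme of Boris and Book.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: take the smaller slope when the two
    one-sided slopes agree in sign, and zero at an extremum or jump."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Limited cell-centered slopes for a 1D field. In smooth regions the
    scheme keeps a second-order slope; at a discontinuity the limiter
    drops the slope to zero, preventing the unphysical oscillations an
    unlimited second-order method would produce."""
    left = u[1:-1] - u[:-2]    # backward differences
    right = u[2:] - u[1:-1]    # forward differences
    return minmod(left, right)

# A smooth ramp followed by a jump: the ramp keeps slope 1, the jump gets 0.
u = np.array([0.0, 1.0, 2.0, 3.0, 10.0, 10.0])
slopes = limited_slopes(u)   # [1.0, 1.0, 1.0, 0.0]
```

This adaptivity is the whole trick: the method is high-order where the solution is smooth and falls back to robust first-order behavior exactly where it must.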

How do algorithms leap ahead?

“This is the real secret of life — to be completely engaged with what you are doing in the here and now. And instead of calling it work, realize it is play.” ― Alan Watts

Where do these algorithmic breakthroughs come from? Some come from pure inspiration, where someone sees an entirely different way to solve a problem. Others come through the long slog of seeking efficiency, where deep analysis yields profound observations that lead to better approaches. Many arise from giving people the room to operate playfully. This playful space is largely absent in the modern business and government world. To play is to fail, and to fail is to learn. Today everything is planned, and everyone should know that breakthroughs are not planned. We cannot play; we cannot fail; we cannot learn; breakthroughs become impossible.

“Our brains are built to benefit from play no matter what our age.”

– Theresa A. Kestly

The obstacles to algorithmic advancement are everywhere in today’s environment. Lack of fundamental trust leads to constrained planning and a lack of risk-taking. Worse yet, failure is not allowed as the essential engine of learning and discovery. This attitude is pervasive in the government and corporate systems. Basic and applied research both lack funding, and what funding exists is not free to pursue open problems.

In the corporate environment the breakthroughs often do not benefit the company where they are discovered. The transformer was invented at Google (Brain), but the LLM breakthrough was made by OpenAI, and its greatest beneficiary is Google’s rival Microsoft. A more natural way to harness the power of innovation is government funding, where laboratories and universities can produce work that enters the public domain. At the same time, the public domain is harmed by various information-hiding policies and a lack of transparency. We are not organized for success at these things as a society. We have destroyed most of the engines of innovation. Until those engines are restarted, we will live in a fallow time.

“There is no innovation and creativity without failure. Period.”

― Brené Brown

I see this clearly at work, where we argue about whether to keep using 30-, 40-, and 50-year-old algorithms rather than invest in the state of the art. People convince themselves this is fine because the customers like the code. The code is deemed modern because it is written in C++ instead of Fortran. The results feel good simply because they come from the most modern computing hardware. Our “leadership” does not realize that this approach yields a substandard return on investment. If the algorithms were advancing, the results would be vastly improved. Yet there is little or no appetite to develop new algorithms or to invest in the research to find them. That sort of research is deemed too failure-prone to fund.

“Good scientists will fight the system rather than learn to work with the system.”

– Richard Hamming

Page, Lawrence, Sergey Brin, Rajeev Motwani, and Terry Winograd. “The PageRank citation ranking: Bringing order to the web.” Stanford InfoLab Technical Report, 1999.

Vaswani, Ashish, et al. “Attention is all you need.” Advances in Neural Information Processing Systems 30 (2017).

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Boris, Jay P., and David L. Book. “Flux-corrected transport. I. SHASTA, a fluid transport algorithm that works.” Journal of computational physics 11, no. 1 (1973): 38-69.

The True Cost of Safety and Security

02 Monday Sep 2024

Posted by Bill Rider in Uncategorized

≈ Leave a comment


“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.”

― Benjamin Franklin,

The Trigger

The other day I headed into work for a face-to-face meeting. The meeting was an hour long, interesting, and thought-provoking. It also showed the utter disregard for the cost of actions in two ways. This meeting would cost me far more than an hour due to outright stupidity and a lack of proper consideration. This sort of stupidity is rampant across society today. It is destroying productivity and research and threatening our Nation’s security.

After the meeting I immediately got caught in a traffic jam trying to leave the Air Force base where my Lab is. This happens all too frequently and is maddening. The traffic was jammed up for more than an hour. It was also lunch hour, so more people than usual were on the road. It was a Friday, which mitigated some of the hassle since so many people work from home or don’t work Friday.

Why did this happen?

I heard from a friend that a motorcycle had run the security checks at the gates. This prompted the guards to institute a complete lockdown. This is the safest and most secure thing to do. Can’t take a risk, right? It is done without a thought for the costs. Thousands of people working on the base are frozen in place. As I’ve learned, the cost of my time at work is rather extreme, about $350 per hour. So if 1,000 people like me are put out for an hour, that comes to $350,000. If it is 2,000 people, it is $700,000.

Is this worth the expense? No.

A rational response would be for the guards to chase the interloper down and arrest them. There is no reason to close the gates. Surely they do so out of caution, the extreme caution that permeates society today. This caution always operates on the view that no cost should be considered if safety or security is at risk. This attitude is absolutely irrational. It exacts costs from society at large that actually harm safety and security in the long run. It gets to the core of why we can’t accomplish great things today.

We choose the short term appearance of safety and security. This choice is destroying our long term safety and security as I will describe next.

“You tell them – you tell them there’s a cost…..Every decision we make in life, there’s always a cost.”

― Brad Meltzer,

It is everywhere one looks

If one looks around, we see a lack of progress everywhere. We can’t build anything. Public works projects take forever (the Big Dig in Boston, or high-speed rail). Government projects run over cost and over schedule. Examples abound, such as the F-35. Almost across the board, our essential weapons systems are over cost and behind schedule. Under the covers, the problem is the prioritization of safety and security without weighing benefit against cost. For me, the example of what my time costs is exhibit A. That cost is driven by an out-of-control safety and security culture.

In my own life I am confronted with deep security concerns that revolve around outlandish scenarios. The scenarios are plausible, but unlikely. The security people are granted carte blanche to impose mitigations for the possible impacts. The costs in time and productivity are never considered. The programs funding research and useful work simply eat the costs. Worse yet, it would seem that safety and security professionals are actually rewarded for raising concerns. Their recommended mitigations are never put to the test of weighing the cost against the benefit gained.

Take the recent federal guidance regarding medical devices as an archetypal example. Medical devices are becoming increasingly complex and are integrating new technology. A good example is a pacemaker with Bluetooth built in. The Bluetooth increases the safety and benefit for the patient (my dad has one). The device can be checked often and remotely to monitor the patient and the health of the device. Yet paranoia about Bluetooth makes this a security concern. Another great example is Bluetooth-enabled hearing aids, which improve life for the hard of hearing.

What person in their right mind would accept a worse pacemaker simply to satisfy outlandish security concerns? Moreover, why would an employer ask someone to do this? It is the height of absurdity. You are either demanding that someone risk their health or removing them from the workforce. Frankly, I’m disgusted that this choice is imposed on anyone. It is an absurdly low-probability threat used to justify a profoundly high-consequence response. Yes, something could possibly happen, but it is fantastically unlikely. It is not worth the cost of losing the efforts of the professionals removed from the workforce.

Much of this insanity calls back to the issues of trust discussed in the last post.

“As we care about more of humanity, we’re apt to mistake the harms around us for signs of how low the world has sunk rather than how high our standards have risen.”

― Steven Pinker

The TSA is exhibit 1

The lack of cost-benefit consideration is perhaps clearest in the TSA’s practices. Some asshole tried to light his shoes on fire on a plane more than 20 years ago, and we still waste time over it. Richard Reid, who was an idiot, tried to blow up a plane with a shoe bomb, a fantastically stupid plot that got him imprisoned for life. In reaction, people still have to take off their shoes at security, and our liquid containers are limited to 100 ml. We keep doing this more than 20 years later with no end in sight. The cost of these measures on society is huge, while the benefit is fleeting and highly questionable.

Let’s look at the cost more closely. For me, if I take 10 trips a year, the extra time for these measures is perhaps 10 minutes each way. This tallies up to over 3 extra hours a year. Now multiply this by 100 million people and my $350-per-hour rate, and you get to roughly $100 billion. This is undoubtedly an overestimate, but it is still a huge cost nonetheless.

The time penalty is unambiguous. Some 300 million hours a year comes easily to 420 human lifetimes. We have been doing this for 20 years, so we are rapidly coming up on wasting nearly 10,000 human lifetimes on this moronic safety measure, along with what is likely more than a trillion dollars in lost productivity. All of this because we can’t manage to examine the cost versus the benefit of this measure. It is a perfect over-reaction to the act of one complete idiot, by more idiots who weaponize safety and security.
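These figures hold up to a back-of-the-envelope check. The trip count, traveler count, and the ~80-year lifetime are assumptions for the sketch, not official statistics.

```python
# Rough check of the time-cost figures; all inputs are assumptions.
travelers = 100_000_000
hours_per_person = 3.0                 # ~10 extra minutes each way, 10 trips/yr
total_hours_per_year = travelers * hours_per_person   # 300 million hours

lifetime_hours = 80 * 365.25 * 24      # one ~80-year life, in hours
lifetimes_per_year = total_hours_per_year / lifetime_hours
# roughly 430 lifetimes a year, in the neighborhood of the 420 quoted;
# over 20 years that is on the order of 8,500 lifetimes
```

The exact inputs matter less than the order of magnitude: hundreds of lifetimes a year evaporate into security lines.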

We constantly hear the din of bullshit claiming that safety and security can be perfect, with zero incidents. What a load of shit! Merely seeking this outcome means exacting huge costs for fleeting benefits. I’m not talking about taking wild risks, but rather operating with common sense while reducing risk. Zero risk means zero progress and infinite cost. In many ways the desire for perfect safety and security is a power grab by those employed to do this work, and that desire is a disservice. They should be tasked with delivering good results at reasonable cost. Today we just write them a blank check. It is no wonder we can’t get work done and projects all come in late and over budget. This attitude exacts even greater costs on our long-term safety and security.

As an example of the impact of this lunacy: on any given day, the safest thing to do is stay home and stay inside. You shield yourself from driving, traffic, getting hit by a car, exposure to a virus, and all sorts of other dangers. You maximize your safety for that day. But if you run your life like this every single day, you will ruin it. You will destroy your health, be lonely, and fail to live. To live requires danger and risk. It requires putting yourself out there. Why do we think this safety-at-all-costs attitude is right for society as a whole? It is not, and the costs on all of us are piling up. We are not living as we should.

Let’s get to the root of it

I’m not naive enough to believe this attitude comes out of the blue. A big reason is the loss of trust pervading society. We have had a huge regulatory response over the last 40-50 years that is itself a response to the lack of trust. Corporate behavior is a major reason for this. Under the mantra of maximizing shareholder value, corporations will do all sorts of horrible things. Look at Facebook for an example: it will commit all manner of harm to society to maximize clicks and ad revenue. Nothing except regulation stands in the way of doing harm to make money.

A proviso in the mantra of maximizing shareholder value is doing so within the confines of the law. The problem that has arisen over the past several decades is the capture of politics by money, and by corporate money in particular. Increasingly we see corporations, or those enriched by them, defining what the laws are. They are increasingly outside the reach of the law, and the judiciary is aligned with this end. Most acutely, numerous Supreme Court decisions are accelerating the process. The most infamous of these is Citizens United, which unleashed vast sums of corporate money to distort our politics. The only end of this process is an accelerating loss of trust. Without a change, this will end in violence, or the end of democracy, or both. The same forces are dismantling regulation, which was the bulwark against and response to these forces to start with.

This mindset puts the whole onus on prevention and none of the focus on improvement and progress. Progress is the main path to both security and safety. We are rapidly devolving into a society without enough trust to allow progress. Progress under conditions of trust is the way forward. Progress in science and culture has led to a better life for all. Medicine has eased suffering and extended lives. Science has given us a myriad of wonders like air travel and the internet. We have gained cultural equality for women and the LGBTQ community. Racial discrimination has faded from cultural centrality. More progress is needed, but crucially, the progress made is at risk. All of it is threatened by the forces destroying the essential trust human society depends upon.

“What is progress? You might think that the question is so subjective and culturally relative as to be forever unanswerable. In fact, it’s one of the easier questions to answer. Most people agree that life is better than death. Health is better than sickness. Sustenance is better than hunger. Abundance is better than poverty. Peace is better than war. Safety is better than danger. Freedom is better than tyranny. Equal rights are better than bigotry and discrimination. Literacy is better than illiteracy. Knowledge is better than ignorance. Intelligence is better than dull-wittedness. Happiness is better than misery. Opportunities to enjoy family, friends, culture, and nature are better than drudgery and monotony. All these things can be measured. If they have increased over time, that is progress.”

― Steven Pinker

The costs are bigger than one can imagine

If I look closely at my life, I can see the real cost of all this in the decay of American research institutions. Over the past 40 years, the great government laboratories have been destroyed by this dynamic. The lack of trust and the inability to understand the benefits of research are crushing our science and technology edge. The labs of the Department of Defense and NASA are shadows of their former glory. Remember that the internet came out of defense research. NASA started down the road to ruin after the moon landing, then took its final blows by the end of Reagan’s disastrous presidency. Now NASA is being brought further down by relying on Boeing for transport, and Boeing is in the middle of riding shareholder-value maximization to the ruin of the company.

The DOE-NNSA labs are a last bastion of American research, and they are close to ruin. Over the course of my career, the labs have been destroyed by the same attitudes. I distinctly remember my first ten years at Los Alamos being magical. The lab was a wonderful crucible of knowledge, research, and learning. Staff were generous with their time and expertise. I grew as a professional and flourished. Then it all ended. Like other institutions, fear and lack of trust entered. Actually, it was already declining; friends tell me the lab was even better in the years before I arrived. The generous spirit dried up and was replaced by suspicion and control. In the process, research started to lose quality.

Breakthroughs no longer happened regularly, with budgets and money becoming the focus. No safety or security measure is too extreme; cost be damned. Management became business-like, with private companies as the model of governance. The same attitudes revolving around maximizing shareholder value replaced curiosity, inquiry, and duty to the nation. Maximizing shareholder value has no meaning for the labs, and yet it is their philosophy of governance. It has become toxic for companies (e.g., Boeing); it is idiotic for the labs and a vehicle for catastrophe. Now the great labs of the USA are mere shadows of their former selves. We are all poorer for this. Recent studies have shown that the USA has lost its advantage in most science and technology, ceding its edge to China, India, and Europe. We are to blame. The model of governance holds the murder weapon. Underneath it is the lack of trust infusing society. The pursuit of safety and security without regard for cost accelerates the process.

Let’s look at a couple of ways our fear and lack of trust play into this. Computer technology is the lifeblood of recent progress. We can see four distinct advances that shaped this period: Google search (it’s a verb now), the iPhone (the smartphone), social media, and large language models (i.e., ChatGPT). None of these came from the government labs. If you work at a government lab, these advances are treated with fear, as risks. Their power is blunted by fearful, trust-lacking management. Rather than harnessing the power of these breakthroughs, the labs ban them or castrate them with rules. We see fear of technology and an inability to enable its power. Frankly, we are a bunch of fucking morons and cowards, with no leadership pushing back.

Let’s take this a step further. We see rules that seek to protect our work from prying eyes. Increasingly, everything we do is classified in some way (lots of “official use only,” now “controlled unclassified information”). This is just a way to hide things and avoid interacting with the world. If we had a huge advantage this might make sense, but we don’t. We are not in the lead, and these rules simply hold us back. They kill progress, and progress is what we need most of all now. The best thing all this hiding of information does is conceal how incredibly incompetent we are. Increasingly, we are hiding the embarrassingly backward state of our technology. I am closing my career shaking my head at the collapse of our scientific supremacy.

“As people age, they confuse changes in themselves with changes in the world, and changes in the world with moral decline—the illusion of the good old days.” ― Steven Pinker

We are Lost Without Trust

24 Saturday Aug 2024

Posted by Bill Rider in Uncategorized

≈ Leave a comment

If we do not trust one another, we are already defeated.

– Alison Croggon

Most mornings I walk our dog, Duke, at a park near my house. The park is next to an elementary school. Here I see direct evidence of how little trust Americans have in each other. When kids walk to school, it is with their parents. You even see parents with kids at bus stops within eyeshot of the house. You never see a kid walking alone to school; in fact, this seems to be unthinkable today. When I think back on my own childhood, all I remember is walking myself to school, or later walking with my brother. Usually I would walk the small distance to school with friends.

The significant change is in social and societal trust. We no longer believe that children walking to school are safe. We fear all sorts of terrible things happening to them, many of which are figments of people’s imaginations and highly unlikely dangers. It’s instructive to compare the period from 1968 to 1976, when I walked to school, to the present day, when no one allows their child to walk to school. Regardless, this is a concerning sign for the health of our nation. Ultimately, we can’t avoid the fact that bad things happen and are inevitable (shit happens!). They aren’t blameworthy, but without trust, blame is readily assigned. Without trust, people play it safe to avoid the blame.

“We are all mistaken sometimes; sometimes we do wrong things, things that have bad consequences. But it does not mean we are evil, or that we cannot be trusted ever afterward.”

― Alison Croggon

If you look at the United States today you see a nation where no one trusts each other. The impacts from this lack of trust are broad. It is important to look at what trust allows and its lack prohibits. Not trusting is expensive and it limits success. Those costs and impacts are hurting Americans left and right. I see it play out in my life. If we look around this damage is everywhere. It is evident in the politics today. It is evident in our personal lives too.

Fear is driving this change in behavior. Parents are terrified of sexual predators and random violence harming their children, despite the incredibly low probability of such events occurring. This reflects a common aspect of our low-trust society: we manage low-probability events at great cost. This phenomenon is widespread, as we incur enormous expenses to mitigate minuscule risks. In the case of children, this is ruining childhood. Socially, we see loneliness and isolation. For society as a whole, building or creating anything new becomes difficult and expensive. Everything costs more and takes longer due to the lack of trust.

I thoroughly enjoyed writing about a technical topic. It was both enjoyable and fulfilling, like cooking a delicious meal that you enjoy eating, made even better when someone else appreciates it. However, there’s an underlying issue playing out behind the scenes. While there’s a clear benefit to conducting risky research, something is hindering progress. Ultimately, the reason we do not realize the benefits of risky research is a lack of trust. Risky research requires failure, and lots of it. Without trust, failure becomes unacceptable and suspicious. Without trust, people become cautious, and caution hinders progress. Caution leads to stagnation and decline, which is precisely what we observe across the country.

“Trust is the glue of life. It’s the most essential ingredient in effective communication. It’s the foundational principle that holds all relationships.”

― Stephen R. Covey

Trust is best understood within our most intimate and important personal relationships. Whether it’s a romantic partner or friend, trust is foundational. When trust is lost in these relationships, they are at risk. If trust is not repaired, it can lead to the end of the relationship. Studies have shown that trust is built through several essential behaviors.

The first is authenticity, which involves presenting yourself as your true self. Faking your personality has the opposite effect and fosters suspicion. The second aspect is competence in areas relevant to the relationship. This could be athletic ability or intellectual prowess. Finally, trust requires demonstrated empathy, a deep care and concern for the well-being of others. The person you trust will understand your feelings and care about your welfare.

Stephen Covey’s The Speed of Trust provides valuable insights into the benefits of trust. The book demonstrates how trust can enhance efficiency and productivity. When trust exists, remarkable achievements are possible. Trust is contagious; when we are trusted, we trust others. Trust enables individuals to perform at their best, and organizations to achieve their highest potential. Conversely, a lack of trust is slow and costly. It is destructive. When we don’t trust, we tend to make mistakes and hinder progress. Lack of trust is the root of many fuck-ups.

The leaders who work most effectively, it seems to me, never say ‘I.’ And that’s not because they have trained themselves not to say ‘I.’ They don’t think ‘I.’ They think ‘we’; they think ‘team.’ …. This is what creates trust, what enables you to get the task done.

– Peter Drucker

The decline in American trust can be traced back to the 1970s. Several events shattered the spell of trust that had held the United States since the end of World War II. The upheavals of the 1960s had already begun to erode trust, with a generational divide, the civil rights movement, and a misguided war. The Nixon administration’s criminal actions then exposed corruption at the highest levels of government. Nixon prioritized his own interests and power, seeking revenge against the culture he disliked. While Nixon’s religiosity may have distinguished him from Trump, his conduct nonetheless marked a decline in trust.

Other factors contributed to the unraveling of trust in the United States. The mid-1970s marked a peak in economic equality. Americans could comfortably achieve middle-class status with a single blue-collar income. People across the nation enjoyed a more level playing field, fostering empathy and trust. This shared experience and common culture allowed for authenticity to flourish. The nation was thriving and a global economic powerhouse, demonstrating competence. However, the energy crisis of the mid-1970s challenged these elements. The economy suffered, and blue-collar industries took a hit, further eroding trust.

“Never trust anyone, Daniel, especially the people you admire. Those are the ones who will make you suffer the worst blows.”

― Carlos Ruiz Zafón

The 1980s introduced new factors that undermined these trust drivers. The Reagan Revolution, characterized by a focus on business success through tax cuts and legal changes, significantly increased corporate wealth and power. The simultaneous assault on labor further weakened the ability of blue-collar jobs to provide a comfortable living. This marked the beginning of a widening economic inequality in the United States, which continues to grow today. This inequality erodes all aspects of societal and social trust, as people now live vastly different lives and hold radically different views of success. Consequently, people struggle to understand one another. This lack of understanding undermines empathy and destroys trust.

Other societal developments have accelerated the loss of trust. The terrorist attacks of September 11, 2001, led to a decline in trust and a rise in fear. The fear-based responses and societal changes that followed have persisted. Instead of progress toward a more inclusive society, division and bigotry are on the rise. The internet and the attention-driven economy have further exacerbated these trends. The cumulative effect of these factors is a massive political and cultural divide. The lack of trust now extends to the political system, threatening democracy itself and potentially spiraling further into an abyss.

“I’m not upset that you lied to me, I’m upset that from now on I can’t believe you.”

― Friedrich Nietzsche

Trust is cultivated through countless small acts. It was top of mind this past week, when its absence was repeatedly demonstrated to me. I saw a clear gap between what was said privately and what was said publicly. This inconsistency was painful to experience and significantly damaged trust in an important relationship. At work, I observe technical accuracy and competence being overshadowed by expediency. People hesitate to engage openly on topics for fear of retaliation. All of this stems from, and exacerbates, a lack of trust.

Building and fostering trust is paramount in all these situations. Trust in our relationships, with our coworkers, and among our fellow citizens is essential. With trust, things will improve, but it requires courage and effort. Trust is a product of strong character. It unleashes competence and grows alongside it. Trust is efficient and the foundation of success. We need leadership that guides us toward trust and away from fear and suspicion. This involves identifying the factors that have eroded trust and changing course. Many people benefit from these trust-destroying elements and will resist. To achieve trust, society needs to become more equitable, with a deeper shared culture. We need a spirit that can envision a future where trust prevails. Living, relating, and working in a place of trust is a far more positive experience.

Trust is the highest form of human motivation. It brings out the very best in people.

– Stephen Covey

Algorithms are the best way to improve computing power

14 Wednesday Aug 2024

Posted by Bill Rider in Uncategorized

≈ 5 Comments

A return to what I do best

For the first time in six years, I’m returning to writing about a topic within my professional field. This is where my true expertise lies, and frankly, it’s what I should be focusing on. If I were being cynical, I’d say this subject is entirely unrelated to work since it lacks any organizational support. It is clearly not our chosen strategy. Given that it’s neither a funded nor managed strategy, it’s essentially a hobby. Yet, it represents a significant missed opportunity for advancing several critical fields. Moreover, it highlights broader issues with our leadership and aversion to risk even when the rewards are large.

“Never was anything great achieved without danger.”

― Niccolo Machiavelli

Years ago when I blogged regularly, I often wrote one or two posts about upcoming talks. Some of my best work emerged from this process, which also enhanced the quality of my presentations. Writing is thinking; it forces deep reflection on a talk’s narrative and allows for refinement. By outlining ideas and considering supporting evidence, I could strengthen my arguments. Without this preparatory writing, my talks undoubtedly suffered. With this post, I hope to rectify this shortsighted but logical detour.

So, here goes!

“Don’t explain computers to laymen. Simpler to explain sex to a virgin.”

― Robert A. Heinlein

Algorithmic Impact and Moore’s Law

In recent years, the significance of algorithms in scientific computing has diminished considerably. Algorithms have historically been a substantial component of improving computing, providing a performance boost beyond hardware acceleration. Unfortunately, this decline in algorithmic impact coincides with the slowing of Moore’s Law, the empirical observation that computing power doubles approximately every eighteen months, leading to exponential growth. That rapid growth was fueled by a confluence of technological advancements integrated into hardware. However, the era of exponential growth ended about a decade ago. Instead of acknowledging this shift and adapting, the response has been an increased focus on hardware development. I’ve written extensively on this topic and won’t reiterate those points here.

“People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.”

― Peter F. Drucker

Simultaneously, we’ve neglected advancements in algorithms. Improving algorithms is inherently unpredictable. Breakthroughs are sporadic, defying schedules and plans. They emerge from ingenious solutions to previously insurmountable challenges and necessitate risk-taking. Such endeavors offer no guaranteed returns, a requirement often demanded by project managers. Instead, progress occurs in significant leaps after extended periods of apparent stagnation. Rather than a steady upward trajectory, advancements arrive in unpredictable bursts. This aversion to risk and the pursuit of guaranteed outcomes hinders the realization of algorithmic breakthroughs.

What is an algorithm?

An algorithm can be thought of as a recipe that instructs a computer to complete a task. Sorting a list is a classic algorithmic problem. Once a correct method is established, efficiency becomes the primary concern.

Algorithm efficiency is measured in terms of degrees of freedom, such as the length of a list. It is often expressed as a constant multiplied by the list length raised to a power (or times its logarithm). This power significantly impacts efficiency, especially as the list grows. Consider the effort to sort a list of 100 items using linear, log-linear, and quadratic algorithms: roughly 100, 460, and 10,000 operations, respectively (taking the natural logarithm and unit constants). For a list of 1000 items, these numbers become 1000, 6,900, and 1,000,000. The performance gaps widen as the list grows. As the list size increases, the impact of the constant factor in front of the scaling term diminishes. Note that a linear algorithm generally has a larger constant than a quadratic one.
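The operation counts above can be reproduced directly. A minimal sketch, assuming unit constants and the natural logarithm for the log-linear column (which is where the 460 and 6,900 figures come from):

```python
import math

def ops(n):
    """Illustrative operation counts for linear, log-linear, and quadratic
    scaling. Constants are taken as one; log-linear uses the natural log."""
    return n, int(n * math.log(n)), n * n

for n in (100, 1000):
    linear, loglinear, quadratic = ops(n)
    print(f"n={n}: linear={linear}, log-linear={loglinear}, quadratic={quadratic}")
```

At these sizes the scaling term dominates any realistic constant factor, which is why the quadratic algorithm falls so far behind even if its constant is smaller.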

“An algorithm must be seen to be believed.”

— Donald Ervin Knuth

As this example illustrates, algorithms are fundamentally powerful tools. Their efficiency scaling can dramatically reduce computational costs and accelerate calculations. This is just one of the many remarkable capabilities of algorithms. A large part of their impact is that they contribute significantly to overall computing speed. Historically, the speedup achieved through algorithmic improvements has matched or exceeded Moore’s Law. Even in the worst case, algorithms enhance and amplify the power of hardware advances. Our leaders seem to have ignored this, prioritizing incremental, low-risk gains that fit neatly into project management.

Algorithms for Scientific Computing

In the realm of scientific computing, the success of algorithms is most evident in linear algebra. During the early days of computing, algorithmic improvements kept pace with increasing hardware speeds. Algorithms amplify the speed of computers; the two complement each other, and together they produced compounded performance gains over a 40-year span.

Originally, linear algebra relied on dense algorithms with cubic work scaling (Gaussian elimination). These were replaced by relaxation and sparse-banded methods with quadratic scaling. Subsequently, Krylov methods, scaling as the three-halves power, and spectral methods, scaling log-linearly, took over. Finally, in the mid-1980s, multigrid achieved linear scaling. Since then, there have been no further breakthroughs. Still, from the 1940s to the mid-1980s, algorithms kept pace with hardware advancements. In this era the massive advances in hardware were complemented by equal advances in algorithms. In today’s parlance, algorithms are a force multiplier.
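To see what those scalings mean in practice, here is a rough sketch comparing the work for a million-unknown solve under each historical scaling. All constant factors are dropped, so these are illustrative magnitudes, not predictions for any real solver:

```python
import math

def solver_work(n):
    """Illustrative work estimates for an n-unknown linear solve, using
    only the asymptotic scalings discussed above (constants omitted)."""
    return {
        "dense (Gaussian elimination)": n**3,
        "sparse-banded / relaxation": n**2,
        "Krylov": n**1.5,
        "spectral": n * math.log(n),
        "multigrid": n,
    }

for name, w in solver_work(10**6).items():
    print(f"{name:30s} ~{w:.2g} operations")
```

The spread between cubic and linear scaling at a million unknowns is twelve orders of magnitude, which is the sense in which these algorithmic advances matched decades of hardware gains.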

“… model solved using linear programming would have taken 82 years to solve in 1988… Fifteen years later… this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million… a factor of roughly 1,000 was due to increased processor speed, … a factor of roughly 43,000 was due to improvements in algorithms!”

– Designing a Digital Future: Federally Funded R&D in Networking and Information Technology

Examples of algorithmic impact are prevalent throughout scientific computing. In optimization, interior point methods dramatically accelerated calculations, outpacing hardware advancements for a period. More recently, the influence of algorithms has become apparent in private-sector research. A prime example is Google’s PageRank algorithm, which revolutionized internet search. Once you used Google, you never went back to AltaVista or Yahoo. In the process, it also laid the foundation for one of the world’s most influential and prosperous companies. Today Google has a market cap in excess of 2 trillion dollars.

“When people design web pages, they often cater to the taste of the Google search algorithm rather than to the taste of any human being.”

— Yuval Noah Harari

More recently, another algorithm has revolutionized the tech world: the transformer. This breakthrough was instrumental in the development of large language models (LLMs), which have reshaped the technology landscape in the past couple of years. The transformer’s impact is multifaceted. Most superficially, it excels at consuming and processing data in vector form, aligning perfectly with modern GPU hardware. This synergy has propelled NVIDIA to unprecedented heights of corporate success, lifting it to a trillion-dollar market cap.

Less obvious, but equally significant, is the transformer’s influence on LLM behavior. Unlike previous models that processed data sequentially, the transformer operates on vector data chunks, enabling the network to consider larger contexts. This represents a quantum leap in LLM capabilities and behavior.

A cautionary tale emerges from the transformer’s history. Google pioneered the algorithm, but others reaped the primary benefits. This highlights a common challenge with algorithmic advancements: those making the initial breakthrough may not see the principal benefits. Moreover, the vision to develop an algorithm often differs from the vision to optimize its use. This presents a persistent hurdle for project managers, who are relentlessly myopic.

“Computer Science is no more about computers than astronomy is about telescopes”

― Edsger Wybe Dijkstra

It is well known that the power of algorithms is on par with the impact of hardware improvements. However, a key distinction lies in the predictability of progress. Algorithmic advancements stem from discovery and inspiration. These are elements that defy the quarterly planning cycles prevalent in contemporary research. An intolerance for failure hinders algorithmic progress. As exemplified by the transformer, algorithms often benefit organizations beyond their originators. Success lies in adapting to the capabilities of these innovative tools.

Algorithms I really care about

My professional focus lies in developing methods to solve hyperbolic conservation laws. The nature of these equations offers significant potential for algorithmic improvements, a fact often overlooked in current research directions. This oversight stems from a lack of clarity about the true measures of success in numerical simulations. The fundamental objective is to produce highly accurate solutions while minimizing computational effort. This is to be achieved while maintaining robustness, flexibility, and physical correctness.

“The scientific method’s central motivation is the ubiquity of error – the awareness that mistakes and self-delusion can creep in absolutely anywhere and that the scientist’s effort is primarily expended in recognizing and rooting out error.”

– David Donoho et al. (2009)

An unambiguous measure of solution accuracy comes from a process known as code verification. A common misconception about code verification is that its focus is bug finding rather than precise error quantification. It is equally important to understand how computational effort reduces error. Mesh refinement is the standard approach, adding more degrees of freedom. This increases the cost in a well-defined way that depends on the dimensionality of the problem. For a one-dimensional explicit calculation, computational cost scales quadratically with decreasing mesh size due to the linear relationship between time step and mesh size. In two and three dimensions, this scaling becomes cubic and quartic, respectively.
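The cost scaling described here follows from counting cells and time steps. A minimal sketch, assuming an explicit method whose time step shrinks linearly with the mesh size (the usual CFL condition), so cost scales as h to the minus (d + 1):

```python
def refinement_cost(refine, dim):
    """Cost multiplier when the mesh size h is divided by `refine`.

    Cells scale as h**-dim and the number of time steps as h**-1,
    giving a total cost scaling of h**-(dim + 1). Constants omitted.
    """
    return refine ** (dim + 1)

# Halving the mesh size: 4x cost in 1D, 8x in 2D, 16x in 3D.
for dim in (1, 2, 3):
    print(f"{dim}D: {refinement_cost(2, dim)}x")
```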

Code verification reveals both the precise error (given an exact solution) and the convergence rate. For problems with discontinuities like shock waves, the convergence rate is inherently limited to first order, regardless of the method used. This rate often falls below one due to numerical behavior near linear discontinuities. For simplicity, we will focus on the implications of first-order convergence. Given a fixed convergence rate, the base accuracy becomes paramount. Furthermore, as the convergence rate diminishes, the base algorithmic accuracy grows in impact.
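The convergence rate itself is easy to compute from a refinement study. A sketch of the standard two-level estimate; the error values below are hypothetical, chosen to illustrate first-order and sub-linear behavior:

```python
import math

def observed_rate(e_coarse, e_fine, refine=2.0):
    """Estimate p in error ~ C * h**p from errors measured on two
    mesh levels related by a refinement factor `refine`."""
    return math.log(e_coarse / e_fine) / math.log(refine)

# Hypothetical errors: halving h roughly halves the error -> first order.
print(observed_rate(0.10, 0.05))   # -> 1.0
# Sub-linear convergence, as seen near linear discontinuities:
print(observed_rate(0.10, 0.064))  # rate below one
```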

While testing is standard practice in hyperbolic conservation law research, it is often inconsistent. Accuracy is typically reported for smooth problems where high-order accuracy can be achieved. However, once smoothness is lost and accuracy drops to first order or less, error reporting ceases. Notably, the problems with shock waves are the reason we study these equations. The Sod shock tube is a common test case, but results are presented graphically without quantitative comparison. This reflects a common misconception that qualitative assessment suffices after shock formation, disregarding the significance of accuracy differences.

“What’s measured improves”

– Peter Drucker

Because the order of accuracy is limited to first order, even small differences in base accuracy become more significant. For standard methods, these base accuracy differences can easily range from two to four times, dramatically impacting the computational cost of achieving a given error level. Minimizing the effort required to reach a desired accuracy is crucial. The reason is simple: the lower the convergence rate, the greater the impact of base accuracy on overall performance.

The algorithmic payoff

Consider a method that halves the error for double the cost at a given mesh resolution. With second-order accuracy in one dimension, we break even: refining the mesh enough to halve the error also doubles the cost. For third- and fourth-order accuracy, the break-even points shift to two and three dimensions, respectively. These dynamics change entirely when the convergence rate is fixed at first order by mathematical theory.

“The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.“

– Nick Trefethen

In one dimension, the less accurate method is twice as costly for the same error. This factor escalates to four times in two dimensions and eight times in three dimensions. As accuracy disparities grow, the advantage of higher accuracy expands dramatically. A sixteen-fold error difference can lead to a staggering 65,000-fold cost advantage in 3D. Consequently, even significantly more expensive methods can offer substantial benefits. Essentially, the error advantage amortizes the extra algorithmic cost. Despite this potential, the field remains entrenched in decades-old, low-accuracy approaches. This stagnation is rooted in a fear of failure and short-term thinking, with long-term consequences.
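These payoff figures can be checked in a few lines. A sketch, assuming first-order convergence (so the mesh size must shrink by the full error ratio) and the explicit cost scaling of h to the minus (d + 1); the factor of two is the accurate method's assumed per-mesh cost premium:

```python
def matching_cost(error_ratio, dim):
    """Cost multiplier for the less accurate method to match, by mesh
    refinement alone, a method whose base error is `error_ratio` times
    smaller. First order: h must shrink by error_ratio, so the cost
    grows as error_ratio**(dim + 1)."""
    return error_ratio ** (dim + 1)

# Two-fold error advantage, accurate method costs 2x per mesh: the
# low-accuracy method is 2x (1D), 4x (2D), 8x (3D) as costly overall.
for dim in (1, 2, 3):
    print(f"{dim}D: {matching_cost(2, dim) / 2:g}x")

# Sixteen-fold error advantage in 3D:
print(matching_cost(16, 3))  # -> 65536, the ~65,000-fold figure
```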

If failure is not an option, then neither is success.

― Seth Godin

This entire dynamic is inextricably linked to a shift toward short-term focus and risk aversion. Long-term objectives are essential for algorithmic advancement, demanding vision and persistence. The capacity to withstand repeated failures while maintaining faith in eventual success is equally critical. Unfortunately, today’s obsession with short-term project management stifles progress at its inception. This myopic approach is profoundly detrimental to long-term advancement.

References

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.

Majda, Andrew, and Stanley Osher. “Propagation of error into regions of smoothness for accurate difference approximations to hyperbolic equations.” Communications on Pure and Applied Mathematics 30, no. 6 (1977): 671-705.

Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Lax, Peter D. “Accuracy and resolution in the computation of solutions of linear and nonlinear equations.” In Recent advances in numerical analysis, pp. 107-117. Academic Press, 1978.

Page, Lawrence, Sergey Brin, Rajeev Motwani, and Terry Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University, 1998.

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention is all you need.” Advances in neural information processing systems 30 (2017).

Sod, Gary A. “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws.” Journal of computational physics 27, no. 1 (1978): 1-31.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Forced Social Justice Creates the Foundation for a Backlash

24 Wednesday Jul 2024

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Tags

life, masculinity, mental-health, toxic-masculinity

“Injustice anywhere is a threat to justice everywhere.”

– Martin Luther King Jr.

The Seeds of Failure

It’s been incredibly difficult to write this post. It feels like I’m grappling with live wires, but this is an essential topic to address. Over the past 70 years, the United States has witnessed a series of social movements that have expanded the rights of many oppressed groups. However, the tactics employed by these movements have undergone dramatic shifts. Each movement faced vehement opposition from conservatives, and their successes have been mixed. We now stand at a crossroads. While there’s a renewed push for progress with new strategies, some of these methods have inadvertently sown the seeds of a larger failure. I believe these approaches are counterproductive and have generated a significant backlash. The reasons behind this are logical and rooted in basic human psychology.

A prime example can be found in the attacks on masculinity from the left. The #MeToo movement has shed light on serious excesses surrounding masculinity. We have seen masculinity attacked and broadly reviled, for good reasons. It has also been a particularly difficult time for young men, for reasons that are varied and mostly tangential to #MeToo. However, it’s important to remember that healthy masculinity exists. Modern masculinity should be built on respect, empathy, and consent. Power and strength should be used responsibly, to help those who are weaker.

The #MeToo movement primarily highlighted the misuse of power by some men. There, masculinity becomes a force used against the weak, leading to sexual violence. Unfortunately, the broad critique of masculinity has led many men to embrace a more negative and spiteful form, what some might call “pathological” or toxic masculinity. This is exemplified by figures like Donald Trump (though Joe Rogan, Dana White, and MMA culture can be seen as reflections of this too). The lack of a compelling alternative from progressives has further complicated the issue. Without a positive vision of masculinity, many men are drawn to the more aggressive and dangerous traditional form.

“When all Americans are treated as equal, no matter who they are or whom they love, we are all more free.”

— Barack Obama

Online Social Movements

Social movements today increasingly leverage online tactics to achieve equality. These tactics include online attacks, trolling, doxing, and social media campaigns. While these methods can diminish expressions of oppression, they are ultimately harmful. Cancellation, the complete removal of a person from public discourse for expressing a dissenting view, is particularly destructive. While it may feel like a victory for the movement, it sows the seeds of future problems.

Instead of fostering debate, these tactics shut down opposing ideas. This has several negative consequences.

  • First, it undermines the principle of free speech, a cornerstone of a healthy society.
  • Second, it casts violators as victims, garnering them sympathy.
  • Third, it avoids defeating ideas through logic and reason, relying on force instead.

This reliance on force is a tactic more commonly used by the political right. In many ways it defines the right. It creates enemies rather than allies, and silences those who might be open to progressive ideas. Fear, not persuasion, becomes the tool for change. This fear breeds resistance, leading to the very backlash we are witnessing today.

“Every single American — gay, straight, lesbian, bisexual, transgender — every single American deserves to be treated equally in the eyes of the law and in the eyes of our society. It’s a pretty simple proposition.”

— Barack Obama

This is not to say that conservatives do not use cancellation themselves. The whole book-banning and “don’t say gay” approach is cancellation. Anything in the realm of sexuality and sex positivity is cancelled on the right. Just look at the ridiculous ways sex is “hidden” on social media (s3x, fugg, etc.). It is outright censorship. They use force and the institutions under their control to remove things they oppose. Their approach is terrible in all the same ways described above. We would all be better off if both sides abandoned this approach wholesale. The violation of free-speech principles is an abdication of cherished American ideals. By the same token, cancellation is the use of force where the battle should be over ideas and ideals. With the temporary victories of cancellation comes the foundation of backlash.

“The master’s tools will never dismantle the master’s house.”

– Audre Lorde

A Nuanced View

One area where leftist ideology loses ground to the right is its handling of historical figures. When we judge past figures by today’s progressive standards, it weakens the left’s position. Instead of honoring those who started the nation’s progress, the left inadvertently turns them into conservative symbols. This reinforces ideas that undermine progress. The Supreme Court’s use of originalism to restrict rights exemplifies this, treating the Founders’ ideas as eternally valid.

The real truth is that the Founding Fathers were considered extremely progressive in their own time. It’s ironic that their ideas are now used to hinder progress. While they certainly had flaws and made mistakes by modern standards, judging them solely through today’s lens paints an incomplete picture. We should view their nature in today’s terms as a testament to the progress we’ve made.

Essentially, yesterday’s radical progressives resemble today’s conservatives. This should be a source of optimism: it shows how far we’ve come. We should celebrate their contributions to our advancement rather than dismiss them. Additionally, this approach weakens the arguments of those on the left who seem overly critical of past figures. The left just looks like it is attacking and tearing down a hero. Instead, any critique should be offered with nuance and perspective.

The inability to consider nuance is a common thread across all forms of extremism, both left and right. Most issues surrounding personal rights involve a great deal of individual variation. There’s no one-size-fits-all solution for social problems, especially those related to gender, sexuality, or race. These topics are full of complexities and individual differences. Efforts to impose uniformity are doomed to fail. While the right is known for its one-size-fits-all solutions, it’s a failing strategy for the left as well. Nuance is key to truly accepting individuals as equals.

While I personally support most social movements, empathy and compassion are ultimately the drivers of progress. “Cancel culture” and online language policing are examples of force being used. Force is a tool of oppression, not progress. Progressive social justice cannot be achieved through coercion; force and social justice are fundamentally incompatible. Empathy is fostered by shared experiences and the desire to understand and be understood by others. Simply forcing people to acknowledge another’s rights leads to a superficial adherence to equality, not lasting change.

True equality comes from believing everyone deserves basic rights. It arises from recognizing that everyone deserves the same rights you do, even if they seem different. The route to this is empathy and compassion growing within opponents of equality. Progressives should acknowledge the progress made while maintaining patience and perspective. There are still many areas needing improvement, and vulnerable populations remain. The right often demonizes specific groups for their own gain, as seen with the shift from targeting gay men as predators to attacking drag queens and transgender people. The gay population is largely accepted by society today, and new targets are needed to fuel the backlash.

A concerning trend in social movements today is the focus on amplifying negativity. Effective social change relies on empathy and compassion, not shame. Shame is a harmful tool, particularly when used to police women’s sexuality. Online shaming tactics, like cancellation and doxxing, are a form of bullying that can lead to real-world violence and destroy lives. A more constructive approach is needed. We must engage with those who hold opposing views, even those deemed problematic. Understanding their perspectives is crucial. Instead of simply dismissing opposing ideas, we should navigate the differences. This fosters growth and allows for the development of new perspectives that can garner wider support. The current strategy of online shaming only creates entrenched opposition and fuels backlash.

“Owning our story and loving ourselves through that process is the bravest thing we’ll ever do.”

— Brené Brown

A Personal Experience

I experienced the power of understanding firsthand during a virtual work meeting a few years ago. It was an icebreaker at the beginning of a meeting, and I was getting to know my new, younger manager. As a natural storyteller, I shared a Thanksgiving anecdote about serving a standing rib roast instead of a traditional turkey. This upset my son, who I said felt “gypped.”

Immediately, I sensed a shift. My manager was visibly horrified. Shame washed over me. This was a term I’d used throughout my childhood, completely unaware of its offensive connotation towards the Romani people (often referred to as “gypsies”). I didn’t even know how to spell it correctly – I simply thought it meant “to be cheated.” I’ve come to realize that there was a lot of subtle racism in my upbringing casually offered by relatives.

In that moment, I felt unfairly judged as a racist or an ignorant person. Thankfully, my new manager listened openly. I explained my lack of awareness, and she, in turn, explained the term’s true meaning with compassion. We both approached the situation with empathy and a willingness to learn. We navigated the awkwardness gracefully, and it ultimately fostered a positive working relationship that later blossomed into friendship. This is the power of understanding, empathy, and compassion on both sides.

“Race, gender, religion, sexuality, we are all people and that’s it. We’re all people. We’re all equal.”

― Connor Franta

A crucial step towards sustainable progress is following the simple rule of “don’t be an asshole.” Especially online, encountering outrage should prompt you to question its effectiveness. Cancel culture often embodies this negativity and generates asshole behavior.

Understanding your own place in society, particularly if you come from a position of privilege, is essential. Intersectionality, the idea that we all have multiple social identities, is a valuable tool for this. These identities can be visible and obvious, or hidden and nuanced. For example, I am a white, middle-aged male. I have a high level of education and a great job in science. I identify as mostly straight, married, and polyamorous (although socially monogamous).

All these aspects contribute to who I am. Some provide advantages and conformity, while others challenge societal norms and even put me at a disadvantage. Recognizing these disadvantages fosters empathy for those who cannot hide their identities. This empathy is a powerful tool for progress, accessible to almost everyone, and it comes from a place of authenticity, something everyone should feel free to embrace.

“We will not win our rights by staying quietly in our closets.”

— Harvey Milk

At its core, my view is that progress requires letting go of force and embracing empathy, understanding, and compassion. Social justice is achieved through better ideas and challenging traditional viewpoints. We need to change hearts and minds, not force compliance. Forceful tactics create a false sense of progress that quickly fades. Real social change is lasting.

Next week, I’ll return to a technical topic: the power of algorithms in advancing computational performance.

We don’t have leaders

13 Saturday Jul 2024

Posted by Bill Rider in Uncategorized

≈ 2 Comments

We don’t have leaders

“Leadership is solving problems. The day soldiers stop bringing you their problems is the day you have stopped leading them. They have either lost confidence that you can help or concluded you do not care. Either case is a failure of leadership.”

– Colin Powell.

Over the past few years, I’ve made a broad observation of “leaders” I encounter: they don’t lead. Instead, they seem to market the success of their role through positive messaging. This leadership style feels highly performative, offering little actual leadership. It’s like a reality TV show – they’re pretending to be leaders. Their communication is tinged with an almost uniformly positive message. If things were always going great, such positivity would be appropriate. However, the problem is that things are not going well. Therefore, the constant positivity in their messaging becomes toxic and inhibits any focus or attention on solving the numerous problems we face. Right now, the USA seems to be careening towards catastrophe.

The Elephant and Donkey in the Room

Over the past couple of weeks, we’ve been seeing this play out at the highest stakes possible: the US Presidential race. It’s replete with examples of deeply concerning leadership. Both political parties appear irresponsible. We have the recent example of Joe Biden’s continued candidacy. He’s clearly showing the effects of aging. In response, we see widespread gaslighting, with people telling us to ignore the obvious. Leaders are saying one thing in public and the opposite in private. Meanwhile, this allows Biden to maintain the status quo, protecting his ego. He’s acting just as selfishly as his opponent. He’s not doing what’s best for the nation and is failing to show leadership.

On the other side, we have an even greater demonstration of cowardice and failure. The entire Republican party has surrendered to Donald Trump. Their candidate can be most charitably described as a grifter. More accurately, he’s a habitual liar and convicted criminal. He’s engaged in numerous acts that would disqualify him from any other position of authority. His first term as President was demonstrably incompetent. Were he not the President, he could never obtain a security clearance. Yet the entire leadership of the GOP caved to him. Everyone remaining in the GOP was too cowardly to stand up to Trump. They all prioritized their personal success and power over the good of the nation. They all essentially said, “Let’s support the crazy guy.”

“Being responsible sometimes means pissing people off.”

― Colin Powell

In both cases, all the leaders have chosen their own success over the good of the nation. We see a systematic failure to confront objective reality because it is too difficult and risky. Obvious problems are ignored and minimized. The personal goals of the individuals in power overwhelm any sense of duty. Those in power turn out to be selfish narcissists. None of them are fit for leadership as a result. This behavior is not limited to the top of the power chain.

The Connection to the Covid-19 Pandemic

“One of the tests of leadership is the ability to recognize a problem before it becomes an emergency.”

—Arnold Glasow

All of this started to come to a head with Covid-19. However, to be honest, the trend had been building for a long time, even before that. President Trump’s constant dismissal of reality, claiming the virus would just disappear, exemplified this behavior. This pattern repeated itself down the chain of command and across organizations. While Covid-19 was the peak of this toxic positivity, many smaller issues are communicated in the same way.

This cognitive dissonance, the disconnect between what leaders say and reality, resulted in deaths during the pandemic. A national crisis was exacerbated by inaction. Even worse, the pattern of ignoring problems and inaction seems to be escalating. The success of leaders who practice this approach almost guarantees we’ll see more of it, until it ultimately leads to their downfall.

While the pandemic was the most high-profile example of abysmal leadership, it’s hardly unique. The inability to speak truth and focus on problems appears to be an epidemic itself. I see this constantly within the institutions I interact with. Another prominent example is Boeing, where deadly consequences arose from two crashes and recent near misses. We’re witnessing a once-great company in freefall, destroying its reputation with each calamity.

If we delve deeper into the reasons for this lack of focus on reality in leadership, money emerges as a central issue. In the case of the pandemic, there was a fear of spooking the economy. With Boeing, the focus was on protecting shareholder value by keeping stock prices from dropping. Time and again, the threat of bad news impacting finances seems to be a top priority for leaders. It would be naive to believe that Boeing is an isolated case. Boeing is a canary in the coal mine, a warning sign of danger ahead.

One key takeaway is that leadership positions often lead to wealth. Even lower-level managers are well-compensated compared to those they manage. Furthermore, the potential for promotion is highly attractive. Preserving these personal benefits by avoiding waves and keeping the status quo intact allows this entire dysfunctional system to persist. In essence, our leaders are incentivized not to lead and not to expose the incompetence that surrounds them and sits above them.

“Nearly all men can stand adversity, but if you want to test a man’s character, give him power.”

– Abraham Lincoln.

Money is the Only Thing We Care About

“Management is doing things right; leadership is doing the right things.” ― Peter Drucker

A key element is the rise of money as power. In today’s world, money is the heart of personal power. Leaders hold significantly more wealth and power than those they lead. This can lead many to cling to these positions far longer than is beneficial, neglecting important concerns. We’ve become a society where money is the sole measure of worth, overshadowing other values like quality, ethics, and humanity (e.g., Boeing). Financial gain is seen as the only measure of success and grants immense power. In the corporate world, shareholder value becomes the excuse for prioritizing profits over all else. This trend is a product of the neoliberal era.

This, in turn, fuels vast societal inequality, creating a leadership class with an existence entirely divorced from those they claim to lead. Leaders, naturally, are unwilling to relinquish their privilege. We also see the corrosive and dangerous aspects of concentrated wealth and power. Those in power view losing their leadership roles as a personal threat, and they actively work to maintain the status quo for their own benefit. Falling out of the leadership class translates to a significant decline in both economic standing and societal influence.

Preserving the status quo is paramount. One way to enact change is to begin addressing problems. Ignoring and perpetuating problems simply affirms the status quo. This, in many ways, explains the rightward shift. Conservatives generally favor the status quo and upholding tradition, which benefits those in power. Since power in the US is heavily tied to money, we see an alliance emerge: moneyed interests and conservatives working together to preserve the existing order.

Leading the charge for the GOP is the Supreme Court. They have transformed from a legal oracle into a partisan entity. The recent immunity ruling is arguably one of the worst decisions in history, destined to stand alongside Dred Scott in infamy. The outcome could very well lead to the dismantling of the Court and the nation itself. It’s a recipe for dictatorship. The Court relies on the Executive Branch to enforce its rulings. If the executive branch disagrees, they can now break the law with impunity. Essentially, power now resides with the President, not the law. While the Court seized significant power over the executive branch, they ultimately relinquished it all back to the President. The nation’s future hangs in the balance. We could very well descend into a de facto dictatorship, effectively losing any semblance of a functioning democracy.

Toxic Positivity

“The supreme quality of leadership is integrity.”

–Dwight Eisenhower

A significant portion of the problem stems from the overuse of toxic positivity in communication. Effective leadership hinges on clear communication and directing human effort towards shared goals. Leaders who filter out problems to create a narrative of perfect circumstances erode trust. When a leader assures you everything is great while you know it’s not, it raises a red flag. This destroys trust not only in that specific situation but also makes you question their honesty in general. This continuous erosion of trust contributes to the decline in societal trust as a whole. Each selective edit of reality feels like another betrayal, leading us to view leaders as habitual liars and fostering cynicism. While acknowledging problems can be difficult, it’s a crucial aspect of leadership. Bullshitting people with positivity is cowardly and destroys trust.

The issue of toxic positivity has profoundly impacted my life. In 2020, I had a close friend who demanded nothing but positivity during interactions. As a therapist, she dealt with people’s problems all day, so it’s understandable that she craved positivity outside of work. However, 2020 was a year of significant challenges, and the inability to share these burdens within our friendship caused me distress. Despite being part of my social circle, I eventually distanced myself due to the one-sided nature of the relationship. Her insistence on positivity came at the expense of balance, ultimately undermining the friendship.

Toxic positivity is a telltale sign of a leadership problem. Instead of honesty, we’re offered the soothing but meaningless platitudes of positivity. Problems are denied, and everything is portrayed as being under perfect control. When reality contradicts this narrative, problems fester or become hidden secrets that divide people. Leaders who resort to toxic positivity simply aren’t leading. They have no intention of tackling complex or time-consuming issues. Their strategy is to manipulate communication to maintain their leadership position and hope that problems remain hidden. This approach is infectious, and upper leaders often establish a “shoot the messenger” culture that discourages reporting bad news. This, in turn, perpetuates the cycle of toxic positivity throughout the organization.

One cannot simply dictate positivity. Life is inherently complex. The same principle applies to leaders; they cannot be effective without balance. True leadership requires a blend of acknowledging the good and the bad. Leaders who lack the ability to admit problems or recognize failures cannot lead effectively. They should celebrate successes and use them as springboards for growth. Identifying problems allows them to rally their followers to address them. It is through solving problems that leaders achieve greatness. Ultimately, the greatest success comes from transforming problems into opportunities. Unfortunately, this virtuous cycle seems to have been lost. Today, some leaders believe they can manipulate their way to success through messaging and pronounce problems solved simply by declaration.

“The first responsibility of a leader is to define reality. The last is to say thank you. In between, the leader is a servant.”

—Max DePree

Given the high stakes involved, what can be done? The philosophy of Viktor Frankl, a Holocaust survivor, offers an answer. His concept of “tragic optimism” acknowledges that problems are inevitable. Death, for example, is a universal experience. We will all face tragedy repeatedly. However, tragic optimism encourages us to approach these inevitable challenges with the belief that they can be overcome. We can not only survive but also thrive in the face of tragedy. This requires acknowledging problems head-on, which is the foundation of great leadership. This philosophy can guide us toward a brighter future. However, we can only get there by rejecting the current generation of “leaders.” We should demand honest problem-solvers who act in the best interests of the institutions they lead. Only when we stop rewarding cowardice with power and riches can we truly turn the corner.

“My own definition of leadership is this: The capacity and the will to rally men and women to a common purpose and the character which inspires confidence.”

—General Montgomery

Next week, I will discuss the societal reckoning we are approaching in relation to social justice movements. While the causes may be just, I believe the current methods employed to achieve social justice are counterproductive. We need to re-evaluate our strategic approach to building a better society.
