The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: August 2024

We are Lost Without Trust

Saturday, 24 August 2024

Posted by Bill Rider in Uncategorized

If we do not trust one another, we are already defeated.

– Alison Croggon

Most mornings I walk our dog, Duke, at a park near my house. The park is next to an elementary school, and here I see direct evidence of how little trust Americans have in each other. The kids who walk to school walk with their parents. You even see parents waiting with their kids at bus stops within eyeshot of the house. You never see a kid walking to school alone; in fact, that seems unthinkable today. When I think back, all I remember is walking to school by myself, or later with my brother. Usually I would walk the short distance to school with friends.

The significant change is in social and societal trust. We no longer believe that children walking to school are safe. We fear all sorts of terrible things happening to them, many of which are figments of the imagination and highly unlikely dangers. It’s instructive to compare the period from 1968 to 1976, when I walked to school, to the present day, when no one allows their child to walk to school. Regardless, this is a concerning sign for the health of our nation. Ultimately, we can’t avoid the fact that bad things happen and are inevitable (shit happens!). They aren’t blameworthy, but without trust, blame is readily assigned. Without trust, people play it safe to avoid the blame.

“We are all mistaken sometimes; sometimes we do wrong things, things that have bad consequences. But it does not mean we are evil, or that we cannot be trusted ever afterward.”

― Alison Croggon

If you look at the United States today, you see a nation where no one trusts each other. The impacts of this lack of trust are broad, and it is worth looking at what trust allows and what its absence prohibits. Not trusting is expensive, and it limits success. Those costs and impacts are hurting Americans left and right. I see it play out in my own life, and if we look around, the damage is everywhere. It is evident in our politics today. It is evident in our personal lives too.

Fear is driving this change in behavior. Parents are terrified of sexual predators and random violence harming their children, despite the incredibly low probability of such events occurring. This reflects a common aspect of our low-trust society: we manage low-probability events at great cost. This phenomenon is widespread throughout society, as we incur enormous expenses to mitigate minuscule risks. In the case of children, this is ruining childhood. Socially, we see loneliness and isolation. For society as a whole, building or creating anything new becomes difficult and expensive. Everything costs more and takes longer due to the lack of trust.

I thoroughly enjoyed writing about a technical topic. It was fulfilling, like cooking a delicious meal that you enjoy eating, and even better when someone else appreciates it. However, there’s an underlying issue that’s been playing out behind the scenes. While there’s a clear benefit to conducting risky research, something is hindering progress. Ultimately, the reason for not realizing the benefits of risky research is a lack of trust. Risky research requires failure, and lots of it. Without trust, failure becomes unacceptable and suspicious. Without trust, people become cautious, and caution hinders progress. Caution leads to stagnation and decline, which is precisely what we observe across the country.

“Trust is the glue of life. It’s the most essential ingredient in effective communication. It’s the foundational principle that holds all relationships.”

― Stephen R. Covey

Trust is best understood within our most intimate and important personal relationships. Whether it’s a romantic partner or friend, trust is foundational. When trust is lost in these relationships, they are at risk. If trust is not repaired, it can lead to the end of the relationship. Studies have shown that trust is built through several essential behaviors.

The first is authenticity, which involves presenting yourself as your true self. Faking your personality has the opposite effect and fosters suspicion. The second aspect is competence in areas relevant to the relationship. This could be athletic ability or intellectual prowess. Finally, trust requires demonstrated empathy, a deep care and concern for the well-being of others. The person you trust will understand your feelings and care about your welfare.

Stephen Covey’s The Speed of Trust provides valuable insights into the benefits of trust. The book demonstrates how trust can enhance efficiency and productivity. When trust exists, remarkable achievements are possible. Trust is contagious; when we are trusted, we trust others. Trust enables individuals to perform at their best, and organizations to achieve their highest potential. Conversely, a lack of trust is slow and costly. It is destructive. When we don’t trust, we tend to make mistakes and hinder progress. Lack of trust is the root of many fuck-ups.

The leaders who work most effectively, it seems to me, never say ‘I.’ And that’s not because they have trained themselves not to say ‘I.’ They don’t think ‘I.’ They think ‘we’; they think ‘team.’ …. This is what creates trust, what enables you to get the task done.

– Peter Drucker

The decline in American trust can be traced back to the 1970s. Several events shattered the spell of trust that had held the United States since the end of World War II. The upheavals of the 1960s had begun to erode trust with a generational divide, the civil rights movement, and a misguided war. The Nixon administration’s criminal actions exposed corruption at the highest levels of government. Nixon prioritized his own interests and power, seeking revenge against the culture he disliked. While Nixon’s religiosity may have distinguished him from Trump, his presidency nonetheless marked a turning point in the decline of trust.

Other factors contributed to the unraveling of trust in the United States. The mid-1970s marked a peak in economic equality. Americans could comfortably achieve middle-class status with a single blue-collar income. People across the nation enjoyed a more level playing field, fostering empathy and trust. This shared experience and common culture allowed for authenticity to flourish. The nation was thriving and a global economic powerhouse, demonstrating competence. However, the energy crisis of the mid-1970s challenged these elements. The economy suffered, and blue-collar industries took a hit, further eroding trust.

“Never trust anyone, Daniel, especially the people you admire. Those are the ones who will make you suffer the worst blows.”

― Carlos Ruiz Zafón

The 1980s introduced new factors that undermined these trust drivers. The Reagan Revolution, characterized by a focus on business success through tax cuts and legal changes, significantly increased corporate wealth and power. The simultaneous assault on labor further weakened the ability of blue-collar jobs to provide a comfortable living. This marked the beginning of a widening economic inequality in the United States, which continues to grow today. This inequality erodes all aspects of societal and social trust, as people now live vastly different lives and hold radically different views of success. Consequently, people struggle to understand one another. This lack of understanding undermines empathy and destroys trust.

Other societal developments have accelerated the loss of trust. The terrorist attacks of September 11, 2001 led to a decline in trust and a rise in fear. The fear-based responses and societal changes that followed have persisted. Instead of progress toward a more inclusive society, division and bigotry are on the rise. The internet and the attention-driven economy have further exacerbated these trends. The cumulative effect of these factors is a massive political and cultural divide. The lack of trust now extends to the political system, threatening democracy itself and potentially spiraling further into an abyss.

“I’m not upset that you lied to me, I’m upset that from now on I can’t believe you.”

― Friedrich Nietzsche

Trust is cultivated through countless small acts. Its importance was top of mind this past week, demonstrated to me repeatedly. I saw a clear divide between what was said privately and what was said publicly. That inconsistency was painful to experience and significantly damaged trust in an important relationship. At work, I watch technical accuracy and competence being overshadowed by expediency. People hesitate to engage openly on topics for fear of retaliation. All of this stems from, and exacerbates, a lack of trust.

Building and fostering trust is paramount in all these situations. Trust in our relationships, with our coworkers, and among our fellow citizens is essential. With trust, things will improve, but it requires courage and effort. Trust is a product of strong character. It unleashes competence and grows alongside it. Trust is efficient and the foundation of success. We need leadership that guides us toward trust and away from fear and suspicion. This involves identifying the factors that have eroded trust and changing course. Many people benefit from these trust-destroying elements. To achieve trust, society needs to become more equitable with a deeper shared culture. We need a spirit that recognizes a future where trust prevails. Living, relating, and working in a place of trust is a more positive experience.

Trust is the highest form of human motivation. It brings out the very best in people.

– Stephen Covey

Algorithms are the best way to improve computing power

Wednesday, 14 August 2024

Posted by Bill Rider in Uncategorized

A return to what I do best

For the first time in six years, I’m returning to writing about a topic within my professional field. This is where my true expertise lies, and frankly, it’s what I should be focusing on. If I were being cynical, I’d say this subject is entirely unrelated to work since it lacks any organizational support. It is clearly not our chosen strategy. Given that it’s neither a funded nor managed strategy, it’s essentially a hobby. Yet, it represents a significant missed opportunity for advancing several critical fields. Moreover, it highlights broader issues with our leadership and aversion to risk even when the rewards are large.

“Never was anything great achieved without danger.”

― Niccolo Machiavelli

Years ago when I blogged regularly, I often wrote one or two posts about upcoming talks. Some of my best work emerged from this process, which also enhanced the quality of my presentations. Writing is thinking; it forces deep reflection on a talk’s narrative and allows for refinement. By outlining ideas and considering supporting evidence, I could strengthen my arguments. Without this preparatory writing, my talks undoubtedly suffered. With this post, I hope to rectify this shortsighted but logical detour.

So, here goes!

“Don’t explain computers to laymen. Simpler to explain sex to a virgin.”

― Robert A. Heinlein

Algorithmic Impact and Moore’s Law

In recent years, the significance of algorithms in scientific computing has diminished considerably. Algorithms have historically been a substantial component of improving computing, providing a performance boost beyond hardware acceleration. Unfortunately, this decline in algorithmic impact coincides with the slowing of Moore’s Law, the empirical observation that computing power doubles approximately every eighteen months, leading to exponential increases. This rapid growth was fueled by a confluence of technological advancements integrated into hardware. However, that era of exponential growth ended about a decade ago. Instead of acknowledging this shift and adapting, we responded with an increased focus on hardware development. I’ve written extensively on this topic and won’t reiterate those points here.

“People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.”

― Peter F. Drucker

Simultaneously, we’ve neglected advancements in algorithms. Improving algorithms is inherently unpredictable. Breakthroughs are sporadic, defying schedules and plans. They emerge from ingenious solutions to previously insurmountable challenges and necessitate risk-taking. Such endeavors offer no guaranteed returns, a requirement often demanded by project managers. Instead, progress occurs in significant leaps after extended periods of apparent stagnation. Rather than a steady upward trajectory, advancements arrive in unpredictable bursts. This aversion to risk and the pursuit of guaranteed outcomes hinders the realization of algorithmic breakthroughs.

What is an algorithm?

An algorithm can be thought of as a recipe that instructs a computer to complete a task. Sorting a list is a classic algorithmic problem. Once a correct method is established, efficiency becomes the primary concern.

Algorithm efficiency is measured in terms of degrees of freedom, such as the length of a list. It is often expressed as a constant multiplied by the list length raised to a power (or its logarithm). This power significantly impacts efficiency, especially as the list grows. Consider the difference in effort for sorting a list of 100 items using linear, log-linear, and quadratic algorithms: 100, 460, and 10,000 operations, respectively. For a list of 1000 items, these numbers become 1000, 6900, and 1,000,000. We see differences in performance grow larger. As the list size increases, the impact of the constant factor before the scaling term diminishes. Note that generally, a linear algorithm has a larger constant than a quadratic one.
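
To make that arithmetic concrete, here is a minimal sketch in Python that reproduces those operation counts, assuming unit constant factors and a natural logarithm (real algorithms differ in their constants, as just noted).

```python
import math

def operation_counts(n):
    """Back-of-the-envelope operation counts for a list of length n.

    Unit constant factors and a natural logarithm are assumed; this mirrors
    the numbers quoted above rather than benchmarking real sorting code.
    """
    return {
        "linear (n)": n,
        "log-linear (n log n)": round(n * math.log(n)),
        "quadratic (n^2)": n * n,
    }

for n in (100, 1000):
    print(n, operation_counts(n))
# n = 100  -> 100, ~461, 10,000
# n = 1000 -> 1000, ~6,908, 1,000,000 (rounded to 460 and 6,900 in the text)
```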

“An algorithm must be seen to be believed.”

— Donald Ervin Knuth

As this example illustrates, algorithms are fundamentally powerful tools. Their efficiency scaling can dramatically reduce computational costs and accelerate calculations. This is just one of the many remarkable capabilities of algorithms. A major impact is that they contribute significantly to overall computing speed. Historically, the speedup achieved through algorithmic improvements has either matched or exceeded Moore’s Law. Even in the worst case, algorithms enhance and amplify the power of computer advancements. Our leaders seem to have ignored this, prioritizing instead incremental, low-risk gains that fit neatly into project management.

Algorithms for Scientific Computing

In the realm of scientific computing, the success of algorithms is most evident in linear algebra. For much of the early history of computing, algorithms kept pace with increasing hardware speeds. This demonstrates that algorithms amplify the speed of computers: the two complement each other, yielding roughly equal improvements in performance over a 40-year span.

Originally, linear algebra relied on dense algorithms with cubic work scaling (Gaussian elimination). These were replaced by relaxation and sparse-banded methods with quadratic scaling. Subsequently, Krylov methods, with three-halves-power scaling, and spectral methods, with log-linear scaling, took over. Finally, in the mid-1980s, multigrid achieved linear scaling. Since then, there have been no further breakthroughs. Still, from the 1940s to the mid-1980s, algorithms kept pace with hardware advancements: the massive advances in hardware were complemented by equal advances in algorithms. In today’s parlance, algorithms are a force multiplier.
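
As a rough illustration of why that progression mattered, here is a small Python sketch comparing the asymptotic work of each generation of solver named above for a model problem with N unknowns. The exponents follow the narrative in this post; constants and problem-dependent details are deliberately ignored.

```python
import math

# Asymptotic work (up to constants) for each generation of linear solver,
# applied to a model elliptic problem with N unknowns.
solver_work = {
    "dense Gaussian elimination": lambda N: N**3,
    "relaxation / sparse-banded": lambda N: N**2,
    "Krylov methods":             lambda N: N**1.5,
    "spectral methods":           lambda N: N * math.log(N),
    "multigrid":                  lambda N: N,
}

N = 10**6  # a million unknowns, modest by modern standards
for name, work in solver_work.items():
    print(f"{name:28s} ~ {work(N):.1e} operations")
```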

“… model solved using linear programming would have taken 82 years to solve in 1988… Fifteen years later… this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million… a factor of roughly 1,000 was due to increased processor speed, … a factor of roughly 43,000 was due to improvements in algorithms!”

– Designing a Digital Future: Federally Funded R&D in Networking and Information Technology

Examples of algorithmic impact are prevalent throughout scientific computing. In optimization, interior point methods dramatically accelerated calculations, outpacing hardware advancements for a period. More recently, the influence of algorithms has become apparent in private-sector research. A prime example is Google’s PageRank algorithm, which revolutionized internet search. Once you had used Google to search, you never went back to AltaVista or Yahoo. In the process it also laid the foundation for one of the world’s most influential and prosperous companies; today Google has a market cap in excess of 2 trillion dollars.
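
For the curious, the core idea behind PageRank fits in a few lines: a damped power iteration on the normalized link matrix. The sketch below is a toy illustration of the algorithm described by Page et al. (1998), not anything resembling Google’s production system, and the four-page link matrix is invented for the example.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Toy PageRank via power iteration on a small adjacency matrix.

    adj[i, j] = 1 if page i links to page j. Dangling nodes are handled
    crudely here; real implementations redistribute their rank mass.
    """
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1.0        # avoid divide-by-zero
    transition = adj / out_degree            # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (transition.T @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# A tiny four-page web: most pages link to page 3, which links on to page 2.
links = np.array([[0, 1, 0, 1],
                  [0, 0, 1, 1],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))   # the heavily linked pages accumulate the most rank
```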

“When people design web pages, they often cater to the taste of the Google search algorithm rather than to the taste of any human being.”

— Yuval Noah Harari

More recently, another algorithm has revolutionized the tech world: the transformer. This breakthrough was instrumental in the recent development of large language models (LLMs), which have reshaped the technology landscape in the past couple of years. The transformer’s impact is multifaceted. Most superficially, it excels at consuming and processing data in vector form, aligning perfectly with modern GPU hardware. This synergy has propelled NVIDIA to unprecedented heights of corporate success, lifting it to a trillion-dollar market cap.

Less obvious, but equally significant, is the transformer’s influence on LLM behavior. Unlike previous models that processed data sequentially, the transformer operates on vector data chunks, enabling the network to consider larger contexts. This represents a quantum leap in LLM capabilities and behavior.
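
The central operation is compact enough to sketch. Below is a minimal single-head scaled dot-product attention in the spirit of Vaswani et al. (2017), written with NumPy; the random token embeddings are stand-ins, and a real transformer adds learned projections, multiple heads, and masking.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention.

    Every token attends to every other token through one batched matrix
    product, which is why the operation maps so naturally onto GPUs.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (tokens, tokens) context
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V                                 # blend values by attention

# Toy sequence: 5 tokens with 8-dimensional embeddings, used for self-attention.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)   # (5, 8)
```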

A cautionary tale emerges from the transformer’s history. Google pioneered the algorithm, but others reaped the primary benefits. This highlights a common challenge with algorithmic advancements: those making the initial breakthrough may not see the principal benefits. Moreover, the vision to develop an algorithm often differs from the vision to optimize its use. This presents a persistent hurdle for project managers, who are relentlessly myopic.

“Computer Science is no more about computers than astronomy is about telescopes”

― Edsger Wybe Dijkstra

It is well known that the power of algorithms is on par with the impact of hardware improvements. However, a key distinction lies in the predictability of progress. Algorithmic advancements stem from discovery and inspiration. These are elements that defy the quarterly planning cycles prevalent in contemporary research. An intolerance for failure hinders algorithmic progress. As exemplified by the transformer, algorithms often benefit organizations beyond their originators. Success lies in adapting to the capabilities of these innovative tools.

Algorithms I really care about

My professional focus lies in developing methods to solve hyperbolic conservation laws. The nature of these equations offers significant potential for algorithmic improvements, a fact often overlooked in current research directions. This oversight stems from a lack of clarity about the true measures of success in numerical simulations. The fundamental objective is to produce highly accurate solutions while minimizing computational effort. This is to be achieved while maintaining robustness, flexibility, and physical correctness.

“The scientific method’s central motivation is the ubiquity of error – the awareness that mistakes and self-delusion can creep in absolutely anywhere and that the scientist’s effort is primarily expended in recognizing and rooting out error.”

– David Donoho et al. (2009)

Achieving an unambiguous measure of solution accuracy relies on a process known as code verification. A common misconception about code verification is that it focuses on finding bugs rather than on precise error quantification. It is equally important to understand how computational effort reduces error. Mesh refinement is the standard approach, adding more degrees of freedom. This increases the cost in a well-defined way that depends on the dimensionality of the problem. For a one-dimensional explicit calculation, computational cost scales quadratically with decreasing mesh size due to the linear relationship between time step and mesh size. In two and three dimensions, this scaling becomes cubic and quartic, respectively.
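
A small sketch of that cost growth, assuming the explicit-time-step argument above (cell count grows like the refinement factor to the spatial dimension, and the time step shrinks linearly with the mesh size):

```python
def refinement_cost_factor(refinement, dim):
    """Cost growth for an explicit calculation when the mesh is refined.

    Cells grow like refinement**dim and the time step shrinks linearly with
    the mesh size (CFL), so the total work scales like refinement**(dim + 1).
    """
    return refinement ** (dim + 1)

for dim in (1, 2, 3):
    print(f"{dim}D: halving the mesh costs {refinement_cost_factor(2, dim)}x the work")
# 1D: 4x (quadratic), 2D: 8x (cubic), 3D: 16x (quartic), as described above
```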

Code verification reveals both the precise error (given an exact solution) and the convergence rate. For problems with discontinuities like shock waves, the convergence rate is inherently limited to first order, regardless of the method used. This rate often falls below one due to numerical behavior near linear discontinuities. For simplicity, we will focus on the implications of first-order convergence. Given a fixed convergence rate, the base accuracy of a method becomes paramount. Furthermore, as the convergence rate diminishes, that base accuracy grows in impact.
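
In practice, the observed convergence rate comes from comparing errors at two or more resolutions. A minimal sketch, with made-up error values standing in for a real refinement study:

```python
import math

def observed_order(error_coarse, error_fine, refinement=2.0):
    """Observed convergence rate from errors at two mesh resolutions.

    For error ~ C * h**p, comparing a coarse mesh with a mesh refined by
    `refinement` gives p = log(E_coarse / E_fine) / log(refinement).
    """
    return math.log(error_coarse / error_fine) / math.log(refinement)

# Hypothetical errors from a shock-tube refinement study (illustrative only):
print(observed_order(4.0e-3, 2.1e-3))   # ~0.93, i.e. slightly below first order
```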

While testing is standard practice in hyperbolic conservation law research, it is often inconsistent. Accuracy is typically reported for smooth problems where high-order accuracy can be achieved. However, once smoothness is lost and accuracy drops to first order or less, reporting of error ceases. Notably, problems with shock waves are the very reason we study these equations. The Sod shock tube is a common test case, but results are presented graphically without quantitative comparison. This reflects a common misconception that qualitative assessments suffice after shock formation, disregarding the significance of accuracy differences.

“What’s measured improves”

– Peter Drucker

Because the order of accuracy is limited to first order, even small differences in accuracy become more significant. For standard methods, these base accuracy differences can easily range from two to four times, dramatically impacting the computational cost of reaching a given error level. Minimizing the effort required to achieve a desired accuracy is crucial. The reason is simple: accuracy matters more as the convergence rate decreases. The lower the convergence rate, the greater the impact of accuracy on overall performance.

The algorithmic payoff

Consider a method that halves the error for double the cost at a given mesh resolution. With second-order accuracy the break-even point is one dimension: refining the original method enough to halve its error costs roughly the same factor of two. For third- and fourth-order accuracy, the break-even points shift to two and three dimensions, respectively. These dynamics change entirely when the accuracy is fixed at first order, as the mathematical theory dictates for shocked flows.

“The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.”

– Nick Trefethen

In one dimension, the less accurate method is twice as costly for the same error. This factor escalates to four times in two dimensions and eight times in three dimensions. As accuracy disparities grow, the advantage of higher accuracy expands exponentially. A sixteen-fold error difference can lead to a staggering 65,000-fold cost advantage in 3D. Consequently, even significantly more expensive methods can offer substantial benefits. Essentially, the error difference amortizes the algorithmic cost. Despite this potential, the field remains entrenched in decades-old, low-accuracy approaches. This stagnation is rooted in a fear of failure and short-term thinking, with long-term consequences.
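
One way to reproduce this arithmetic, assuming the explicit cost scaling of roughly (1/h)^(d+1) discussed earlier and a more accurate method that costs twice as much per cell:

```python
def cost_advantage(error_factor, dim, extra_cost=2.0):
    """Net cost advantage of a more accurate first-order method.

    If its error is smaller by `error_factor`, the cheaper method must refine
    its mesh by that same factor to match, multiplying its work by
    error_factor**(dim + 1); dividing by the accurate method's per-cell cost
    penalty (`extra_cost`) gives the net advantage.
    """
    return error_factor ** (dim + 1) / extra_cost

for dim in (1, 2, 3):
    print(f"{dim}D, 2x error gap:  {cost_advantage(2, dim):g}x cheaper")
print(f"3D, 16x error gap: {cost_advantage(16, 3):,.0f}x cheaper")
# 2x, 4x, 8x for a factor-of-two error gap; a sixteen-fold gap gives ~33,000x
# net (about 65,000x before the 2x cost penalty, the figure quoted above).
```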

If failure is not an option, then neither is success.

― Seth Godin

This entire dynamic is inextricably linked to a shift toward short-term focus and risk aversion. Long-term objectives are essential for algorithmic advancement, demanding vision and persistence. The capacity to withstand repeated failures while maintaining faith in eventual success is equally critical. Unfortunately, today’s obsession with short-term project management stifles progress at its inception. This myopic approach is profoundly detrimental to long-term advancement.

References

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on Pure and Applied Mathematics 9, no. 2 (1956): 267-293.

Majda, Andrew, and Stanley Osher. “Propagation of error into regions of smoothness for accurate difference approximations to hyperbolic equations.” Communications on Pure and Applied Mathematics 30, no. 6 (1977): 671-705.

Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Lax, Peter D. “Accuracy and resolution in the computation of solutions of linear and nonlinear equations.” In Recent advances in numerical analysis, pp. 107-117. Academic Press, 1978.

Page, Lawrence, Sergey Brin, Rajeev Motwani, and Terry Winograd. “The PageRank citation ranking: Bringing order to the web.” Technical report, Stanford University, 1998.

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention is all you need.” Advances in neural information processing systems 30 (2017).

Sod, Gary A. “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws.” Journal of Computational Physics 27, no. 1 (1978): 1-31.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.
