
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Dealing with Bias and Calibration in Uncertainty Quantification

06 Friday Jan 2017

Posted by Bill Rider in Uncategorized


It is useless to attempt to reason a man out of a thing he was never reasoned into.

― Jonathan Swift

Most of the computer modeling and simulation examples in existence are subject to bias in their solutions. This bias comes from the numerical solution, modeling inadequacy, and bad assumptions, to name a few of the sources. In contrast, uncertainty quantification is usually applied in a statistical and explicitly unbiased manner. This is a serious difference in perspective. With bias, the difference between simulation and reality is one sided, and the deviation can be cured by calibrating parts of the model to compensate. Unbiased uncertainty is common in measurement error and ends up dominating the approach to UQ in simulations. The result is a mismatch between the dominant mode of uncertainty and how it is modeled. Coming up with a more nuanced and appropriate model that acknowledges and deals with bias appropriately would be great progress.

One of the archetypes of modern modeling and simulation is the climate simulation (and its brethren, weather). These simulations carry with them significant bias associated with lack of computational resolution. The computational mesh is always far too coarse for comfort, and the numerical errors are significant. There are also issues associated with initial conditions, energy balance, and representing physics at and below the level of the grid. In both cases the models are invariably calibrated heavily. This calibration compensates for the lack of mesh resolution, the lack of knowledge of initial data and physics, and the problems with representing the energy balance essential to the simulation (especially for climate). A serious modeling deficiency is the merging of all of these uncertainties into the calibration, with an associated loss of information.

We all see only that which we are trained to see.

― Robert Anton Wilson

The issues with calibration are profound. Without calibration the models are effectively useless. For these models to contribute to our societal knowledge and decision-making, or to raw scientific investigation, the calibration is an absolute necessity. Calibration depends entirely on existing data, and this carries a burden of applicability. How valid is the calibration when the simulation is probing outside the range of the data used to calibrate? We commonly include the intrinsic numerical bias in the calibration, and most commonly a turbulence or mixing model is adjusted to account for the numerical bias. A colleague familiar with ocean models quipped that if the ocean were as viscous as we model it, one could drive from New York to London. It is well known that numerical viscosity stabilizes calculations, and we can use numerical methods to model turbulence (implicit large eddy simulation), but this practice should at the very least make people uncomfortable. We are also left with the difficult matter of how to validate models that have been calibrated.

I just touched on large eddy simulation, which is a particularly difficult topic because numerical effects are always in play. The mesh itself is part of the model with classical LES. With implicit LES the numerical method itself provides the physical modeling, or some part of the model. This issue plays out in weather and climate modeling, where the mesh is part of the model rather than an independent aspect of it. It should surprise no one that LES was born from weather-climate modeling (at a time when the distinction didn’t exist). In other words the chosen mesh and the model are intimately linked. If the mesh is modified, the modeling must also be modified (recalibrated) to get the balancing of the solution correct. This tends to happen in simulations where an intimate balance is essential to the phenomena. In these cases there is a system that in one respect or another is in a near-equilibrium state, and the deviations from this equilibrium are essential. Aspects of the modeling related to the scales of interest, including the grid itself, impact the equilibrium to a degree that an un-calibrated model is nearly useless.

If numerical methods are being used correctly, and at a resolution where the solution can be considered even remotely mesh converged, the numerical error is a pure bias error. A significant problem is that the standard approach to solution verification treats numerical error as unbiased. This is applied in cases where no evidence exists for the error being unbiased! Well-behaved numerical error is intrinsically biased. This is a significant issue because treating a biased error as unbiased represents a significant loss of information. Those who either must or do calibrate their models to account for numerical error rarely estimate the numerical error explicitly, but account for the bias as a matter of course. Ultimately the failure of the V&V community to correctly treat well-behaved numerical error as a one-sided bias is counter-productive. It is particularly problematic in the endeavor to deal proactively with the issues associated with calibration.

Science is about recognizing patterns. […] Everything depends on the ground rules of the observer: if someone refuses to look at obvious patterns because they consider a pattern should not be there, then they will see nothing but the reflection of their own prejudices.

― Christopher Knight

Let me outline how we should be dealing with well-behaved numerical error. If a quantity of interest on a sequence of meshes produces a monotonic approach to a value (assuming the rest of the model is held fixed), then the error is well behaved. The sequence of solutions on the meshes can then be used to estimate the solution to the mathematical problem, that is, the solution where the mesh resolution is infinite (absurd as that might be). Along with this estimate of the “perfect” solution, the error can be estimated for any of the meshes. For this well-behaved case the error is one sided, a bias between the ideal solution and the one with a mesh. Any fuzz in the estimate should be applied to the bias. In other words, any uncertainty in the error estimate is centered about the extrapolated “perfect” solution, not the finite grid solutions. The problem with the currently accepted methodology is that the error is given as a standard two-sided error bar of the sort appropriate for statistical errors. In other words, we use a two-sided accounting for this error even though there is no evidence for it. This is a problem that should be corrected. I should note that many models (e.g., climate or weather) are invariably recalibrated after any mesh change, which invalidates the entire verification exercise, where the model aside from the grid should be fixed across the mesh sequence.
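
To make this concrete, here is a minimal sketch of the extrapolation procedure just described, assuming three solutions of a quantity of interest on meshes refined by a constant ratio; the numbers are hypothetical, and the monotone approach is assumed rather than checked.

```python
import numpy as np

def richardson_extrapolate(f_coarse, f_medium, f_fine, r):
    """Estimate the infinite-resolution ("perfect") solution and the
    one-sided numerical error from three solutions on meshes refined
    by a constant ratio r. Assumes a well-behaved, monotone approach."""
    # Observed order of convergence from the three solutions.
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    # Richardson-extrapolated estimate of the mesh-independent solution.
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # The fine-mesh error is a signed bias; any fuzz in the estimate
    # belongs centered on f_exact, not symmetrically on f_fine.
    bias = f_fine - f_exact
    return f_exact, p, bias

# Hypothetical quantity of interest on meshes refined by a factor of 2.
f_exact, p, bias = richardson_extrapolate(0.9720, 0.9905, 0.9975, r=2.0)
print(f"extrapolated solution ~ {f_exact:.4f}, observed order ~ {p:.2f}")
print(f"fine-mesh error is a one-sided bias of {bias:+.4f}")
```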

I plan to talk more about this issue next week along with a concrete suggestion about how to do better.

When we get to the heart of the matter at hand, dealing with uncertainty in calibrated models, we rapidly come to the conclusion that we need to keep two sets of books. If the first thing that comes to mind is, “that’s what criminals do,” you’re on the right track. You should feel uneasy about this conclusion, and we should all feel a sense of unease regarding this outcome. What do we put in these two books? In the first book we have the calibrated model, and we can rely upon it to reliably interpolate the data it is calibrated with. So for the quantities of interest used to calibrate the model, the model tells us little that is new; it is basically useless there, except perhaps for unveiling uncertainty and inconsistency within the data used for calibration.

A model is valuable for inferring other things from simulation. It is good for looking at quantities that cannot be measured. In this case the uncertainty must be approached carefully. The uncertainty in these values must almost invariably be larger than for the quantities used for calibration. One needs to look at the modeling connections for these values and craft a reasonable approach to treating the quantities with an appropriate “grain of salt”. This includes the numerical error I talked about above. In the best case there is data available that was not used to calibrate the model. Maybe these are values that are not as highly prized or as important as those used to calibrate. The difference between these measured data values and the simulation gives very strong indications regarding the uncertainty in the simulation. In other cases some of the data potentially available for calibration has been left out, and can be used for validating the calibrated model. This assumes that the hold-out data is sufficiently independent of the data used.

A truly massive issue with simulations is extrapolation of results beyond the data used for calibration. This is a common and important use of simulations. One should expect the uncertainty to grow substantially with the degree of extrapolation from the data. A common and pedestrian place to see what this looks like is least squares fitting of data. The variation and uncertainty in the calibrated range is the basis of the estimates, but depending on the nature of the calibrated range of the data and the degree of extrapolation, the uncertainty can grow to be very large. This makes perfectly reasonable sense: as one departs from our knowledge and experience, we should expect the uncertainty in our knowledge to grow.
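
To see this growth in the pedestrian setting just mentioned, here is a small sketch with ordinary least squares on synthetic data; the standard error of the predicted mean balloons as one moves beyond the calibrated range.

```python
import numpy as np

# Synthetic "calibration" data on [0, 1]; the model is fit only here.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 1.5 * x + rng.normal(scale=0.1, size=x.size)

# Ordinary least squares for a straight line y = b0 + b1*x.
X = np.column_stack([np.ones_like(x), x])
beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (x.size - 2)            # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)     # covariance of the coefficients

# Standard error of the predicted mean at points inside and outside
# the calibrated range: it grows with the distance of extrapolation.
for xn in (0.5, 1.0, 2.0, 5.0):
    xv = np.array([1.0, xn])
    se = np.sqrt(xv @ cov @ xv)
    print(f"x = {xn:4.1f}: prediction +/- {2*se:.3f} (2 standard errors)")
```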

A second issue to consider is our second set of books, where the calibration is not taken quite so generously. In this case the most honest approach to uncertainty is to apply significant variation to the parameters used to calibrate the model. In addition we should include the numerical error in the uncertainty. In the case of deeply calibrated models these sources of uncertainty can be quite large and generally paint an overly pessimistic picture of the uncertainty. Conversely, with calibration we have an extremely optimistic picture of the uncertainty. The hope, and the best possible outcome, is that these two views bound reality, and the true uncertainty lies between these extremes. For decision-making using simulation this bounding approach to uncertainty quantification should serve us well.
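
A minimal sketch of this two-books bookkeeping, using a stand-in model with one calibrated parameter; the parameter range and the numerical bias are hypothetical, and a real study would use the actual calibration posterior and a mesh-convergence estimate.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(k):
    """Stand-in for an expensive simulation with one calibrated parameter k."""
    return 1.0 - np.exp(-k)

k_best = 2.0            # calibrated (best-fit) parameter value
numerical_bias = -0.01  # signed bias from a mesh study (hypothetical)

# Book 1 (optimistic): trust the calibration completely.
q_opt = model(k_best)

# Book 2 (pessimistic): vary the calibrated parameter generously and
# carry the numerical bias as an additional one-sided contribution.
k_samples = rng.uniform(1.0, 3.0, size=10_000)
q_samples = model(k_samples) + numerical_bias
lo, hi = np.percentile(q_samples, [2.5, 97.5])

print(f"optimistic (calibrated) prediction: {q_opt:.3f}")
print(f"pessimistic 95% band: [{lo:.3f}, {hi:.3f}]")
# The hope expressed above: reality lies between these two books.
```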

There are three types of lies — lies, damn lies, and statistics.

― Benjamin Disraeli

 

Get Back In The Box

31 Saturday Dec 2016

Posted by Bill Rider in Uncategorized


Change almost never fails because it’s too early. It almost always fails because it’s too late.

– Seth Godin

I read a lot, including books, papers, articles, online content, and whatever else I can get my hands on. My interests are wide and varied, including everything from deep technical science articles to more intellectual takes on popular culture. Among my interests are business and management articles. These speak about various ways of getting the best results from employees using largely positive and empowering techniques. Somehow I never see the techniques espoused in these articles in practice. Increasingly, the articles I read about management and business are science fiction, with an ever-widening gap between reality and the ideal. The same gap is present in the realm of politics and public policy. Bi-partisan forces threaten to push us into an authoritarian future that crushes the human spirit and challenges the ideal, progressive changes needed to make society function better. Inside and outside of work we see the potential of people constricted to produce predictable results that comply with a sense of order and safety.

When I read articles on excellence in management and business, a big part of the message is employee empowerment and motivation. Empowered and motivated employees can be a huge benefit for a company (or by extension a Lab, University, organization, …). Another way of expressing this common message is the encouragement of innovation and problem solving as a route to added value and high performance. Usually this is articulated as out of the box thinking, work and performance. Yet when I return to my reality, the writing seems dramatically out of touch and impossible to imagine being implemented where I work. Almost everything my management does, and everything our “corporate” governance strives for, is compliance, subservience, and in the box thinking. We are pushed to be predictable and downright pedestrian in everything we do. A large part of the ability to tolerate this environment is the articulation of standards of performance. Today standards of performance are defined not by excellence and achievement, but by compliance and predictability. The result is the illusion of excellence and achievement when the reality is exactly the opposite. Remarkably, like cattle moving to slaughter, we go along with it.

The greatest irony of the current era is the need to keep out of the box thinking under control, effectively putting it in the box. You can only be out of the box within strictly defined boundaries lest you create a situation that might not be completely under control. Of course this is a complete oxymoron and leads to the sort of ridiculous outcomes at work we all recognize. We are encouraged to be bold at work as long as we comply with all the rules and regulations. We can be bold in our thinking as long as no risks are taken. It is the theatre of the absurd. We can magically manage our way to getting all the reward without any of the risk. Bold outcomes automatically come with risk, and usually unpredictable results and unintended consequences. All of these things are completely outside the realm of the acceptable today. Our governance is all about predictably intended consequences and the entire system is devoted to control and safety. The bottom line is you can’t have the fruits of boldness, innovation and discovery without risking something and potentially courting disaster. If you don’t take risks, you don’t get the rewards, a maxim that our leaders don’t seem to understand.

One of the great sources for business articles is the well-written and respected Harvard Business Review (HBR). I know my managers read many of the same things I do. They also read business books, sometimes in a faddish manner. Among these is Daniel Pink’s excellent “Drive”. When I read HBR I feel inspired and hopeful (Seth Godin’s books are another source of frustration and inspiration). When I read Drive I was left yearning for a workplace that operated on the principles expressed there. Yet when I return to the reality of work these pieces of literature seem fictional, even more like science fiction. The reality of work today is almost completely orthogonal to these aspirational writings. How can my managers read these things, then turn around and operate the way they do? No one seems to actually think through what implementation of these ideas would look like in the workplace. With each passing year we fall further from the ideal, more toward a workplace that crushes dreams and simply drives people into some sort of cardboard cutout variety of behavior without any real soul.

While work is the focus of my adult world, similar trends are at work on our children. School has become a similarly structured training ground for compliance and squalid mediocrity. Standardized testing is one route to this outcome, where children are trained to take tests and not solve problems. Standardized testing becomes the perfect rubric for the soulless workplace that awaits them in the adult world. The rejection of fact and science by society as a whole is another way. We have a large segment of society that is suspicious of intellect. Too many people now view educated intellectuals as dangerous, and their knowledge and facts are rejected whenever they disagree with the politically chosen philosophy. This attitude is a direct threat to the value of an educated populace. Under a system where intellect is devalued, education transforms into a means of training the population to obey authority and fall into line. The workplace is subject to the same trends; compliance and authority are prized along with predictability of results. The lack of value for intellect is also present within the sort of research institutions I work at, because it threatens predictability of results. As a result out of the box thinking is discouraged, and the entire system is geared to keep everyone in the box. We create systems oriented toward control and safety without realizing the price paid for rejecting exploration and risk. We all live a life less rich and less rewarding as a result, and accumulated over society, this becomes a broad-based diminishment of results.

Be genuine. Be remarkable. Be worth connecting with.

– Seth Godin

When I see my managers reading things like HBR or Drive, I’m left wondering how they square their actions with the distance from what they read. My wife likes to promote “Reality-based Management,” the practical application of principles within a pragmatic approach to achievement. This is good advice that I strive to apply. But there is a limit to pragmatism when the forces within society continually push us away from every ideal. Pragmatism is a force for survival and making the best of a bad situation, but there is a breaking point. When does reality become so problematic that something must change? When does the disempowering force become so great that change must occur? Perhaps we are at this point. I find myself hoping for a wholesale rejection of the forces of compliance that enslave us. Unfortunately we have rejected progressive forces nationally, and embraced the slaveholders who seek to exploit and disempower us. We have accepted being disempowered in trade for safety. Make no mistake, we have handed a yoke and a whip to those who abuse the populace, along with a “mandate” to turn the screws on all of us. In return we all get to be safe, and live a less rich life through the controls such safety requires.

I have to admit to myself that many people prize control and safety above all else. They are willing to reject freedom and rewards if safety can be assured. This is exactly the trade that many Americans have made. Bold, exciting and rewarding lives are traded for safety and predictable outcomes. The same thing is happening for many companies and organizations, and it infests work with compliance through rules and regulations. We see this play out in the reactions to terrorism. Terrorism has paved the way to massive structures of control and societal safety. It also creates an apparatus for big brother to come to fruition in a way that makes Orwell more prescient than ever. The counter to such widespread safety and control is the diminished richness of life that is sacrificed to achieve it. Lives well-lived and bold outcomes are reduced in achieving safety. I’ve gotten to the point where this trade no longer seems worth it. What am I staying safe for? I am risking living a pathetic and empty life in trade for safety and security, so that I can die quietly. This is life in the box, and I want to live out of the box. I want to work out of the box too.

The core message of my work is get in the box and don’t make waves, just do what you’re told. The message from society as a whole may be exactly the same, with order, structure and compliance being prized by a large portion of the population. Be happy with what you’ve got, everything is fine. I suspect that my management is just as disempowered as I am. More deeply, the issues surrounding this problem are societal. Americans are epically disempowered, with many people expressing this dysfunction politically. The horror show is playing out nationally with the election of a historically unpopular and unqualified President simply because he isn’t part of the system. The population as a whole thinks things are a mess. For roughly half the people, electing an unqualified, politically incorrect outsider seems like the appropriate response. The deeper problem is that these in the box forces are not partisan at all; the right does its thing and the left does another, but both seek to disempower the population as a whole.

Change almost never fails because it’s too early. It almost always fails because it’s too late.

– Seth Godin

Some part of Trump’s support comes from people who just want to burn the system to the ground. Another group of people exists on the left who want the same outcome, to destroy the current system. Maybe Trump will destroy the system and create a future, but I seriously doubt it. I’m guessing more of a transition to kleptocratic rule where the government actively works to loot the country for the purpose of enriching a select few. I’d prefer a much more constructive and progressive path to the future where human potential is unleashed and unlocked. Ultimately a lack of progress in fixing the system will eventually lead to something extreme and potentially violent. The bottom line is that the forces enslaving us are driven by the sort of people represented by the leadership of both political parties. The ruling class has power and money with the intent of holding and expanding it, and personal empowerment of common citizens is a threat to their authority. The ruling business class and wealthy elite enjoy power through subtle subjugation of the vast populace. The populace accepts their subjugation in trade for promises of safety and security through the control of risk and danger.

For now, the message at work is get in the box by complying while not making waves and simply doing what you are told to do. No amount of reading about employee empowerment can fix the reality until there is a commitment to a different path. The management can talk till they are blue in the face about their principles, diversity, excellence, teamwork and the power of innovative out of the box thinking, but the reality is the opposite. The national reality is the same, bullshit about everyone mattering, and a truth where very few matter at all. We have handed the reins of power to those who put us in bondage, and we would have done the same if the democrats had won too. There would be real differences in what the bondage looks like, but the result is largely the same. Rather than breaking our chains, we have decided to make the bonds stronger. We can hope that people recognize the error and change course sooner rather than later. As long as we continue to prize safety and security over possibility and potential, we can expect to be disempowered.

We have so much potential waiting to be unleashed by rejecting in the box thinking. To get there we need to reject overwhelming safety, control and compliance. We need to embrace risk and possibility with the faith that our talents can lead us to a greater future, powered by innovative, inspired thinking and lives well lived, by empowering everyone to get out of the box.

The best way to be missed when you’re gone is to stand for something when you’re here.

– Seth Godin

 

Word cloud for the past six months

22 Thursday Dec 2016

Posted by Bill Rider in Uncategorized


[Word cloud image]

I took the last six months of posts and created a word cloud (http://www.wordclouds.com).  Enjoy!
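
For anyone who prefers a script to the website, a rough equivalent is a few lines with the third-party Python wordcloud package; the input filename here is hypothetical.

```python
# A scripted alternative to the website used above; requires the
# third-party "wordcloud" package (pip install wordcloud).
from wordcloud import WordCloud

with open("posts_last_six_months.txt") as f:  # hypothetical dump of the posts
    text = f.read()

cloud = WordCloud(width=800, height=600, background_color="white").generate(text)
cloud.to_file("wordcloud.png")
```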

Verification and Validation with Uncertainty Quantification is the Scientific Method

22 Thursday Dec 2016

Posted by Bill Rider in Uncategorized


tl;dr: VVUQ injects the fundamentals of the scientific method into modeling and simulation. The general lack of VVUQ in HPC should cause one to question how much actual science is being done.

Modeling and simulation has been hailed by many as a third way to do science, taking its place next to theory and observation as one of the pillars of practice. I strongly believe that this proposition does not bear up to scrutiny. For this to be true the advent of modeling and simulation would need to change the scientific method in some fashion; it does not. This does not minimize the importance of scientific computing, but rather puts it into the proper context. Instead of being a new way to do science, it provides tools for doing parts of science differently. First and foremost, modeling and simulation enhances our ability to make predictions and test theories. As with any tool, it needs to be used with care and skill. My proposition is that the modeling and simulation practice of verification and validation combined with uncertainty quantification (VVUQ) defines this care and skill. Moreover, VVUQ provides an instantiation of the scientific method for modeling and simulation. An absence of emphasis on VVUQ in modeling and simulation programs should bring doubt and scrutiny on the level of scientific discourse involved. In order to see this one needs to examine the scientific method in a bit more detail.

The Scientific Method is a wonderful tool as long as you don’t care which way the outcome turns; however, this process fails the second one’s perception interferes with the interpretation of data. This is why I don’t take anything in life as an absolute…even if someone can “prove” it “scientifically.”

― Cristina Marrero

To continue our conversation we need a serious discussion of the scientific method itself. What is it? What are its parts? Who does it, and what do they do? We can then map all the activities from VVUQ onto the scientific method, proving my supposition.

In science and society, the scientific method commands a large degree of reverence. In human discourse few basic processes carry the same degree of confidence and power. The two basic activities in science are theory and observation (experiment), along with some basic actions that power each and drive the connection between these ways of doing science. We devise theories to help explain what we experience in reality. These theories are the result of asking deep questions and proposing hypothesized mechanisms for our experience. Ultimately these theories usually take on the form of principles and mathematical structure. A theory that explains a certain view of reality can then be tested by making a prediction about something in reality that has not been observed. The strength of the prediction is determined by the degree of difference between the observations that formed the basis of the theory and the test of the prediction. The greater the difference in circumstance for the experiment, the stronger the test of the theory. Ultimately there are a great number of details and quality assessments needed to put everything in context.

One thing that modeling and simulation does for science is expand the ability to make predictions from complex and elaborate mathematical models. Many theories produce elaborate and complex mathematical models, which are difficult to solve and inhibit the effective scope of predictions. Scientific computing relaxes these limitations significantly, but only if sufficient care is taken with assuring the credibility of the simulations. The entire process of VVUQ serves to provide the assessment of the simulations so that they may confidently be used in the scientific process. Nothing about modeling and simulation changes the process of posing questions and accumulating evidence in favor of a hypothesis. It does change how evidence is arrived at, by relaxing limitations on the testing of theory. Theories that were not fully testable are now open to far more complete examination, as they may now make broader predictions than classical approaches allowed.

Science has an unfortunate habit of discovering information politicians don’t want to hear, largely because it has some bearing on reality.

― Stephen L. Burns

The first part of VVUQ, the verification, is necessary to be confident that the simulation is a proper solution of the theoretical model, and suitable for further testing. The other element of verification is error estimation for the approximate solution. This is a vastly overlooked aspect of modeling and simulation, where the degree of approximation accuracy is rarely included in the overall assessment. In many cases the level of error is never addressed and studied as part of the uncertainty assessment. Thus verification plays two key roles in scientific study using modeling and simulation: it acts to define the credibility of the approximate solution to the theory being tested, and it provides an estimate of the approximation quality. Without an estimate of the numerical approximation error, we risk conflating this error with modeling imperfections, and obscuring the assessment of the validity of the model. One should be aware of the pernicious practice of simply avoiding error estimation by declarative statements of being mesh-converged. Such a declaration should be coupled with direct evidence of mesh convergence, and the explicit capacity to provide estimates of the actual numerical error. Without such evidence the declaration should be rejected.
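
What might such direct evidence look like? A minimal sketch, assuming solutions of a quantity of interest on a sequence of meshes refined by a constant ratio, is to check monotonicity and compare the observed order of convergence against theory; the data below are hypothetical.

```python
import numpy as np

def convergence_evidence(q, r, p_theory, tol=0.5):
    """Test a claim of mesh convergence from solutions q on a sequence
    of meshes, each refined by ratio r. Returns the observed order and
    a verdict; a bare declaration without this evidence is rejected."""
    q = np.asarray(q, dtype=float)
    diffs = np.diff(q)
    if not (np.all(diffs > 0) or np.all(diffs < 0)):
        return None, "not monotone: no evidence of well-behaved convergence"
    # Observed order from the last two differences in the sequence.
    p_obs = np.log(abs(diffs[-2] / diffs[-1])) / np.log(r)
    if abs(p_obs - p_theory) > tol:
        return p_obs, f"observed order {p_obs:.2f} far from theory {p_theory}"
    return p_obs, "consistent with convergence at the expected order"

# Hypothetical quantity of interest on four meshes, refinement ratio 2,
# for a nominally second-order method.
p_obs, verdict = convergence_evidence([1.150, 1.040, 1.011, 1.003],
                                      r=2.0, p_theory=2.0)
print(p_obs, "->", verdict)
```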

Verification should be a prerequisite for then examining the validity of the model, or validation. As mentioned, validation without first going through verification is prone to false positives or false negatives, with a risk that numerical error will be confused with the true assessment of the theoretical model and its predictions. The issue of counting numerical error as modeling error runs deep and broad in modeling and simulation. A proper VVUQ process with a full breadth of uncertainty quantification must include it. Like any scientific endeavor, the uncertainty quantification is needed to place the examination of models in a proper perspective. When the VVUQ process is slipshod and fails to account for the sources of error and uncertainty, the scientific process is damaged and the value of the simulation is shortchanged.

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

Of course, validation requires data from reality. This data can come from experiments or observation of the natural world. In keeping with the theme, an important element of the data in the context of validation is its quality and a proper uncertainty assessment. Again this assessment is vital for its ability to put the whole comparison with simulations in context, and to help define what a good or bad comparison might be. Data with small uncertainty demands a completely different comparison than data with large uncertainty. Similarly for the simulations, where the level of uncertainty has a large impact on how to view results. When the uncertainty is unspecified, either data or simulation is untethered, and scientific conclusions or engineering judgments are threatened.
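
As a simple sketch of how uncertainty sets the scale of a comparison (deliberately cruder than any formal validation metric), one can measure the simulation-experiment discrepancy against the combined uncertainties:

```python
import math

def discrepancy_in_sigmas(q_sim, u_sim, q_exp, u_exp):
    """Distance between simulation and experiment measured against
    their combined uncertainty (added in quadrature). Small combined
    uncertainty makes the same raw discrepancy a much harder test."""
    u_comb = math.hypot(u_sim, u_exp)
    return abs(q_sim - q_exp) / u_comb

# The same raw disagreement of 0.05, judged against tight and loose data.
print(discrepancy_in_sigmas(1.00, 0.02, 1.05, 0.01))  # ~2.2: troubling
print(discrepancy_in_sigmas(1.00, 0.02, 1.05, 0.10))  # ~0.5: comfortable
```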

It is no exaggeration to note that this perspective is utterly missing from the high performance computing world today and the foolish drive to exascale we find ourselves on. Current exascale programs are almost completely lacking any emphasis on VVUQ. This highlights the lack of science in our current exascale programs. They are naked, hardware-centric programs that show little or no interest in actual science or applications. The holistic nature of modeling and simulation is ignored, and the activities connecting modeling and simulation with reality are systematically starved of resources, focus and attention. It is not too hyperbolic to declare that our exascale programs are not about science.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

The biggest issue in the modern view of project management for VVUQ is its injection of risk into work. We live in a world where spin and BS can easily be substituted for actual technical achievement. Doing VVUQ often results in failures, by highlighting problems with modeling and simulation. One of the greatest skills in being good at VVUQ is honesty. Today it is frequently impossible to be honest about shortcomings because honesty is perceived as vulnerability. Stating weaknesses or limitations of anything cannot be tolerated in today’s political environment, and risks project existence because it is perceived as failure. Instead of an honest assessment of the state of knowledge and level of theoretical predictivity, today’s science prefers to make over-inflated claims and publish via press release. VVUQ, done correctly, runs counter to this practice. Done properly, VVUQ provides people using modeling and simulation for scientific or engineering work with a detailed assessment of credibility and fitness for purpose.

Scientific objectivity is not the absence of initial bias. It is attained by frank confession of it.

― Mortimer J. Adler

Just as science has a self-correcting nature in how the scientific method works, VVUQ is a means of self-correction for modeling and simulation. A proper and complete VVUQ assessment will produce good knowledge of the strengths and weaknesses in the modeling, and where opportunities for improvement lie. A lack of VVUQ highlights both the lack of commitment to science in a project and its unsuitability for serious work. This assessment is quite damning to current HPC efforts, which have failed to include VVUQ at all, much less emphasize it. It is basically a declaration of intent by the program to seek results associated with spin and BS instead of a serious scientific or engineering effort. This end state is signaled by far more than merely a lack of VVUQ, but also by the lack of serious application and modeling support. This simply compounds the lack of method and algorithm support that also plagues the program. The most cynical part of all of this is the centrality of application impact to the case made for the HPC programs. The pitch to the nation or the world is the utility of modeling and simulation to economic or physical security, yet the programs are structured to make sure this cannot happen, and will not be a viable outcome.

We may not yet know the right way to go, but we should at least stop going in the wrong direction.

― Stefan Molyneux

The current efforts seem to be under the impression that giant (unusable, inefficient, monstrous, …) computers will magically produce predictive, useful and scientifically meaningful solutions. I could easily declare those running these programs to be naïve and foolish, but this isn’t the case; the lack of breadth and balance in these programs is willful. People surely know better, so the reasons for the gaps are more complex. We have a complete and utter lack of brave, wise and courageous leadership in HPC. We know better, we just don’t do it.

Embracing Greater Complexity can Spur Progress in Modeling & Simulation

16 Friday Dec 2016

Posted by Bill Rider in Uncategorized


Complexity, therefore, results in flexibility. Increasing complexity always increases capability and adaptability.

― Jacob Lund Fisker

One of the more revealing aspects of a modeling and simulation activity is the character of each part of the activity in terms of complexity, sophistication and emphasis. Examining the balance between simplicity and complexity, and the overall sophistication, is immensely revealing. Typically the level of complexity for each aspect of an activity shows the predispositions of those involved. It also varies deeply with the philosophical groundings of the investigators. Quite often people have innate tendencies that contradict the best interests of the modeling and simulation activity. It is useful to break up the modeling and simulation activity into a set of distinct parts to understand this texture more keenly.

Modeling and simulation is a deep field requiring the combination of a great number of different disciplines to be successful. It invariably requires computers to be used, so software, computer science and computer engineering are involved, but the core of the value arises from the domain sciences and engineering. At the level of practical use we need an emphasis on physics and engineering, with a good bit of application-specific knowledge thrown in. Modeling activities can run the gamut from very specific technology applications to general topics like fluid or solid mechanics. The activities can be more focused on governing equations, or on the closure of these equations with measured physical data, or on elaborate modeling that coarse grains phenomenology into a lower computational cost form. It is the difference in modeling between an equation of state or coefficient of viscosity and a turbulence model, or deriving a model of a solid from a molecular dynamics simulation.

In between we see a blend of mathematics, engineering and physics providing the glue between the specific application-based modeling and the computer needed to run the calculations. As I said before, the emphasis in all of this reveals much about the intentions of the work. Today, the emphasis in modeling and simulation has been drawn away from this middle ground between the utility of modeling and simulation in applications and the powerful computers needed to conduct the calculations. This middle ground defines the efficiency, correctness and power of modeling and simulation. A closer examination of current programs shows clearly that the applications are merely a marketing tool for buying super-powerful computers, and a way of fooling people into believing their purchase has real world value. Lost in the balance is any sense that modeling and simulation is a holistic body of work, succeeding or failing on the degree of synergy derived from successful multidisciplinary collaborations. The result of the current programs’ composition is a lack of equilibrium that is sapping the field of its vitality.

The current exascale emphasis is almost entirely computer hardware focused, where the real world drivers are contrived and vacuous. Aside from using applications to superficially market the computers, the efforts are supported in proportion to their proximity to the computer hardware. As a result large parts of the vital middle ground are languishing without effective support. Again we lose the middle ground that is the source of efficiency and enables the quality of the overall modeling and simulation. The creation of powerful models, solution methods, algorithms, and their instantiation in software all lack sufficient support. Each of these activities has vastly more potential than hardware to unleash capability, yet it remains without effective support. When one makes a careful examination of the program, all the complexity and sophistication is centered on the hardware. The result is a simpler-is-better philosophy for the entire middle ground and those applications drawn into the marketing ploy.

Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper.

― George Pólya

Examining the emphasis on verification and validation draws the same conclusion: there is none; support for V&V is non-existent. As I’ve said on several occasions, V&V is the scientific method embodied. If V&V is absent from the overall activity, there is a lack of seriousness about scientific (or engineering) credibility and the scientific method in general. The lack of support and emphasis on V&V is extremely telling with respect to exascale. Any scientific or applied credibility in the resulting simulations is purely coincidental and not part of programmatic success. V&V spans the scientific enterprise and underpins the true seriousness of applicability and quality in the overall enterprise. If an activity lacks any sort of V&V focus, the true commitment to either application impact or quality results should be questioned strongly.

There is no discovery without risk and what you risk reveals what you value.

― Jeanette Winterson

Within any of these subsets of activities, the emphasis on simplicity can be immensely revealing regarding the philosophy of those involved. Turbulence modeling is a good object lesson in this principle. One can look at several approaches to studying turbulence that focus on great complexity in a single area: modeling for Reynolds-averaged (RANS) flows, solution methods for astrophysics with the PPM method, or direct numerical simulation (DNS) using vast amounts of computer power; but in each area the rest of the study is simple. With RANS the combination of method and computing sophistication is usually quite limited. Alternatively, the PPM method is an immensely successful and complicated numerical method run with relatively simple models and simple meshes. DNS uses vast amounts of computer power on leading edge machines, but uses no model at all aside from the governing equations, and very simple (albeit high-order) methods. As demands for credible simulations grow, we need to embrace complexity in several directions for progress to be made.

Underpinning each of these examples are deep philosophical conclusions about the optimal way to study the difficult problem of turbulence. With RANS there is the desire for practical engineering results, driving a focus on modeling. With PPM, difficult flows with shock waves drive a need for methods with good accuracy and great robustness tailored to the precise difficulties of these flows. DNS is focused on numerical accuracy through vast meshes, computer power, and accurate numerical methods (which can be very fragile). In each case a single area is the focal point of complexity and the rest of the methodology pushes for simplicity. It is quite uncommon to find cases where complexity is embraced in several aspects of a modeling and simulation study. There may be great benefits to doing this, and current research directions are undermining the necessary progress.

Another important area in modeling and simulation is the analysis of the results of calculations. This rapidly gets into the business of verification, validation and uncertainty quantification (VVUQ). Again, the typical study that produces refined results tends to eschew complexity in other areas. This failure to embrace complexity is holding modeling and simulation back. Some aspects of complexity are unnecessary for some applications, or potentially detract from an emphasis on a more valuable complexity. For example, simple meshes may unleash more complex and accurate numerical methods where geometric complexity in meshing has less value. Nonetheless, combined complexity may allow levels of quality in simulations to be achieved that currently elude us. A large part of the inhibition to embracing complexity is the risk it entails in project-based work. Again we see that the current tendency to avoid project risk results in the diminishment of progress by shunning complexity where it is necessary. Put differently, it is easy to saturate the tolerance for risk in the current environment and design programs that fail precisely because they do not attack problems with sufficient aggression.

The greatest risk is not taking any.

― Tim Fargo

For VVUQ, various aspects of complexity can detract from focus significantly. For example, great depth in meshing detail can completely derail verification of calculations. Quite elaborate meshes are created with immense detail, effectively using all the reasonable computing resources. Often such meshes cannot be trivially or meaningfully coarsened to provide well-grounded simulations connected to the finer mesh. To make matters worse, the base mesh, though it could in principle be refined, would then yield a calculation too expensive to conduct. The end result is a lack of verification and error estimation, or more colloquially, a “V&V fail”. This state of affairs is so common as to transition from comedy to outright tragedy. The same dynamic often plays out with UQ work, where the expensive model of interest and its cost of solution squeeze out the computations needed to estimate uncertainty. A better course of action would view the uncertainty estimation holistically and balance numerical, modeling, and experimental error to find the best overall estimate of uncertainty. More importantly, we could more easily produce assessments that are complete and don’t cut corners.

Another key aspect of current practice in high performance computing is the tendency of computer use policies to favor only the most expensive and large calculations. As a result, the numerous smaller calculations necessary for the overall quality of simulation-based studies are discouraged. Often someone seeking to do a good, credible job of simulating needs to conduct a large number of small calculations to support a larger calculation, yet the use policies of the big computers punish such work. The results are large (impressive) calculations that lack any credibility. This problem is absolutely rampant in high performance computing. It is a direct result of a value system that prizes large meaningless calculations over small meaningful calculations. The credibility and meaning of the simulation-based science and engineering is sacrificed on the altar of bigger is better. This value system has perverted large swaths of the modeling and simulation community, undermines VVUQ and ultimately leads to false confidence in the power of computers.

The same issue wreaks havoc on scenario uncertainty, where the experimental result has intrinsic variability and no expectation of uniqueness should exist. For many such cases single experiments are conducted and viewed as the “right” answer. Instead such experiments should be viewed as a single sample from an ensemble of potential physical results. To compound matters, these experiments are either real world events, terribly expensive, or dangerous, or some combination of these. Doing replicate experiments is simply not going to happen. Modeling and simulation should be leaping into this void to provide information and analysis to cover this gap. Today our modeling and simulation capability is utterly and woefully inadequate to fill this role, and the reasons are multiple. A great degree of the blame lies in the basic philosophy of the modelers: the solution of a single well-posed problem, where the reality is an ensemble of ill-posed problems and a distribution of answers.
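
A minimal sketch of that ensemble view, with synthetic numbers: the simulation characterizes the distribution of outcomes, and the single experiment is judged as one plausible draw from it rather than as the unique right answer.

```python
import numpy as np

rng = np.random.default_rng(42)

# An ensemble of simulations over the ill-posed scenario: perturbed
# initial conditions stand in for intrinsic experimental variability.
ensemble = 1.0 + 0.08 * rng.standard_normal(500)

single_experiment = 1.11  # the one (expensive, unrepeatable) measurement

# The question is not "does the simulation match the experiment?" but
# "is the experiment a plausible draw from the simulated ensemble?"
frac_beyond = np.mean(np.abs(ensemble - ensemble.mean())
                      >= abs(single_experiment - ensemble.mean()))
print(f"ensemble mean {ensemble.mean():.3f} +/- {ensemble.std():.3f}")
print(f"fraction of ensemble at least as far out: {frac_beyond:.2f}")
```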

Deeper issues exist with respect to the nature of equations being solved as a mean field theory. This mean field theory effectively removes many of the direct sources of solution variability from the simulation. Each of these complexities has tremendous value for enhancing the value of modeling and simulation, but is virtually unsupported by today’s research agenda. To support such an agenda we need a broad multidisciplinary focus including a complementary experimental program centered around understanding these distributional solutions. Physics and engineering modeling would need to evolve to support closing the equations, and the governing equations themselves would need to be fundamentally altered. Finally a phenomenal amount of applied mathematics would be needed to support appropriate rigor in the analysis of the models (equations), the methods of solutions, and the algorithms.

Instead of this forward looking program that might transform simulation and modeling, we have a backwards looking program obsessed with the computers and slighting everything that produces true value with their use. The highest value and most impactful activities for the real world are provided almost no support. The program is simply interested in putting yesterday’s models, methods, algorithms and codes on tomorrow’s computers. Worse yet, the computer hardware focus is the least effective and least efficient way to increase our grasp on the world through modeling and simulation. For an utterly naïve and uninformed person, the supercomputer is the clear product of modeling and simulation. For the sophisticated and knowledgeable person, the computer is merely a tool, and the real product is the complete and assessed calculation tied to a full V&V pedigree.

To put this conclusion differently, high performance computing hardware is merely necessary for scientific computing that impacts the world. It is far from sufficient. The current programs are focusing on an important necessary element of modeling and simulation, but virtually ignoring a host of the sufficient activities. The consequence is a program that is woefully inadequate to provide the value for society that it promises.

Greatness and nearsightedness are incompatible. Meaningful achievement depends on lifting one’s sights and pushing toward the horizon.

― Daniel H. Pink

 

Can we overcome toxic culture before it destroys us?

09 Friday Dec 2016

Posted by Bill Rider in Uncategorized


Then the shit hit the fan.

― John Kenneth Galbraith

I’m an unrelenting progressive. This holds true for politics, work and science, where I always see a way for things to get better. I’m very uncomfortable with just sitting back and appreciating how things are. Many who I encounter see this as a degree of pessimism, since I see the shortcomings in almost everything. I keenly disagree with this assessment. I see my point-of-view as optimism. It is optimism because I know things can always get better, always improve and constantly achieve a better end state. The people who I rub the wrong way are the proponents of the status quo, who see the current state of affairs as just fine. The difference in worldview is really between my deep reaching desire for a better world and a world that is good enough already. Often the greatest enemy of getting to a better world is a culture that is a key element of the world as it exists. Change comes whether the culture wants it or not, and problems arise when the prevailing culture is unfit for these changes. Overcoming culture is the hardest part of change; even when the culture is utterly toxic, it opposes changes that would make things better.


I’ve spent a good bit of time recently contemplating the unremitting toxicity of our culture. We have suffered through a monumental presidential election with two abysmal candidates both despised by a majority of the electorate. The winner is an abomination of a human being, clearly unfit for a public office worthy of respect. He is totally unqualified for the position he will hold, and will likely be the most corrupt person to ever hold the job. The loser was thoroughly qualified, potentially corrupt too, and would have had a failed presidency because of the toxic political culture in general. We have reaped this entire legacy by allowing the public and political institutions to wither for decades. It is arguable that this erosion is the willful effort of those charged by the public with governing us. Among the institutions under siege and damaged in our current era are the research institutions where I work. These institutions have cultures from a bygone era, completely unfit for the modern world, yet unmoving and unevolved in the face of new challenges.

This sentiment of dysfunction applies to the obviously toxic public culture, but to the workplace culture too. In the workplace the toxicity is often cloaked in a tidy professional wrapper, and seems wondrously nice, decent and completely OK. Often this professional wrapper shows itself as horribly passive aggressive behavior that the organization basically empowers and endorses. The problem is not the behavior of the people in the culture toward each other, but the nature of the attitude toward work. Quite often we have this layered approach that puts a well-behaved, friendly face on the complete disempowerment of employees. Increasingly the people working in the trenches are merely cannon fodder, and everything important to work happens with managers. Where I work, the toxicity of the workplace and politics collide to produce a double whammy. We are under siege from a political climate that undermines institutions and a business-management culture that undermines the power of the worker.

Great leaders create great cultures regardless of the dominant culture in the organization.

― Bob Anderson

I’m reminded of the quote “culture eats strategy” (attributed to Peter Drucker) and wonder whether or not anything can be done to cure our problems without first addressing the toxicity of the underlying culture. I’ll hit upon a couple examples of the toxic cultures in the workplace and society in general. Both of these stand in opposition to a life well led. No amount of concrete strategy and clarity of thought can allow progress when the culture opposes it.

I am embedded in a horribly toxic workplace culture, which reflects a deeply toxic broader public culture. Our culture at work is polite and reserved, to be sure, but toxic to all the principles our managers promote. Recently a high level manager espoused a set of high-level principles to support: diversity & inclusion, excellence, leadership, and partnership & collaboration. None of these principles is actually seen in reality, and everything about how our culture operates opposes them. Truly leading and standing for the values espoused with such eloquence, by identifying and removing the barriers to their actual realization, would be a welcome remedy to the normal cynical response. Instead the reality is completely ignored and the fantasy of living up to such values is promoted. It is not clear whether the manager knows the promoted values are fiction, or simply exists in a disconnected fantasy world. Either situation is utterly damning. The manager either knows the values are fiction, or is so disconnected from reality as to believe the fiction. The end result is the same: no actions to remove the toxic culture are ever taken, and the culture’s role in undermining values is not acknowledged.

In a starkly parallel sense, we have an immensely toxic culture in our society today. The two toxic cultures certainly have connections, and the societal culture is far more destructive. We have all witnessed the most monumental political event of our lives resulting directly from the toxic culture playing out. The election of a thoroughly toxic human being as President is a great exemplar of the degree of dysfunction today. Our toxic culture is spilling over into societal decisions that may have grave implications for our combined future. One outcome of the toxic societal choice could be a sequence of events that induces a crisis of monumental proportions. Such crises can be useful in fixing problems, destroying the toxic culture, and allowing its replacement by something better. Unfortunately such crises are painful, destructive and expensive. People are killed. Lives are ruined and pain is inflicted broadly. Perhaps this is the cost we must bear in the wake of allowing a toxic culture to fester and grow in our midst.

Reform is usually possible only once a sense of crisis takes hold…. In fact, crises are such valuable opportunities that a wise leader often prolongs a sense of emergency on purpose.

― Charles Duhigg

Cultures are usually developed, defined and encoded through the resolution of crisis. In these crises old cultures fade, being replaced by a new culture that succeeds in assisting the resolution of the crisis. If the resolution of the crisis is viewed as a success, the culture becomes a monument to that success. People wishing to succeed adopt the cultural norms and re-enforce the culture’s hold. Over time such cultural touchstones become aged and incapable of dealing with modern reality. We see this problem in spades today, both in the workplace and society-wide. The older culture in place cannot deal effectively with the realities of today. Changes in economics, technology and populations are creating a set of challenges for older cultures, which they are unfit to manage. Seemingly we are being plunged headlong toward a crisis necessary to resolve the cultural inadequacies. The problem is that the crisis will be an immensely painful and horrible circumstance. We may simply have no choice but to go through it, and hope we have the wisdom and strength to get to the other side of the abyss.

Crisis is Good. Crisis is a Messenger

― Bryant McGill

A crisis is a terrible thing to waste.

― Paul Romer

What can be done about undoing these toxic cultures without crisis? The usual remedy for a toxic culture is a crisis that demands effective action. This is an unpleasant prospect whether one is part of an organization or a country, but it is the course we find ourselves on. One of the biggest problems with toxic culture is its self-perpetuating nature: the toxic culture defends itself. Our politicians and managers are creatures whose success has been predicated on the toxic culture. These people are almost completely incapable of making the decisions necessary to avoid the sorts of disasters that characterize a crisis. The toxic culture and those who succeed in it are unfit to resolve crises successfully. Our leaders are the most successful people in the toxic culture and act to defend it in the face of overwhelming evidence of its toxicity. As such they do nothing to avoid the crisis even when it is obvious, and make the eventual disaster inevitable.

Can we avoid this? I hope so, but I seriously doubt it. I fear that events will eventually unfold that will have us longing for the crisis to rescue us from the slow-motion zombie existence today's public and workplace cultures inflict on all of us.

The Chinese use two brush strokes to write the word ‘crisis.’ One brush stroke stands for danger; the other for opportunity. In a crisis, be aware of the danger–but recognize the opportunity.

― John F. Kennedy

We are ignoring the greatest needs & opportunities for improving computational science

02 Friday Dec 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

 

We are lost, but we’re making good time.

― Star Trek V

Lately I've been doing a lot of thinking about the focus of research. Time and time again the entirety of our current focus seems to be driven by things that are far from optimal. Little active or critical thought has been applied to examining the best path forward. If progress is to be made, a couple of questions should be central to our choices: is there a distinct opportunity for progress? And would that progress produce a large impact? Good choices combine the opportunity for successful progress with the impact and importance of the work. This simple principle of decision-making would make a huge difference in improving our choices.

The significant problems we face cannot be solved at the same level of thinking we were at when we created them.

– Albert Einstein

Examples of the two properties of opportunity and impact coming together abound in the history of science. The archetype of this would be the atomic bomb coming from the discoveries of basic scientific principles combined with overwhelming need in the socio-political worlds. At the end of the 19th century and beginning of the 20th century a massive revolution occurred in physics, fundamentally changing our knowledge of the universe. The needs of global conflict pushed us to harness this knowledge to unleash the power of the atom. Ultimately the technology of atomic energy became a transformative political force, probably stabilizing the world against massive conflict. More recently, computer technology has seen a similar set of events play out in a transformative way, first scientifically, then in engineering, and finally in the profound societal impact we are just beginning to see unfold.

If we pull our focus into the ability of computational power to transform science, we can easily see the failure to recognize these elements in current ideas. We remain utterly tied to the pursuit of Moore's law even as it lies in the morgue. Rather than examine the needs of progress, we remain tied to the route taken in the past. The focus of work has become ever more computer (machine) directed, and other more important and beneficial activities have withered from lack of attention. In the past I've pointed out the greater importance of modeling, methods, and algorithms in comparison to machines. Today we can look at another angle on this: the time it takes to produce useful computational results, or workflow.

Simultaneous to this unhealthy obsession, we ignore far greater opportunities for progress sitting right in front of us. The most time-consuming part of a computational study is rarely the execution of the computer code. The time-consuming parts are defining the model to be solved (often generating meshes) and analyzing the results of any solution. If one actually wished to engage in rigorous V&V because the quality of the results really mattered, the focus would be dramatically different (working from the observation that speeding up computations yields diminishing returns once V&V is taken seriously). If one takes the view that V&V is simply the scientific method, the time demands only increase dramatically, and the gravity of engaging in time-consuming activities only grows. What we suffer from is magical thinking on the part of those who "lead" computational science, ignoring what should be done in favor of what can be more easily funded. This is not leadership, but rather the complete abdication of it.

When we look at the issue we are reminded of Amdahl's law, which establishes that when a program contains a part that cannot be sped up, that part eventually controls the overall speed under optimization. Today we focus on speeding up the computation, which isn't even the dominant cost in computational science. We are putting almost no effort into speeding up the parts of computational science that take all the time. As a result the efforts put into improving computation will yield fleeting benefits to the actual conduct of science. This is a tragedy of lost opportunity. There is a common lack of appreciation for actual utility in research, arising from a naive and simplistic view of how computational science is done. This view comes from the marketing of high performance computing as basically requiring only a single magical calculation from which science almost spontaneously erupts. Of course this never happens, and the lack of scientific process in computational science is a pox on the field.
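To see how brutal this arithmetic is, consider a minimal sketch of Amdahl's law applied to the whole workflow rather than to the code alone. The 80% non-compute fraction below is an illustrative assumption, not a measurement.

    def amdahl_speedup(serial_fraction, compute_speedup):
        """Overall speedup when only the non-serial part is accelerated."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / compute_speedup)

    # Assume 80% of a study's wall time is meshing, setup, and analysis
    # (an illustrative number), and only the remaining 20% is computation.
    for s in (10, 100, 1000, 1_000_000):
        print(f"compute {s:>9,}x faster -> workflow {amdahl_speedup(0.8, s):.3f}x faster")
    # The workflow speedup saturates at 1/0.8 = 1.25x: a million-fold faster
    # computer barely moves the time to scientific results.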

For engineering calculations with complex geometries, developing a model often takes months. In many cases this time budget is dominated by mesh generation. There are aspects of trial and error where putative meshes are defined, tested and then refined. On top of this, the specification of the physical modeling of the problem is immensely time-consuming. Testing and running a computational model more quickly can come in handy, as can faster mesh generation, but the human element in these practices is usually the choke point. We see precious little effort to do anything consequential about this part of the effort in computational science. For many problems this is the single largest component of the effort.

Once the model has been crafted and solved via computation, the results need to be analyzed and understood. Again, the human element in this practice is key. Effort in computing today for this purpose is concentrated in visualization technology. This may be the simplest and clearest example of the overwhelmingly transparent superficiality of current research. Visualization is useful for marketing science, but produces stunningly little actual science or engineering. We are more interested in funding tools for marketing work than actually doing work. Tools for extracting useful engineering or scientific data from calculations usually languish. They have little "sex appeal" compared to flashy visualization, but carry all the impact on the results that matter. If one is really serious about V&V, all of these issues are compounded dramatically. For doing hard-nosed V&V, visualization has almost no value whatsoever.

If you are inefficient, you have a right to be afraid of the consequences.

― Murad S. Shah

In the end all of this is evidence that current high performance computing programs have little interest in actual science or engineering. They are hardware focused because the people leading them like hardware and neither care about nor understand science and engineering. The people running the show are little more than hardware-obsessed "fan boys" who care little about science. They succeed because of a track record of selling hardware-focused programs, not because it is the right thing to do. The role of computation in science should be central to our endeavor instead of a sideshow that receives little attention and less funding. Real leadership would provide a strong focus on completing important work that could impact the bottom line: doing better science with computational tools.

He who is not satisfied with a little, is satisfied with nothing.

― Epicurus

 

Dissipation isn’t bad or optional

24 Thursday Nov 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Von Neumann told Shannon to call his measure entropy, since "no one knows what entropy is, so in a debate you will always have the advantage."

― Jeremy Campbell

Too often in discourse about numerical methods, one gets the impression that dissipation is something to be avoided at all costs. Calculations are constantly under attack for being too dissipative. Rarely does one ever hear about calculations that are not dissipative enough. A reason for this is that too little dissipation tends to cause outright instability, while too much dissipation is the signature of low-order methods. In between too little dissipation and instability lies a wealth of unphysical solutions, oscillations and terrible computational results. These results may be all too common because of people's standard disposition toward dissipation. The problem is that too few among the computational cognoscenti recognize that too little dissipation is as poisonous to results as too much (maybe more so).

Why might I say that it is more problematic than too much dissipation? A big part of the reason is the physical realizability of solutions. A solution with too much dissipation is utterly physical in the sense that it can be found in nature. The solutions with too little dissipation more often than not cannot. This is not because those solutions are unstable; they are stable and have some dissipation, but they simply aren't dissipative enough to match natural law. What many do not recognize is that natural systems actually produce a large amount of dissipation without regard to the size of the mechanisms for explicit dissipative physics. This is both a profound physical truth and the result of acute nonlinear focusing. It is important for numerical methods to recognize this necessity. Furthermore, this fact of nature reflects an uncomfortable coming together of modelling and numerical methods that many simply choose to ignore as an unpleasant reality.

In this house, we obey the laws of thermodynamics!

– Homer Simpson

Entropy stability is an increasingly important concept in the design of robust, accurate and convergent methods for solving systems defined by nonlinear conservation laws (see Tadmor 2016). The schemes are designed to automatically satisfy an entropy inequality that comes from the second law of thermodynamics, $dS/dt \le 0$ for a convex mathematical entropy. Implicit in the thinking about the satisfaction of the entropy inequality is a view that approaching the limit $dS/dt = 0$ as viscosity becomes negligible (i.e., inviscid) is desirable. This is a grave error in thinking about the physical laws of direct interest: the solution of conservation laws does not satisfy this limit when flows are inviscid. Instead the solutions of interest (i.e., weak solutions with discontinuities) in the inviscid limit approach a solution where the entropy production scales with the cube of the large-scale variation in the solution, $|dS/dt| \sim C \left(\Delta u\right)^3$. This scaling appears over and over in the solution of conservation laws including Burgers' equation, the equations of compressible flow, MHD, and incompressible turbulence (Margolin & Rider, 2002). The seeming universality of these relations and their implications for numerical methods are discussed below in more detail; the profound implications for turbulence modelling are explored in detail for implicit LES (our book, edited by Grinstein, Margolin & Rider, 2007). Valid solutions will invariably produce the inequality, but the route to achieving it varies greatly.
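To make the $(\Delta u)^3$ claim concrete, here is a minimal numerical sketch of my own (not taken from the references): a first-order Godunov scheme for Burgers' equation, where the exact entropy dissipation rate across a shock with jump $\Delta u$ is $(\Delta u)^3/12$ for the entropy $u^2/2$. The grid size, domain, and jump values are illustrative choices.

    import numpy as np

    def godunov_flux(ul, ur):
        """Exact Godunov flux for Burgers' equation, f(u) = u**2/2."""
        if ul >= ur:                          # shock: take the upwind side
            s = 0.5 * (ul + ur)
            return 0.5 * ul * ul if s > 0 else 0.5 * ur * ur
        if ul > 0:  return 0.5 * ul * ul      # rarefaction, fully right-moving
        if ur < 0:  return 0.5 * ur * ur      # rarefaction, fully left-moving
        return 0.0                            # transonic rarefaction

    uL, uR = 1.0, 0.0                         # jump du = 1; exact dissipation 1/12
    N, L = 400, 2.0
    dx = L / N
    x = (np.arange(N) + 0.5) * dx
    u = np.where(x < 0.5, uL, uR)             # right-moving shock, speed 0.5

    entropy = lambda v: 0.5 * np.sum(v * v) * dx
    t, snaps = 0.0, {}
    while t < 1.0:
        dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)
        F = np.empty(N + 1)
        F[0], F[-1] = godunov_flux(uL, u[0]), godunov_flux(u[-1], uR)
        F[1:-1] = [godunov_flux(u[i], u[i + 1]) for i in range(N - 1)]
        u = u - dt / dx * (F[1:] - F[:-1])
        t += dt
        if t >= 0.2 and 't1' not in snaps:    # skip the startup smearing transient
            snaps['t1'], snaps['E1'] = t, entropy(u)
    snaps['t2'], snaps['E2'] = t, entropy(u)

    # Entropy balance: dE/dt = q_in - q_out - D, with entropy flux q = u**3/3.
    q = lambda v: v ** 3 / 3.0
    D = (q(uL) - q(uR)) - (snaps['E2'] - snaps['E1']) / (snaps['t2'] - snaps['t1'])
    print(f"measured dissipation {D:.4f} vs exact (du)^3/12 = {(uL - uR) ** 3 / 12:.4f}")

The measured rate matches $(\Delta u)^3/12$ even though the scheme contains no explicit viscosity at all; the dissipation is produced by the numerics, exactly as the inviscid limit demands.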

The satisfaction of the entropy inequality can be achieved in a number of ways, and the one most worth avoiding is oscillations in the solution. Oscillatory solutions from nonlinear conservation laws are as common as they are problematic. In a sense, the proper solution is a strong attractor, and solutions will adjust to produce the necessary amount of dissipation. One vehicle for entropy production is oscillations in the solution field. Such oscillations are unphysical and can result in a host of issues undermining other physical aspects of the solution, such as positivity of quantities like density and pressure. They are to be avoided to whatever degree possible. If explicit action isn't taken to avoid oscillations, one should expect them to appear.

There ain’t no such thing as a free lunch.

― Pierre Dos Utt

A more proactive approach to dissipation leading to entropy satisfaction is generally desirable. Another path toward entropy satisfaction is offered by numerical methods in control volume form. For second-order numerical methods, analysis of the approximation via the modified equation methodology unveils nonlinear dissipation terms that provide the necessary form for satisfying the entropy inequality via a nonlinearly dissipative term in the truncation error. This truncation error takes the form $C u_x u_{xx}$, which integrates to replicate inviscid dissipation as a residual term in the "energy" equation, $C\left(u_x\right)^3$. This term comes directly from being in conservation form and disappears when the approximation is in non-conservative form. In large part the outsized success of these second-order methods is related to this character.

Other options to add this character to solutions include an explicit nonlinear (artificial) viscosity or a Riemann solver. The nonlinear hyperviscosities discussed before on this blog work well. One of the pathological misconceptions in the community is the belief that the specific form of the viscosity matters. This thinking infests direct numerical simulation (DNS), as it perhaps should, but the reality is that the form of dissipation is largely immaterial to establishing physically relevant flows. In other words, inertial-range physics does not depend upon the actual form or value of the viscosity; its impact is limited to the small scales of the flow. Each approach has distinct benefits as well as shortcomings. The key thing to recognize is the necessity of taking some sort of conscious action to achieve this end. The benefits and pitfalls of the different approaches are discussed below.
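As one concrete sketch of such conscious action (my illustration, with an assumed coefficient form and constant), a von Neumann-Richtmyer-flavored nonlinear viscosity uses $\nu = C\,\Delta x^2 |u_x|$, so the dissipation it delivers at a discontinuity is $O((\Delta u)^3)$ and independent of the mesh spacing.

    import numpy as np

    def artificial_viscous_flux(u, dx, C=1.0):
        """Nonlinear artificial viscosity flux at the cell interfaces.

        With nu = C * dx**2 * |du/dx|, the local dissipation rate is
        nu * (du/dx)**2 ~ C * dx**2 * |du/dx|**3. At a shock smeared over
        O(dx), du/dx ~ (jump)/dx, so the integrated dissipation is
        O(jump**3), independent of dx, matching the inviscid entropy
        production discussed above.
        """
        dudx = np.diff(u) / dx             # slope at each interface
        nu = C * dx ** 2 * np.abs(dudx)    # nonlinear coefficient, tiny where smooth
        return -nu * dudx                  # dissipative flux, one value per interface

    # Used by adding the flux divergence to the physical update of cell i:
    #   u_new[i] = u[i] - dt/dx * (F_phys[i+1] - F_phys[i] + g[i] - g[i-1]),
    # where g = artificial_viscous_flux(u, dx) for the interior interfaces.

The quadratic, slope-dependent coefficient is the design point: it vanishes rapidly in smooth regions, so the scheme only pays the dissipative price where the solution demands it.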

Enforcing the proper sort of entropy production through Riemann solvers is another possibility. A Riemann solver is simply a way of upwinding for a system of equations. For linear interaction modes the upwinding is purely a function of the characteristic motion in the flow, and induces a simple linear dissipative effect. This shows up as a linear even-order truncation error in modified equation analysis, where the dissipation coefficient is proportional to the absolute value of the characteristic speed. For nonlinear modes in the flow, the characteristic speed is a function of the solution, which induces a set of entropy considerations. The simplest and most elegant condition is due to Lax, which says that the characteristics dictate that information flows into a shock. In a Lagrangian frame of reference for a right-running shock this would look like $c_{\text{left}} > c_{\text{shock}} > c_{\text{right}}$, with $c$ being the sound speed. It has a less clear, but equivalent, form through a nonlinear sound speed, $c(\rho) = c(\rho_0) + \frac{\Delta \rho}{\rho} \frac{\partial \rho c}{\partial \rho}$. The differential term is the fundamental derivative, which characterizes the nonlinear response of the sound speed to the solution itself. This same condition can be seen in a differential form and dictates some essential sign conventions in flows. The key is that these conditions have a degree of equivalence. The differential form lacks the simplicity of Lax's condition, but its beauty is that it establishes a clear connection to artificial viscosity.
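The admissibility logic is easiest to see for Burgers' equation, where the characteristic speed is just $u$ and the Rankine-Hugoniot shock speed is the average of the two states. This toy check is my illustration, not a production test.

    def lax_condition_burgers(uL, uR):
        """Lax condition: characteristics must run into the shock,
        c_left > c_shock > c_right, with c(u) = u for Burgers' equation."""
        s = 0.5 * (uL + uR)   # Rankine-Hugoniot shock speed for f(u) = u**2/2
        return uL > s > uR

    print(lax_condition_burgers(1.0, 0.0))  # True: admissible compressive shock
    print(lax_condition_burgers(0.0, 1.0))  # False: entropy-violating expansion shock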

The key to this entire discussion is realizing that dissipation is a fact of reality. Avoiding it is simply a demonstration of an inability to confront the non-ideal nature of the universe. This is simply contrary to progress and a sign of immaturity. Let’s just deal with reality.

The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

– Sir Arthur Stanley Eddington

References

Tadmor, E. (2016). Entropy stable schemes. Handbook of Numerical Analysis.

Margolin, L. G., & Rider, W. J. (2002). A rationale for implicit turbulence modelling. International Journal for Numerical Methods in Fluids, 39(9), 821-841.

Grinstein, F. F., Margolin, L. G., & Rider, W. J. (Eds.). (2007). Implicit large eddy simulation: Computing turbulent fluid dynamics. Cambridge University Press.

Lax, P. D. (1973). Hyperbolic systems of conservation laws and the mathematical theory of shock waves (Vol. 11). SIAM.

Harten, A., Hyman, J. M., Lax, P. D., & Keyfitz, B. (1976). On finite-difference approximations and entropy conditions for shocks. Communications on Pure and Applied Mathematics, 29(3), 297-322.

Dukowicz, J. K. (1985). A general, non-iterative Riemann solver for Godunov's method. Journal of Computational Physics, 61(1), 119-137.

A Single Massive Calculation Isn’t Science; it is a tech demo

17 Thursday Nov 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

People almost invariably arrive at their beliefs not on the basis of proof but on the basis of what they find attractive.

― Blaise Pascal

When we hear about supercomputing, the media focus and press releases are always about massive calculations. Bigger is always better, with as many zeros as possible and some sort of exotic name for the rate of computation: mega, tera, peta, exa, zetta... Up and to the right! The implicit proposition is that the bigger the calculation, the better the science. This is quite simply complete and utter bullshit. These big calculations providing the media footprint for supercomputing and winning prizes are simply stunts, or more generously technology demonstrations, and not actual science. Scientific computation is a much more involved and thoughtful activity involving lots of different calculations, many at a vastly smaller scale. Rarely, if ever, do the massive calculations come as a package including the sorts of evidence science is based upon. Real science has error analysis and uncertainty estimates, and in this sense the massive calculations do a disservice to computational science by skewing the picture of what science using computers should look like.

This post aims to correct this rather improper vision, and replace it with a discussion of what computational science should be.

With a substantial amount of focus on the drive toward the first exascale supercomputer, it is high time to remind everyone that a single massive calculation is a stunt meant to sell the purchase of said computers, and not science. This week the supercomputing community is meeting in Salt Lake City for a trade show masquerading as a scientific conference. It is simply another in a phalanx of echo chambers we seem to form with increasing regularity across every sector of society. I'm sure the cheerleaders for supercomputing will be crowing about the transformative power of these computers and the boon for science they represent. There will be celebrations of enormous calculations and pronouncements about their scientific value. There is a certain lack of political correctness to the truth about all this; it is mostly pure bullshit.

The entire enterprise pushing toward exascale is primarily a technology push program. It is a furious and futile attempt to stave off the death of Moore's law. Moore's law has provided an enormous gain in the power of computers for 50 years and enabled much of the transformative power of computing technology. The key point is that computers and software are just tools; they are incredibly useful tools, but tools nonetheless. Tools allow a human being to extend their own biological capabilities in a myriad of ways. Computers are marvelous at replicating and automating calculations and thought operations at speeds utterly impossible for humans. Everything useful done with these tools is utterly dependent on human beings to devise. My key critique of this approach to computing is the hollowing out of the investigation into devising better ways to use computers while focusing myopically on enhancing the speed of computation.

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The core of my assertion that it's mostly bullshit comes from looking at the scientific method and its application to these enormous calculations. The scientific method is fundamentally about understanding the World (and using this understanding via engineering). The World is observed either in its natural form, or through experiments devised to unveil difficult-to-see phenomena. We then produce explanations or theories to describe what we see and allow us to predict what we haven't seen yet. The degree of agreement between the theory and the observations confirms our degree of understanding. There is always a gap between our theory and our observations, and each is imperfect in its own way. Observations are intrinsically prone to a variety of errors, and theory is always imperfect. The solutions to theoretical models are also imperfect, especially when solved via computation. Understanding these imperfections and the nature of the comparisons between theory and observation is essential to a comprehension of the state of our science.

As I've stated before, the scientific method applied to scientific computing is embedded in the practice of verification and validation. Simply stated, a single massive calculation cannot be verified or validated (it could be, but not with current computational techniques, and the development of such capability is a worthy research endeavor). The uncertainties in the solution and the model cannot be unveiled in a single calculation, and the comparison with observations cannot be put into a quantitative context. The proponents of our current approach to computing want you to believe that massive calculations have intrinsic scientific value. Why? Because they are so big, they have to be the truth. The problem with this thinking is that any single calculation does not contain the steps necessary for determining the quality of the calculation, or for putting any model comparison in context.

The context of any given calculation is determined by the structure of the errors associated with the computational modeling. For example, it is important to understand the nature of any numerical errors and to produce an estimate of them. In some (many, most) cases a very good comparison between reality and a model is the result of calibration of uncertain model parameters. In many cases the choices for the modeling parameters are mesh dependent, which produces the uncomfortable outcome that a finer mesh yields a systematically worse comparison. This state of affairs is incredibly common, and generally an unadvertised feature.

An important meta-feature of the computing dialog is the skewing of computer size, design and abilities. For example, the term capability computer refers to the machines that produce the largest calculations we see, the ones in press releases. These computers are generally the focus of all the attention and cost the most money. The dirty secret is that they are almost completely useless for science and engineering. They are technology demonstrations and little else. They do almost nothing of value for the myriad of programs purporting to use computations to produce results. All of the utility to actual science and engineering comes from the homely cousins of these supercomputers, the capacity computers. These computers are the workhorses of science and engineering because they are set up to do something useful. The capability computers are just show ponies, and perfect exemplars of the modern bullshit-based science economy. I'm not OK with this; I'm here to do science and engineering. Are our so-called leaders OK with the focus of attention (and the bulk of funding) being non-scientific, media-based press release generators?

How would we do a better job with science and high performance computing?

The starting point is the full embrace of the scientific method. Taken at face value, the observational or experimental community is expected to provide observational uncertainties with their data. These uncertainties should be de-convolved into errors/uncertainties in the raw measurement and any variability in the phenomena. Those of us using such measurements for validating codes should demand that observations always come with these uncertainties. By the same token, computational simulations have uncertainties from a variety of numerical errors and modeling choices and assumptions, and these should be demanded too. Each of these error sources needs to be characterized to put any comparison with observational/experimental data into context. Without knowledge of these uncertainties on both sides of the scientific process, any comparison is completely untethered.
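One way to make that context concrete is a simple comparison in the spirit of an ASME V&V 20-style validation metric. The function and every number below are my illustrative assumptions, not a standard implementation.

    import math

    def validation_comparison(sim, exp, u_num, u_input, u_exp, k=2.0):
        """Compare simulation and experiment against the combined uncertainty.

        E = sim - exp is judged against u_val = sqrt(u_num^2 + u_input^2 + u_exp^2);
        |E| > k * u_val suggests model-form error beyond the known uncertainties.
        """
        E = sim - exp
        u_val = math.sqrt(u_num ** 2 + u_input ** 2 + u_exp ** 2)
        return E, u_val, abs(E) > k * u_val

    # Illustrative numbers: an 8% over-prediction with modest numerical and
    # input uncertainty, but a fairly uncertain measurement.
    E, u_val, model_form = validation_comparison(
        sim=1.08, exp=1.00, u_num=0.02, u_input=0.03, u_exp=0.05)
    print(f"E = {E:+.3f}, u_val = {u_val:.3f}, model-form error flagged: {model_form}")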

If nothing else, the uncertainty in any aspect of this process provides a degree of confidence and the significance of comparative differences. If a comparison between a model and data is poor, but the data has large uncertainties, the comparison suddenly becomes more palatable. On the other hand, small uncertainties in the data would imply that the model is potentially too incorrect. This conclusion would be made once the modeling uncertainty has been explored. One reasonable case would be the identification of large numerical errors in the model's solution. This is the case where a refined calculation might be genuinely justified. If the bias on a coarse grid is large enough, a finer grid calculation could be a reasonable way of improving agreement. There are certainly cases where exascale computing is enabling, producing model solutions with small enough error to make models useful. This case is rarely made or justified in any massive calculation, rather being asserted by authority.

On the other hand, numerical error could be a small contributor to the disagreement. In this case, which is incredibly common, a finer mesh does little to rectify model error or uncertainty. The lack of quality comparison is dominated by modeling error, or uncertainty about the parameterization of the models. Worse yet, the models may be poor representations of the physics of interest. If the model is a poor representation, solving it very accurately is a genuinely wasteful exercise, at least if your goal is scientific in nature. If you're interested in colorful graphics and a marketing exercise, computer power is your friend, but don't confuse this with science (or at least good science). The worst case of this issue is a dominant model form error. This is the case where the model is simply wrong and incapable of reproducing the data. Today many examples exist where models we know are wrong are beaten to death with a supercomputer. This does little to advance science, which needs to work at producing a new model that ameliorates the deficiencies of the old one. Unfortunately our supercomputing programs are sapping the vitality from our modeling programs. Even worse, many people seem to view computing power as a remedy for model form error.

Equidistributed error, a balance of numerical and modeling error/uncertainty, is probably the best goal for modeling and simulation. This would be the case where the combination of modeling error and uncertainty with the numerical solution error has the smallest value. The standard exascale-driven model instead drives the numerical error to nearly zero without regard for the modeling error. This amounts to a small numerical error by fiat: proof by authority, proof by overwhelming power. Practically, this is foolhardy and technically indefensible. The issue is the inability to effectively hunt down modeling uncertainties under these conditions, which is hamstrung by the massive calculations. The most common practice is to assess the modeling uncertainty via some sort of sampling approach. This requires many calculations because of the high-dimensional nature of the problem. Sampling converges very slowly: the uncertainty in any sampled mean value is proportional to the standard deviation of the solution divided by the square root of the number of samples.
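A small sketch shows the arithmetic; the stand-in "simulation," the input distribution, and the sample counts are all illustrative assumptions of mine.

    import numpy as np

    rng = np.random.default_rng(42)

    def simulation(theta):
        """Stand-in stub for an expensive forward model with uncertain input theta."""
        return np.sin(theta) + 0.1 * theta ** 2

    for N in (10, 100, 1000, 10000):
        theta = rng.normal(1.0, 0.2, size=N)    # sampled uncertain parameter
        qoi = simulation(theta)                 # one cheap run per sample
        stderr = qoi.std(ddof=1) / np.sqrt(N)   # error of the mean ~ sigma / sqrt(N)
        print(f"N = {N:>5}: mean = {qoi.mean():.4f} +/- {stderr:.4f}")
    # Tenfold more samples buys only ~3.2x less uncertainty: a job for many
    # capacity-sized runs, not one capability-sized run.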

Thus a single calculation has an undefined variance. With a single massive calculation you have no knowledge of the uncertainty, either modeling or numerical (at least without some sort of embedded uncertainty methodology). Without assessing the uncertainty of the calculation you don't have a scientific or engineering activity. For driving down the inherent uncertainties, especially where the modeling uncertainty dominates, you are aided by smaller calculations that can be executed over and over so as to drive down the uncertainty. These calculations are always done on capacity computers and never on capability computers. In fact, if you try to use a capability computer to do one of these studies, you will be punished and kicked off. In other words, the rules of use enforced via the queuing policies are anti-scientific.

The uncertainty structure can be approached at a high level, but truly getting to the bottom of the issue requires some technical depth. For example, numerical error has many potential sources: discretization error (space, time, energy, ... whatever we approximate in), linear algebra error, nonlinear solver error, round-off error, and solution regularity and smoothness. Many classes of problems are not well posed and admit multiple physically valid solutions. In this case the whole concept of convergence under mesh refinement needs overhauling. Recently the concept of measure-valued (statistical) solutions has entered the fray. These are taxing on computer resources in the same manner as sampling approaches to uncertainty. Each of these sources requires a specific and focused approach to its estimation, along with the requisite fidelity.
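For the discretization-error piece specifically, the workhorse is Richardson extrapolation over systematically refined grids. This sketch assumes a constant refinement ratio and uses made-up quantity-of-interest values purely for illustration.

    import math

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed convergence rate p from three grids refined by ratio r."""
        return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

    def discretization_error(f_medium, f_fine, r, p):
        """Richardson estimate of the error remaining in the fine-grid value."""
        return (f_fine - f_medium) / (r ** p - 1.0)

    # Made-up quantity-of-interest values on coarse, medium, and fine grids.
    f_coarse, f_medium, f_fine = 0.9200, 0.9700, 0.9850
    p = observed_order(f_coarse, f_medium, f_fine, r=2.0)
    err = discretization_error(f_medium, f_fine, r=2.0, p=p)
    print(f"observed order p = {p:.2f}, fine-grid error estimate = {err:+.5f}")
    # If p differs wildly from the scheme's formal order, the solutions are not
    # yet in the asymptotic range and the error estimate should not be trusted.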

Modeling uncertainty is similarly complex and elaborate. The hardest aspect to evaluate is the form of the physical model. In cases where multiple reasonable models exist, the issue is evaluating the model’s (or sub-model’s) influence on solutions. Models often have adjustable parameters that are unknown or subject to calibration. Most commonly the impact of these parameters and their values are investigated via sampling solutions, an expensive prospect. Similarly there are modeling issues that are purely random, or statistical in nature. The solution to the problem is simply not determinate. Again sampling the solution of a range of parameters that define such randomness is a common approach. All this sampling is very expensive and very difficult to accurately compute. All of our focus on exascale does little to enable good outcomes.

The last area of error is the experimental or observational error and uncertainty. This is important in defining the relative quality of modeling, and the sense and sensibility of using massive computing resources to solve models. There are two standard components in the structure of experimental error: the error in measuring a quantity, and the variation in the actual measured quantity. In the first case there is some intrinsic uncertainty in being able to measure something with complete precision. The second part is the variation of the actual value in the experiment. Turbulence is the archetype of this sort of phenomenon. This uncertainty is intrinsically statistical, and the decomposition is essential to truly understand the nature of the world, and to put modeling in a proper and useful context.

The bottom line is that science and engineering run on evidence. To do things correctly you need to operate on an evidentiary basis. More often than not, high performance computing avoids this key scientific approach. Instead we see the basic decision-making operating via assumption. The assumption is that a bigger, more expensive calculation is always better and always serves the scientific interest. This view is as common as it is naive. There are many, perhaps most, cases where the greatest service to science is many smaller calculations. This hinges upon the overall structure of uncertainty in the simulations and whether it is dominated by approximation error, modeling form or lack of knowledge, or even the observational quality available. These matters are subtle and complex, and we all know that today neither subtle nor complex sells.

What can be asserted without evidence can also be dismissed without evidence.

― Christopher Hitchens

 

Facts and Reality are Optional

09 Wednesday Nov 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

There is nothing more frightful than ignorance in action.

― Johann Wolfgang von Goethe

Our political climate and capability as a nation to engage each other in meaningful, respectful conversations has plummeted to dismal lows. The best description of our 2016 political campaign is a "rolling dumpster fire." At the core of all of our dysfunction is a critical break from fact-based discussion and confronting objective reality, and the ascendancy of emotion and spin into the fact-vacuum and alternative reality. One might think that working at a scientific-engineering Laboratory would free me from this appalling trend, but the same dynamic is acutely felt there too. The elements undermining facts and reality in our public life are infesting my work. Many institutions are failing society and contributing to the slow-motion disaster we have seen unfolding. We need to face this issue head-on, rebuild our important institutions, and restore our functioning society, democracy and governance.

A big part of the public divorce from facts is the lack of respect and admiration for expertise. Just as the experts and the elite have become suspicious and suspect in the broader public sphere, the same thing has happened in the conduct of science. In many ways the undermining of expertise in science is even worse and more corrosive. Increasingly, there is no tolerance or space for the intrusion of expertise into the conduct of scientific or engineering work. The way this intolerance manifests itself is subtle and poisonous. Expertise is tolerated and welcomed as long as it is confirmatory and positive. Expertise is not allowed to offer strong criticism or the slightest rebuke, without regard for the shoddiness of the work. Experts who do offer anything that seems critical or negative can expect to be dismissed and never invited back to provide feedback again. Rather than welcome their service and attention, they are derided as troublemakers and malcontents. As a result we see in every corner of the scientific and technical World a steady intrusion of mediocrity and outright bullshit into our discourse.

Let me give an example of how this plays out; I've seen this happen personally and witnessed it play out in external reviews I observed. I've been brought in to review technical work for a large, important project. The expected outcome was a "rubber stamp" that said the work was excellent and offered no serious objections. Basically the management wanted me to sign off on the work as being awesome. Instead, I found a number of profound weaknesses in the work, and pointed these out along with some suggested corrective actions. These observations were dismissed and never addressed by the team conducting the work. It became perfectly clear that no such critical feedback was welcome and I wouldn't be invited back. Worse yet, I was punished for my trouble. I was sent a very clear and unequivocal message: "don't ever be critical of our work."

This personal example of dysfunction is simply the tip of the iceberg of an adversarial attitude toward critical feedback. We have external review committees visit and be treated the same way. Most seasoned reviewers know that such a review is not to be critical. It is a light touch, and everyone expects to get a glowing report. Any real issues are addressed on the down low, and even that is handled with kid gloves. If any reviewer has the audacity to raise an important issue, they can expect never to be invited back. The end result is the increasingly meaningless nature of any review, and the hollowing out of expertise's seal of approval. In the process experts and expertise become covered in the bullshit they peddle and are diminished in the end.

This dynamic in review is widespread and fuels the rise of bullshit in public life as well as in science and engineering. This propensity to bullshit is driven by a system that cannot deal with conflict or critical feedback. Moreover, the system is tilted toward a preconceived result: all is well and no changes are necessary. When this is not the case, one is confronted with engaging in conflict against these expectations, or simply getting in line with the bullshit. More and more the bullshit is winning the day. I've been personally punished for not toeing the line and making a stink. I've seen others punished too. It is very clear that failing to provide the desired bullshit result will be punished. The punishment of honesty means that bullshit is on the rise, as nothing exists to produce a drive toward quality and results. In the end bullshit is a lot less effort and rewarded a lot more highly.

At the end of the day we can see that the system starts to seriously erode integrity at every level. This is exactly what we are witnessing society-wide. Institutions across the spectrum of public and private life are losing their integrity. Such erosion of integrity in an environment that cannot deal with critical feedback produces a negative loop that feeds upon itself. Bullshit begets more bullshit until the whole thing collapses. We may have just witnessed what the collapse of our political system looks like. We had an election that was almost completely bullshit start to finish. We have elected a completely and utterly incompetent bullshit artist president. Donald Trump was completely unfit to hold office, but he is a consummate con man and bullshit artist. In a sense he is the emblem of the age and the perfect exemplar of our addiction to bullshit over substance.

I personally see myself as a person of substance and integrity. It is increasingly difficult to square who I am with the system I am embedded in. I am not a bullshitter; when I produce bullshit, people notice, and I am embarrassed. I am a straight shooter who is committed to progress and excellence. I have a broad set of expertise in science and engineering with a deep desire to contribute to meaningful things. This fundamental nature is increasingly at odds with how the World operates today. I feel a deep drive on the part of the workplace to squash everything positive I stand for. Instead of standing up for my basic nature as a scientific expert, a member of the elite if you will, I am expected to toe the line and produce bullshit. This bullshit is there to avoid dealing with real issues head-on and to avoid conflict. The very nature of things stands in opposition to progress and quality, which are threatened by the current milieu.

This gets to the heart of the discussion about what we are losing in this dynamic. We are losing progress society-wide. When we allow bullshit to creep into every judgment we make, progress is sacrificed. We bury immediate conflict for long-term decline and plant the seeds for far more deep, widespread and damaging conflict. Such horrible conflict may be unfolding right in front of us in the nature of the political process. By finding our problems and being critical we identify where progress can be made, where work can be done to make the World better. By bullshitting our way through things, the problems persist and fester and progress is sacrificed.

In the current environment where expertise is suspect, we see wrong beliefs persist without any real resistance. Falsehoods and myths stand shoulder to shoulder with truth and get treated as equivalent. In this atmosphere the sort of political movements founded completely on absolute bullshit can thrive. Make no mistake, Donald Trump is a master bullshitter who completely lacks all substance, yet in today's World he has complete viability. All of us are responsible because we have allowed bullshit to stand on even footing with fact. We have allowed the mechanisms and institutions standing in the way of such bullshit to be weakened and infested with bullshit too. It is time to stand up for truth, integrity and expertise as a shield against this assault on society.

Everything present in the political rise of Donald Trump is playing out in the dynamic at my workplace. It is not as extreme and its presence is subtle, but it is there. We have allowed bullshit to become ubiquitous and accepted. We turn away from calling bullshit out and demanding that real integrity be applied to our work. In the process we implicitly aid and abet the forces in society undermining progress toward a better future. The result of this acceptance of bullshit can be seen in the reduced production of innovation and breakthrough work, but most acutely in the decay of these institutions.

We have lost the ability to demand difficult decisions to solve seemingly intractable problems. When we do not operate on facts, we can turn away from difficulties and soothe ourselves with falsehoods. Instead of identifying problems and working toward progressive solutions, the problems are minimized and allowed to fester. This is true in the broader public sphere as well as in our scientific environment. I have been actively discouraged from pointing out problems or being critical. The result is stagnation and the steady persistence of problematic states. Instead of working to solve weaknesses, we are urged to accept them or explain them away. This will ultimately yield a catastrophic outcome. At the National level we may have just witnessed such a catastrophe play out in plain view.

In the workplace I feel the key question to ask is, "If we don't look for problems, how can we do important work?" Progress depends on finding weakness and attacking it. This is the principle that I focus on. Confidence comes from being sure you know where to look for problems and are up to the challenge of solving them. Empty positivity is a sign of weakness. Yet this is exactly what I am being asked to display at work. The resulting bullshit is a sign of weakness and a lack of confidence in being able to solve problems constructively. The need to be positive all the time and avoid criticism reflects weakness, lack of drive, and lack of conviction in the possibility of progress. We need to refresh our commitment to be constructively critical, in the knowledge and belief that we are equal to the task of making the World better. This means stamping out bullshit wherever we see it. There is a lot to do, because today we are drowning in it.

With the benefit of time I have a couple of projections for the future:

  1. The GOP and President Trump will do little or nothing to help the people that voted for them. The key to our democracy is whether they will take any responsibility. If history is our guide, they will deflect the blame onto minorities, LGBT people, women, and everyone but themselves. Will the people fall for the same con as they did when they elected these charlatans?

  2. Things will be very dark and dismal for an extended time, and we will spiral toward violence. This may be violence directed by the new ruling class against "enemies of the state"; it also may be violence directed toward the ruling class. Mark my words: blood will be shed by Americans at the hands of other Americans.
  3. The only way out of this darkness is to work steadfastly to repair our institutions and figure out how to solve our problems in a collective manner for the benefit of all. I work for one of these institutions and we should be taking a long hard look at our role in the great unraveling we are in the midst of.

Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passion, they cannot alter the state of facts and evidence.

― John Adams

Footnote: I started writing this on Monday, and like almost everyone I thought the election would turn out differently. It was a genuinely shocking result, one that makes this topic all the more timely and amplifies the importance of this entire discussion immensely. The prospect of a President Trump fills me with dread because of the very issues discussed here. Trump exists in an alternative reality, and his lack of presence in objective reality will have real consequences. He is a reality TV star and professional buffoon. He is the most stunningly unqualified person ever to hold that office. I fear what is coming. I also feel the need to be resolved to pick up the pieces from the disaster that will likely unfold. We need to rebuild our institutions and reinstitute knowledge/facts/reality-based governance to guide society forward.
