
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Some Improvements Aren’t Obviously Better

26 Wednesday Nov 2014

Posted by Bill Rider in Uncategorized

Better never means better for everyone… It always means worse, for some.

― Margaret Atwood

A couple of days ago I went through the HLL flux function, a Riemann solver, and a small improvement (https://williamjrider.wordpress.com/2014/11/24/the-power-of-simple-questions-the-hll-flux/). Today I’ll report on another change that doesn’t appear to improve much at all, but it’s worth considering. This is work in progress, and right now it’s not yielding anything too exciting.

Never confuse movement with action.

― Ernest Hemingway

Again, I’ll focus on the HLL flux and attack the issue of the final flux function’s sign preservation. The first thing to address is the propriety of the entire idea. There is a nice (short) paper on a closely related topic [Linde]. The bottom line is that the initial data may have a certain sign convention, and the dynamics induced in the Riemann problem can change that. So before deciding how to apply the Riemann solver, and whether that application is appropriate, one needs to understand the impact of its internal structure.

The problem I noted is that under some conditions the dissipation in the flux can change the sign of the computed flux (when the velocity is much smaller than the sound speed, and the change in the equations’ variables is large enough). If you look at the fluxes in the mass or energy equation, they are the product of the velocity and a positive-definite quantity. The mass flux is the velocity times the density, \rho u, and the energy flux is u\left(\rho E + p\right), where \rho, E and p are the density, total energy and pressure. If the velocity always has one sign in the Riemann solution, the resulting flux inherits that sign convention.

The HLL flux is F(U) = \frac{A+D}{\Delta}; A = S_R F_L - S_L F_R; D = S_R S_L \left( U_R - U_L \right); \Delta = S_R - S_L, where S_L \le 0 and S_R \ge 0. If |S_{L,R}| \gg |u| we can have problems. This is particularly true if U_R - U_L is large.
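
A minimal Python sketch (my own illustration, not from the post; the near-stagnant state below is hypothetical) shows how the dissipation term D can flip the sign of the mass flux \rho u when the bounding speeds dwarf the velocity:

```python
def hll_flux(FL, FR, UL, UR, SL, SR):
    """HLL flux: F = (SR*FL - SL*FR + SR*SL*(UR - UL)) / (SR - SL)."""
    return (SR * FL - SL * FR + SR * SL * (UR - UL)) / (SR - SL)

# Nearly stagnant flow (|u| << c) with a large density jump.
u = 0.01                   # small positive velocity on both sides
rho_L, rho_R = 1.0, 6.0    # large jump in the conserved variable
c = 1.0                    # sound speed, taken equal on both sides for simplicity
SL, SR = u - c, u + c      # bounding wave speeds, |S| >> |u|

# The mass flux rho*u is positive on both sides, yet HLL returns a negative value.
F_mass = hll_flux(rho_L * u, rho_R * u, rho_L, rho_R, SL, SR)
print(F_mass)              # negative: the dissipation term flipped the sign
```

Here the S_R S_L (U_R - U_L) term contributes roughly -5 against a physical flux of order 0.01, exactly the imbalance described above.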

Without deviation from the norm, progress is not possible.

― Frank Zappa

To attack the issue of whether the sign change is a problem and might be unphysical, I look at the dynamics within the Riemann solution. This can be easily computed using the linearized solution to the Lagrangian Riemann problem, u_* = \frac{ W_L u_L+ W_R u_R - \left( p_R - p_L \right) }{W_L + W_R}, where W_L=\rho_L c_L and W_R =\rho_R c_R are the Lagrangian wave speeds. If u_L, u_R and u_* all have the same sign, the flux will have that sign. Given this background I test the HLL flux for compatibility with the established sign convention. If the sign convention is violated I do one of a couple of things: set F\left(U\right) = 0, or set F\left(U\right) = F_L if u_*>0 and F\left(U\right) = F_R if u_* < 0. Then I test it.
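
As a sketch (my own code, with hypothetical function names and sample states), the sign test and the two fallback fluxes described above might look like:

```python
def ustar_linearized(rhoL, pL, uL, cL, rhoR, pR, uR, cR):
    """Linearized interface velocity of the Lagrangian Riemann problem:
    u* = (WL*uL + WR*uR - (pR - pL)) / (WL + WR), with W = rho*c."""
    WL, WR = rhoL * cL, rhoR * cR
    return (WL * uL + WR * uR - (pR - pL)) / (WL + WR)

def enforce_sign(F_hll, FL, FR, uL, uR, ustar, mode="upwind"):
    """If uL, uR and u* share one sign, the flux must share it too.
    On a violation, either zero the flux or fall back to the one-sided flux."""
    same_sign = (uL > 0 and uR > 0 and ustar > 0) or \
                (uL < 0 and uR < 0 and ustar < 0)
    if not same_sign or F_hll * ustar >= 0:
        return F_hll                   # convention satisfied, or not applicable
    if mode == "zero":
        return 0.0
    return FL if ustar > 0 else FR     # one-sided upwind fallback

# Hypothetical near-stagnant state: u* stays positive, so a negative
# HLL flux (here -2.5) violates the convention and gets replaced.
us = ustar_linearized(1.0, 1.0, 0.01, 1.0, 6.0, 1.0, 0.01, 1.0)
print(enforce_sign(-2.5, 0.01, 0.06, 0.01, 0.01, us))          # 0.01 (F_L)
print(enforce_sign(-2.5, 0.01, 0.06, 0.01, 0.01, us, "zero"))  # 0.0
```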

Restlessness is discontent — and discontent is the first necessity of progress. Show me a thoroughly satisfied man — and I will show you a failure.

― Thomas A. Edison

The issue definitely shows up a lot in computations. I set up a challenging problem where the density is constant in the initial data, and there is a million-to-one pressure jump. This produces a shock and contact discontinuity moving to the right. The density jump is nearly six because the shock is so strong (\gamma = 1.4). The HLL tends to smear the contact very strongly, and this is where the flux sign convention is violated.
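
The “nearly six” follows from the Rankine-Hugoniot jump conditions; a quick check (standard gas-dynamics relation, my own sketch) confirms the strong-shock limit for \gamma = 1.4:

```python
# Density ratio across a shock from the Rankine-Hugoniot relations:
# rho2/rho1 = (gamma + 1) M^2 / ((gamma - 1) M^2 + 2), M the shock Mach number.
gamma = 1.4

def density_ratio(M, gamma=1.4):
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

for M in (2.0, 5.0, 20.0):
    print(M, density_ratio(M))         # approaches the strong-shock limit

limit = (gamma + 1.0) / (gamma - 1.0)  # tends to 6 for gamma = 1.4
print(limit)
```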

We can see that an exact Riemann solver gives a sharp contact (the structure on the left side of the density peak). We also show the energy profile, which is also impacted by the idiosyncrasy discussed today.

With HLL we get a smeared-out contact, especially to the left. None of the changes to the HLL flux discussed above really make much of a difference either.

But knowing that things could be worse should not stop us from trying to make them better.

― Sheryl Sandberg

To do better is better than to be perfect.

― Toba Beta

[Linde] Linde, Timur, and Philip Roe. “On a mistaken notion of ‘proper upwinding’.” Journal of Computational Physics 142, no. 2 (1998): 611-614.

Simple Gets Complicated Fast

25 Tuesday Nov 2014

Posted by Bill Rider in Uncategorized

There’s no limit to how complicated things can get, on account of one thing always leading to another.

― E.B. White

Yesterday I looked at a simple question regarding Riemann solvers. The conclusion of this brief investigation was that more examination is warranted. A large part of the impetus for the question comes from a recent research emphasis on positivity-preserving discretizations. This means that quantities that are physically positive, like density and energy, are guaranteed to be positive in numerical solutions.

These efforts have always focused on the variables being solved for. The fluxes used to build the solution procedure have been examined only insofar as they produce physically admissible solutions. I noticed that in at least one case, the momentum flux, the flux itself should be positive definite while its numerical approximation might not be. I’ll provide a little background on the thought process that got me to the question. The point I’ll make is much more exotic and esoteric in nature.

Back when I was in Los Alamos I did work on the nature of the truncation error using a technique called “modified equations”. This technique uses the venerated Taylor series expansion to describe the order of the approximation error in a numerical method. Unlike many analysis methods, which are limited to linear equations, the modified equation method gives results for nonlinear equations. One of the things I noticed in the process of analysis explains a common problem with upwind differencing: entropy-violating solutions to expansion waves, often called rarefaction shocks.

For normal fluid dynamics shocks occur upon compression, and rarefactions occur upon expansion. If an expanding flow has a shock wave, it is unphysical. In cases where the wave speed used in upwinding goes through zero, the dissipation in upwinding goes to zero, since the dissipation is directly proportional to the wave speed. This happens at “sonic points” where the velocity is equal in magnitude to the sound speed, u \pm c = 0.

We can see what happens using the modified equation approach for upwinding. Take a general discretization for upwind differencing in conservation form, \Delta x \partial_x f(u) \approx f_{j+1/2} - f_{j-1/2}. We use the upwind approximation f_{j+1/2} = \frac{1}{2} \left( f_j + f_{j+1} \right) - \frac{1}{2} \left| \partial_u f\right|\left( u_{j+1}-u_j \right). We plug all of this into the equations and expand u in a Taylor series, u(x+j\Delta x) = u(x) + j\Delta x \partial_x u + \frac{1}{2} (j\Delta x)^2 \partial_{xx}u +\ldots.
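
For a concrete instance (my own sketch, using Burgers’ equation f(u) = u^2/2 as the model flux, with the wave speed \partial_u f = u evaluated at the interface average):

```python
def upwind_flux(uL, uR):
    """First-order upwind flux in conservation form for Burgers' equation:
    f_{j+1/2} = (f(uL) + f(uR))/2 - |df/du| (uR - uL)/2."""
    f = lambda u: 0.5 * u * u
    a = 0.5 * (uL + uR)       # wave speed df/du = u at the interface average
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a) * (uR - uL)

# Away from sonic points the formula picks the upwind side...
print(upwind_flux(2.0, 1.0))   # 2.0 = f(uL), rightward wind
# ...but at the sonic point of a transonic rarefaction the wave speed, and with
# it the dissipation, vanishes, leaving the bare entropy-violating central flux.
print(upwind_flux(-1.0, 1.0))  # 0.5, where the entropy solution needs f(0) = 0
```

The second evaluation is exactly the rarefaction-shock mechanism: zero added dissipation where the physical solution demands an expansion fan.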

When we plunge into the math and simplify, we get some really interesting results: \partial_x f(u) \approx \partial_x\left[ f(u) +\frac{\Delta x}{2}\left| \partial_u f\right| \partial_x u +\frac{(\Delta x)^2}{6}\left( \partial_{uu}f (\partial_x u)^2 + \partial_u f \partial_{xx} u\right)\right]. Here is the key to rarefaction shocks: when the wave speed \partial_u f is near zero, the dissipation is actually governed by the higher-order term \partial_{uu} f (\partial_x u)^2. It is also notable that this term aids the upwind dissipation for compressions, and thus shock waves. Anti-dissipation in a shock wave would be utterly catastrophic.

Usually fluids are convex, \partial_{uu} f>0, thus when \partial_x u>0 the term in question is anti-dissipative. This is intrinsically unphysical. The dissipation from upwinding, the lower-order term proportional to \Delta x, is not large enough to offset the anti-dissipation. This produces the conditions needed for a rarefaction shock wave. I’ve worried that these effects can consistently cause problems in solutions and undermine the entropy-satisfying solutions. These terms were not considered in the basic theory of upwinding introduced by Harten, Hyman and Lax using modified equation analysis [HHL]. What makes matters worse is that the anti-dissipative terms will dominate in expansions when we take the upwind approximation to second order.

The most complicated skill is to be simple.

― Dejan Stojanovic

[HHL] Harten, Amiram, James M. Hyman, Peter D. Lax, and Barbara Keyfitz. “On finite-difference approximations and entropy conditions for shocks.” Communications on Pure and Applied Mathematics 29, no. 3 (1976): 297-322.

The Power of Simple Questions: The HLL Flux

24 Monday Nov 2014

Posted by Bill Rider in Uncategorized

Questions are infinitely superior to answers.

― Dan Sullivan

The quality of research hinges upon questions and their quality. Surprisingly simple questions can lead to discovery. I’m not claiming discovery here, but I’ll start with what seems like a simple question and attack it.

There are some questions that shouldn’t be asked until a person is mature enough to appreciate the answers.

― Anne Bishop

Should the flux from a Riemann solver obey certain sign-preserving qualities? By this I mean that under some conditions the numerical flux used to integrate hyperbolic conservation laws should obey a sign convention. I decided that it was a reasonable question, but it needed to be bounded. I found a good starting point.

What’s a Riemann solver? If you know already, go ahead and skip to the next paragraph. If you don’t, here is a simple explanation: if you bring two slabs of material together at different conditions and then let them interact, the resulting solution can be idealized as a Riemann solution. For example, if I have two blocks of gas separated by a thin diaphragm and then remove it, the resulting structures are described by the Riemann solution. This is also known as a “shock tube”. Godunov showed us how to use the Riemann solution to construct a numerical method [Godunov].

For the Euler equations the momentum flux \rho u^2 + p should always be positive definite (at least for gas dynamics). The density \rho and the pressure p are both positive. The remaining term is quadratic, thus the entire expression is positive. I reasoned that a negative flux would be unphysical. I worried that dissipative terms added in the computation of the flux could make it negative.

Doing this generally is a challenge, but there is one Riemann solver that is simple and has a compact closed form, the HLL (for Harten-Lax-van Leer) flux [HLL, HLLE]. This flux function is incredibly important because of its robustness and use as a “go to” flux for difficult problems [Quirk]. Its simplicity makes it a cinch to implement. Its form is also so simple that it almost begs to be analyzed.

The basic form is F = A + D, where A (S_R-S_L) = S_R F\left( U_L \right) - S_L F \left( U_R \right), (S_R -S_L) D = S_L S_R \left( U_R - U_L \right), S_L is a negative-definite and S_R a positive-definite bounding wave speed. The subscripts L and R refer to the states to the left and right of the interface where the Riemann solution is sought. For the Euler equations these are always the acoustic eigenvalues associated with shocks and rarefactions in the solution. We can choose these so that S_L = \min\left(0, u-c\right) and S_R = \max\left(0, u+c\right) are as large as possible for the initial data. If all the wave speeds are moving to the left or right, the HLL formula simplifies quite readily to “proper” upwinding, which is the selection of F\left(U_L\right) for rightward waves, and F\left(U_R\right) for leftward. Care must be taken to assure that no internal waves are created in the Riemann solution that change the directionality of the waves. If they are, these changes must be incorporated in the estimates for S_L and S_R.
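
In code the whole thing is only a few lines; a sketch (function and argument names are mine) using the clamped speed estimates above, which also makes the reduction to one-sided upwinding visible:

```python
def hll(FL, FR, UL, UR, uL, cL, uR, cR):
    """HLL flux with the bounding-speed estimates from the text:
    SL = min(0, uL - cL), SR = max(0, uR + cR)."""
    SL = min(0.0, uL - cL)
    SR = max(0.0, uR + cR)
    # Clamping to zero builds the upwind limits into the single formula:
    # SL = 0 reduces it to FL (all waves rightward), SR = 0 to FR.
    return (SR * FL - SL * FR + SR * SL * (UR - UL)) / (SR - SL)

print(hll(1.0, 2.0, 1.0, 2.0,  2.0, 1.0,  2.0, 1.0))  # 1.0 = F(U_L), supersonic rightward
print(hll(1.0, 2.0, 1.0, 2.0, -2.0, 1.0, -2.0, 1.0))  # 2.0 = F(U_R), supersonic leftward
```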

If we have a flux that should be positive definite, it isn’t too hard to see where we will have problems. If the combination of the wave speed magnitudes and the jump in the variable, U_R-U_L, is too large, it may overwhelm the fluxes, resulting in a negative value. For the case of the momentum flux this will happen in strong rarefactions where the velocities are opposite in sign. There is a common problem that tests this, known colloquially as the “1-1-1” problem. Here the density and energy are set to one and the velocities are equal and opposite. This creates a strong rarefaction that nearly creates a vacuum.
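
The initial data are trivial to set up; a sketch (assuming the “1-1-1” name means unit density and unit total energy with velocities \pm 1, and a \gamma-law gas; the grid size is arbitrary):

```python
import numpy as np

gamma = 1.4
N = 200
x = np.linspace(0.0, 1.0, N)

rho = np.ones(N)                            # unit density everywhere
u = np.where(x < 0.5, -1.0, 1.0)            # equal and opposite velocities
E = np.ones(N)                              # unit total energy per unit mass
p = (gamma - 1.0) * rho * (E - 0.5 * u**2)  # pressure from the gamma law

# The momentum flux rho*u^2 + p is positive definite in the initial data,
# even at the receding interface at the center of the domain.
print((rho * u**2 + p).min())
```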

With the HLL flux the computed momentum flux is negative at the center of the domain. I believe this has some negative impacts on the solution. This manifests itself as the “heating” at the center of the expansion in the energy solution, and the “soft” profile at the center of the velocity profile. Both are indicative of a significantly too-diffuse solution.

To counter this potentially “unphysical” response I detect the negative momentum flux and set the flux to zero, overwriting the negative value. This changes the solution significantly, most notably by removing the overheating from the center of the region, but leaving behind a small oscillation. The velocity field is now flat near the center of the domain, while the changes in the density and pressure are subtler. With the original flux the density is slightly depressed near the center; with the modification the density is slightly elevated.
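
The detect-and-overwrite step itself is a one-liner per interface; a sketch in array form (names and sample values are mine):

```python
import numpy as np

def clip_momentum_flux(F_mom):
    """Overwrite negative momentum fluxes with zero, since the physical
    flux rho*u^2 + p is positive definite for gas dynamics."""
    F_mom = np.asarray(F_mom, dtype=float)
    negative = F_mom < 0.0
    return np.where(negative, 0.0, F_mom), int(negative.sum())

# Hypothetical interface fluxes around the center of the expansion.
F, n_clipped = clip_momentum_flux([1.2, 0.3, -0.05, 0.3, 1.2])
print(F, n_clipped)   # the central negative value is zeroed; one interface clipped
```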

The scientist is not a person who gives the right answers, he’s one who asks the right questions.

― Claude Lévi-Strauss

I view these results as purely preliminary, but promising. They need much more investigation to sort out their validity, and the appropriate means of modifying the flux. I believe that the present modification still yields an entropy-satisfying scheme. A couple of questions are worth thinking about moving forward. There are other cases where the sign of the HLL flux is demonstrably wrong, but where the flux itself is not signed in a definite way. This certainly happens with contact discontinuities, but methods exist to modify HLL to preserve contacts better [HLLEM]. Does this problem go beyond the case of contacts? Higher-order truncation error terms produce both dissipative and anti-dissipative effects. How do these effects influence the solution? In artificial viscosity methods, the dissipation is turned off in expansions. How would this type of approach work with Riemann solvers?

Judge a man by his questions rather than by his answers.

― Voltaire

[Godunov] Godunov, Sergei Konstantinovich. “A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics.” Matematicheskii Sbornik 89, no. 3 (1959): 271-306.

[HLL] Harten, Amiram, Peter D. Lax, and Bram van Leer. “On upstream differencing and Godunov-type schemes for hyperbolic conservation laws.” SIAM Review 25, no. 1 (1983): 35-61.

[HLLE] Einfeldt, Bernd. “On Godunov-type methods for gas dynamics.” SIAM Journal on Numerical Analysis 25, no. 2 (1988): 294-318.

[HLLEM] Einfeldt, Bernd, Claus-Dieter Munz, Philip L. Roe, and Björn Sjögreen. “On Godunov-type methods near low densities.” Journal of Computational Physics 92, no. 2 (1991): 273-295.

[Quirk] Quirk, James J. “A contribution to the great Riemann solver debate.” International Journal for Numerical Methods in Fluids 18, no. 6 (1994): 555-574.

Necessary, Sufficient and Balanced

23 Sunday Nov 2014

Posted by Bill Rider in Uncategorized

Computers get better faster than anything else ever. A child’s PlayStation today is more powerful than a military supercomputer from 1996.

— Erik Brynjolfsson

For supercomputing to provide the value it promises for simulating phenomena, the methods in the codes must be convergent. The metric of weak scaling is utterly predicated on this being true. Despite its intrinsic importance to the actual relevance of high performance computing, relatively little effort has been applied to making sure convergence is being achieved by codes. The work on supercomputing simply assumes that it happens, but does little to assure it. Actual convergence is largely an afterthought and receives little attention or work.

Don’t confuse symmetry with balance.

― Tom Robbins

Thus the necessary and sufficient conditions are basically ignored. This is one of the simplest examples of the lack of balance I experience every day. In modern computational science the belief that faster supercomputers are better and valuable has become closer to an article of religious faith than a well-crafted scientific endeavor. The sort of balanced, well-rounded efforts that brought scientific computing to maturity have been sacrificed for an orgy of self-importance. China has the world’s fastest computer, and reflexively we think there is a problem.

I am not saying it is utterly useless. It can play video games.

—Unnamed Chinese Academy of Sciences Professor

At least the Chinese have someone who is smart enough to come to an honest conclusion about their computer! It could be a problem, or it might not be a problem at all. Everything that determines whether it’s a problem has little or nothing to do with the actual computer. The important thing is whether we, or they, do the things necessary to assure that the computer is useful.

There is nothing quite so useless, as doing with great efficiency, something that should not be done at all.

― Peter F. Drucker

I know we are doing a generally terrible job of it. I worry a lot more about how much the Chinese are investing in the science going into the computer relative to us. The quote above probably means that they understand how bullshit the “fastest supercomputer” metric actually is. The signs are that they are taking action to fix this. This means much more than the actual computer.

Once upon a time applied mathematics was used to support the practical and effective use of computing. From World War II to the early 1990s, applied math helped make scientific computing effective. It planted the seeds of the faith in faster computers we take for granted today. Over the past twenty or so years this has waned, and applied math’s impact has shrunk. More and more, computing simply works on autopilot to produce more computing power without doing what is important for utilizing this power effectively. Applied math is one of the fields necessary to do this.

Computer science is one of the worst things that ever happened to either computers or to science.

— Neil Gershenfeld

While necessary, applied math isn’t sufficient. Sufficiency is achieved when the elements are applied together with science. The science of computing cannot remain fixed, because computing is changing the physical scales we can access and the fundamental nature of the questions we ask. The codes of twenty years ago can’t simply be used in the same way. It is much more than rewriting them or just refining a mesh. The physics in the codes needs to change to reflect the differences.

a huge simulation of the ‘exact’ equations…may be no more enlightening than the experiments that led to those equations…Solving the equations leads to a deeper understanding of the model itself. Solving is not the same as simulating.

—Philip Holmes

For example, we ought to be getting ensembles of calculations from different initial conditions instead of single well-posed initial value problems. This is just like experiments; no two are really identical, and computations should be the same. In some cases this can lead to huge systematic changes in solutions. Reality produces vastly different outcomes from ostensibly identical initial conditions. This makes people really uncomfortable, but science and simulations could provide immense insight into it. Our current attitudes are holding us back from realizing this.

Single calculations will never be “the right answer” for hard problems.

—Tim Trucano

Right now this makes scientists immensely uncomfortable because the necessary science isn’t in place. Developing understanding of this physically and mathematically is needed for confidence. It is also needed to get the most out of the computers we are buying. Instead we simply value the mere existence of these computers and demonstrate their utility through a sequence of computing stunts of virtually no scientific value.

To me, this is not an information age. It’s an age of networked intelligence, it’s an age of vast promise.

—Don Tapscott

Beyond the science, the whole basis of computing is still grounded in models of computing from twenty or thirty years ago (i.e., mainframes). While computing has undergone a massive transformation and become a transformational social technology, scientific computing has remained stuck in the past. Science is only beginning to touch the possibilities of computing. In many ways the high performance computing world is even further behind than much of the rest of the scientific world in utilizing the potential of computing as it exists today.

All these computers, all these handhelds, all these cell phones, all these laptops, all these servers — what we’re getting out of all these connections is we’re getting one machine. … We’re constructing a single, global machine.

—Kevin Kelly

A chief culprit is the combination of the industry and its government partners, who have remained tied to the same stale model for two or three decades. The core cost has been intellectual vitality. The implicit assumption of convergence and the lack of deeper intellectual investment in new ideas have conspired to strand the community in the past. The annual Supercomputing conference is a monument to this self-imposed mediocrity. It’s a trade show through and through, and in terms of technical content a truly terrible meeting (I remember pissing the Livermore CTO off when pointing this out).

You can’t solve a problem with the management of technology with more technology.

—Bill Joy

The opportunities provided by the modern world of computing are immense. Scientific computing should be at the cutting edge, and instead remains stranded in the past. The reason is the lack of intellectual vitality that a balanced effort would provide. The starting point was a failure to attend to the necessary and sufficient efforts to assure success. Too much effort is put toward making “big iron” function, and too little effort in making it useful.

We’ve got 21st century technology and speed colliding head-on with 20th and 19th century institutions, rules and cultures.

— Amory Lovins

Math’s role in computational science?

22 Saturday Nov 2014

Posted by Bill Rider in Uncategorized

 Mathematics is the art of explanation.

― Paul Lockhart

In projects I work on, mathematics plays a key role. Too often the math doesn’t provide nearly enough impact because it can’t handle the complexity of applications. One of the big issues is the proper role of math in computational projects. The more applied the project gets, the less capacity math has to impact it. Things simply shouldn’t be this way. Math should always be able to complement a project.

This raises a set of questions to consider. For example, what sort of proofs are useful? My contention is that proofs need to show explanatory or constructive power. What do I mean by this?

 

[…] provability is a weaker notion than truth

― Douglas R. Hofstadter

A proof that is explanatory gives conditions that describe the results achieved in computation. Convergence rates observed in computations are often well described by mathematical theory. When a code gives results of a certain convergence rate, a mathematical proof that explains why is welcome and beneficial. It is even better if it gives conditions where things break down, or get better. The key is we see something in actual computations, and math provides a structured, logical and defensible explanation of what we see.

How is it that there are so many minds that are incapable of understanding mathematics? … the skeleton of our understanding, …

― Henri Poincaré

Constructive power is similar, but even better. Here the mathematics gives us the power to build new methods, improved algorithms or better performance. It provides concrete direction to the code and the capacity to make well-structured decisions. With theory behind us we can define methods that can successfully improve our solutions. With mathematics behind us, codes can make huge strides forward. Without mathematics it is often a matter of trial and error.

Too often mathematics is done that simply assumes that others are “smart” enough to squeeze utility from the work. A darker interpretation of this attitude is that people don’t care if the work is useful, or used. I can’t tolerate that attitude. This isn’t to say that math without application shouldn’t be done, but rather that it shouldn’t seek support from computational science.

Robust, physical, flexible, accurate and efficient

21 Friday Nov 2014

Posted by Bill Rider in Uncategorized

In working on an algorithm (or a code) we are well advised to think carefully about requirements and priorities. It is my experience that these can be stated clearly as a set of adjectives about the method or code that form the title of this post. Moreover the order of these adjectives forms the order of the priorities from the users of the code. The priorities of those funding code development are quite often the direct opposite!

There is nothing quite so useless as doing with great efficiency something that should not be done at all.

– Peter Drucker

None of these priorities can be ignored. For example, if the efficiency becomes too poor, the code won’t be used because time is money. A code that is too inaccurate won’t be used no matter how robust it is (these go together, with accurate and robust being a sort of “holy grail”).

 Extraordinary claims require extraordinary evidence.

― Carl Sagan

The problem I’d like to bring to your attention is that the people handing out money to do the work seem to invert these priorities. This creates a distinct problem with making the work useful and impactful. The over-riding concern is the high-performance computing imperative, which is encoded in the call for efficiency. There is an operating assumption that all of the other characteristics are well in hand, and it’s just a matter of getting a computer (big and) fast enough to crush our problems out of sight and out of mind. Ironically, the meeting devoted to this dim-sighted worldview is this week, SC14. Thankfully, I’m not there.

All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.

― Douglas Adams

Robust. A robust code runs to completion. Robustness in both its most refined and its crudest sense is stability. The refined sense of robustness is the numerical stability that is so keenly important, but it is so much more. A robust code gives an answer come hell or high water, even if that answer is complete crap. Nothing upsets your users more than no answer; a bad answer is better than none at all. Making a code robust is hard work and difficult, especially if you have morals and standards. It is an imperative.

Physical. Getting physical answers is often thought of as being equal to robustness, and it should be. It isn’t, so this needs to be a separate category. In a sense physical answers are a better source of robustness, that is, an upgrade. An example is that the density of a material must remain positive definite, or velocities remain sub-luminal, and things like that. A deeper take on physicality involves quantities staying inside imposed bounds, or satisfaction of the second law of thermodynamics.

Robustness and physicality of solutions with a code are ideally defined by the basic properties of an algorithm. Upwind differencing is a good example where this happens (ideally). The reality is that achieving these goals usually takes a lot of tests and checks with corrective actions where the robustness or physicality is risked or violated. This makes for ugly algorithms and ugly codes. It makes purists very unhappy. Making them happy is the place where the engineering becomes science (or math).

A computational study is unlikely to lead to real scientific progress unless the software environment is convenient enough to encourage one to vary parameters, modify the problem, play around.

– Nick Trefethen

Flexible. You can write your code to solve one problem, if the problem is important enough, and you solve it well enough it might be a success. If your code can solve a lot of problems robustly (and physically) you might be even more successful. A code that is a jack-of-all-trades is usually a huge success. This requires a lot of thought, experience and effort to pull off. This usually gets into issues of meshing, material modeling, input, output, and generality in initial and boundary conditions. Usually the code gets messy in the process.

All that it is reasonable to ask for in a scientific calculation is stability, not accuracy.

– Nick Trefethen

Accuracy. This is where things get interesting. For the most part accuracy is in conflict with every single characteristic we discuss. It is a huge challenge to both ends of the spectrum, robustness and efficiency. Accuracy makes codes more prone to failure (loss of robustness) and the failures have more modes. Accuracy is expensive and makes codes run longer. It can either cause more communication, or more complexity (or both). It makes numerical linear algebra harder and far more fragile. It makes boundary conditions and initial conditions harder, and encourages much more simplification of everything else. If it can be achieved, accuracy is fantastic, but it is still a huge challenge to pull off.

The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

– Nick Trefethen

Efficiency. The code runs fast and uses the computers well. This is always hard to do; a beautiful piece of code that clearly describes an algorithm turns into a giant plate of spaghetti, but runs like the wind. To get performance you end up throwing out that wonderful inheritance hierarchy you were so proud of. To get performance you get rid of all those options that you put into the code. This requirement is also in conflict with everything else. It is also the focus of the funding agencies. Almost no one is thinking productively about how all of this (doesn’t) fit together. We just assume that faster supercomputers are awesome and better.

Q.E.D.

God help us.

 

The Seven Deadly Sins of Secrecy

20 Thursday Nov 2014

Posted by Bill Rider in Uncategorized

A secret’s worth depends on the people from whom it must be kept.

― Carlos Ruiz Zafón

Secrecy is a necessary element in the conduct of national security. Some information is too dangerous, to too many, to be freely shared. For secrecy to serve its purpose it needs to be effective. The mis-classification of information is a threat because it undermines the cases where classification is necessary and appropriate.

Secrecy also carries a massive cost that should be factored into the consideration. When something is secret it isn't subject to the sort of review and consideration that public information is. As a result, the quality of the related work suffers, or achieving quality is more expensive. The United States kept a large number of documents classified for decades, hidden from view despite containing no classified information and having great historical value. Other documents were classified solely because of their embarrassing nature to the government. Some documents were both.

Man is not what he thinks he is, he is what he hides.

― André Malraux

It isn't a secret that the United States has engaged in a veritable orgy of classification since 9/11. What is less well known is the massive implied classification through other data categories such as "official use only" (OUO). This designation is largely unregulated and, as such, quite prone to abuse.

The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….

― Noam Chomsky

OUO is also used to manage things like export control. Despite its importance, the law governing export control is terrible. It is poorly written, poorly managed, and its application is driven primarily by fear rather than rational thought. It is too important to treat this way, and I believe its implementation is actually a threat to our security. It might be useful to describe explicitly the ways that secrecy and classification are abused. Some of these sins are a nuisance, and some of them border on the unethical, immoral or illegal.

The ones with no imagination are always the quickest to justify themselves.

― Haruki Murakami

Legitimacy (pride). Some information seeks legitimacy through being declared classified in some manner. For example, shoddy work can be implied to be legitimate through its classification.

It is in the nature of the human being to seek a justification for his actions.

― Aleksandr Solzhenitsyn

Importance (envy). Some information or the work related to the information is implied to be more important because it is classified. I see this a lot. It is a way of making the case that your work is so important that it needs to be protected.

I have grown to love secrecy. It seems to be the one thing that can make modern life mysterious or marvelous to us. The commonest thing is delightful if only one hides it.

― Oscar Wilde

Hiding (greed). Some information is deliberately hidden through classification; the words to remember here are "need to know". This is used to hide things that people don't want too many eyes on. When I encounter this, it disgusts me.

It is almost always the cover-up rather than the event that causes trouble.

–Howard Baker

Cover-up (lust). Fortunately, I have not seen this form in person, but it often involves things that are embarrassing or illegal. You’ve seen this in the news, I’m sure it happens a lot more than we think.

 Withholding information is the essence of tyranny. Control of the flow of information is the tool of the dictatorship.

― Bruce Coville

Control (wrath). This is a common use of classification. It is a valid reason in many cases, but it is also misused when it is applied to keep communication from happening. Another place where the words “need to know” appear.

Safety (gluttony). This is the "cover your ass" version of classification. Basically, you're not sure, so you declare it classified because it's the "safe" thing to do. At some level there isn't anything wrong with this if you rectify the situation promptly. In many areas there is clear guidance that allows a better final determination to be made. In other areas, like OUO, there is no clear guidance, and the safety sin reigns through the fear associated with awfully written laws.

The awareness of the ambiguity of one’s highest achievements – as well as one’s deepest failures – is a definite symptom of maturity.

― Paul Tillich

Ambiguity (sloth). This goes to the heart of the management of classified documents. In the example of export control, we have allowed ambiguity and the resultant fear to rule for years. There is no standard and no clear guidance. As a result, the application of classification is uneven and ultimately self-conflicting.

Do we ever really get a blank page?

19 Wednesday Nov 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

A blank canvas…has unlimited possibilities.

― Stephanie Perkins

Again something at work has inspired me to write. The fresh start (a blank page, an empty canvas, an original idea) is a persistent theme among authors, artists and scientists. I think it's worth considering how truly "fresh" these really are. This idea came up during a technical planning meeting where one of the participants viewed a new project as being offered a blank page.

Are we ever really offered a blank page? Or is there more to it?

Once we stepped over that threshold, conflict erupted over the choices available, with little conclusion. A large part of the issue was the axioms each person was working with. Across the board, we each took a different set of decisions to be axiomatic. At some time in the past these "axioms" were choices that became axiomatic through success. Someone's past success becomes the model for future success, and the choices that led to that success become unstated decisions we are generally completely unaware of. These form the foundation of future work and often become culturally iconic in nature.

Take the basic framework for discretization as an operative example: at Sandia this is the finite element method; at Los Alamos it is finite volumes. At Sandia we talk about "elements"; at Los Alamos it is "cells". From there we continued further down the proverbial rabbit hole to discuss what sort of elements (tets or hexes). Sandia is a hex shop, which causes all sorts of headaches but enables other things, or is simply the way a difficult problem was tackled. Tets would improve some things, but produce other problems. For some, the decisions are flexible; for others there isn't a choice, and the use of a certain type of element is virtually axiomatic. None of this allows a blank slate; all of it is deeply informed by, and biased toward, specific decisions made in some cases decades ago.

It’s so fine and yet so terrible to stand in front of a blank canvas.

– Paul Cézanne

 

If a decision was a success, it stands a chance of ultimately becoming axiomatic. If it wasn't a success, you probably don't even know about it, much less the details of what went wrong. Failures fade away after a few years. The institutional memory is crumbling right in front of us; one might even say that our Labs are developing a sort of organizational Alzheimer's. For this reason the low-risk path is to follow in the footsteps of the successes. In some cases a failure was idiosyncratic, or not remotely related to the choice itself but to other effects. This leads to deep, sustained problems with progress, and limits the options available to deal with deep problems.

 

Creativity is always a leap of faith. You’re faced with a blank page, blank easel, or an empty stage.

– Julia Cameron

 

The other day I risked a lot by comparing the choices we've collectively made in the past to "original sin". In other words, what is computing's original sin? Of course this is a dangerous path to tread, but the concept is important. We don't have a blank slate; our choices are shaped, if not made, by decisions of the past. We are living with, if not suffering from, decisions made years or decades ago. This is true in computing as much as any other area.

 

Human material existence is limited by ideas, not by stuff.

–Steven Pinker

The key to the original advances in computing was first the drive to use computers to solve important problems. Once important problems were solved, the method of solution provided the proof that it could be done. Others could then in good faith follow in those footsteps and build upon that experience. At Los Alamos in the '40s and early '50s this happened in a chain from von Neumann, to Richtmyer, to Lax and Harlow. Each person's work built upon the others' progress. One might consider von Neumann to have worked with a blank page, but he was building upon the work of Richardson as well as Courant, Friedrichs and Lewy. Their work was itself based upon the efforts of Newton, Gauss, Hilbert and countless others.

 

You might not write well every day, but you can always edit a bad page. You can’t edit a blank page.

– Jodi Picoult

 

The key is that we are all shaped by history and the successes (and failures) of the past. We are shaped by our culture and biases. We are shaped by whom we meet and what we experience. The blank page is merely a vehicle for us to produce a record of this influence. Most of the time we aren't even conscious of all the implicit decisions we commit in the creative process.

 

Habits

Good habits are worth being fanatical about.

― John Irving

In case you're wondering about my writing habit and this blog, I can explain a bit more. If you aren't, stop reading. In the sense of authorship, I force myself to face the blank page every day as an exercise in self-improvement. I read Charles Duhigg's book "The Power of Habit" and realized that I needed better habits. I thought about what would make me better and set about building them up. I have a list of things to do every day: "write", "exercise", "walk", "meditate", "read" and so on.

 

We become what we repeatedly do.

― Sean Covey

 

The blog is a concrete way of putting the writing to work. Usually I have an idea the night before, and draft most of the thoughts during my morning dog walk (dogs make good motivators for walks). I still need to craft (hopefully) coherent words and sentences forming the theme. The blog allows me to publish the writing with minimal effort, and forces me to take editing a bit more seriously. The whole thing is an effort to improve my writing in both style and ease of production.

 

A man who can’t bear to share his habits is a man who needs to quit them.

― Stephen King

 

The topics are things that I'm working on, or thinking about, or that simply piss me off. It's a way of working them out in more detail and trying to produce a logical structure for the thought process. My wife thinks it's good because I don't bug her with this "shit" anymore, but in all honesty it just makes room for different "shit" to bug her about!

 

 

Reviews are a (necessary) pain

18 Tuesday Nov 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

I much prefer the sharpest criticism of a single intelligent man to the thoughtless approval of the masses.

― Johannes Kepler

For some reason I'm having more "WTF" moments at work lately. Perhaps something is up, or I'm just paying attention to things. Yesterday we had a discussion about reviews, and people's intense desire to avoid them. The topic came up because there have been numerous efforts recently to encourage and amplify technical review. There are a couple of reasons for this, mostly positive, but a tinge of negativity lies just below the surface. It might be useful to peel back the layers a bit and look at the dark underbelly.

The pleasure of criticizing takes away from us the pleasure of being moved by some very fine things.

― Jean de La Bruyère

First, the reasons for an increased emphasis on reviews should be examined. It turns out that most of the problems are lying on the surface. The general assumption is that peer review is one of the cornerstones of quality in science. It is a powerful mechanism for communication in both directions: the reviewers are experts you'd like to promote your ideas with, and the reviewers usually have something useful to say to you, at least if they are doing it right. Nonetheless, as many of you know, peer review can be emotionally draining and painful to go through. A second, less positive aspect is the organizational desire to escape embarrassment from either shoddy or fraudulent work, which should be smoked out via peer review. A third aspect, also from the "dark side" of peer review, is a sort of smoke and mirrors: using review to craft a veneer of due diligence and implied quality (more on this later). These are reviews that are mandated by organizations. Of course, such a mandate shouldn't be needed, and its existence is actually a sign that a problem exists. Technical organizations should "know" that technical review is essential to their fundamental health.

I touched on this topic a couple of weeks ago (https://williamjrider.wordpress.com/2014/11/02/why-does-vv-get-me-in-trouble/), but classic peer review has many problems. Some of these are due to abuse of its anonymous nature, and inadequate policing of this abuse by editorial boards. The difficult part for the reviewer isn't the critique, as most papers have weaknesses, but rather balancing it with appropriate praise. In all honesty, I personally struggle with this: the balance between doing a fair and complete job of reviewing while not being unfairly harsh in criticism. Despite my conscious efforts to deal with the problem, I probably fail to hit the mark. As I note later, no one ever taught me how to do a review; I learned it through osmosis.

A more difficult topic is the organizational imperative to avoid embarrassment. Mandated reviews are a terrible way to handle this problem, and a terrible reason for reviews. The need to have work reviewed should flow from the basic duty of scientists to communicate their work to peers and receive feedback. The mandated review for the purpose of ferreting out fraud or garbage is unnecessarily confrontational. It puts a negative spin on the entire topic of review. The real core of the issue is management, which should be the responsible agent in knowing what is going on in the first place. A review isn't a police action, and using it as one undermines the purpose of review in subtle and pernicious ways. Given the sorry state of peer review, these are hits it can't take.

 People ask you for criticism, but they only want praise.

― W. Somerset Maugham

The biggest problems with peer reviews are "bullshit reviews". These are reviews that are mandated by organizations, for the organization. These always get graded, and the grades have consequences. The review teams know this, so the reviews are always on a curve, a very generous curve. Any and all criticism is completely muted and soft because of the repercussions; any harsh critique, even if warranted, puts the reviewers (and their compensation for the review) at risk. As a result of this dynamic, these reviews are quite close to a complete waste of time.

Because of the risk associated with the entire process, the organizations approach the review in an overly risk-averse manner and control the whole thing. It ends up being all spin and little content. Together with the dynamic created with the reviewers, the whole thing spirals into a wasteful mess that does no one any good. Even worse, the whole process has a corrosive impact on the perception of reviews. They end up having no upside; it is all downside, and nothing useful comes out of them. All of this even though the risk from the reviews has been removed through a thoroughly incestuous process.

Don’t criticize what you can’t understand.

― Bob Dylan

An element in the overall dynamic is the societal image of external review as a sideshow meant to embarrass. The congressional hearing is emblematic of the worst sort of review; the whole point is grandstanding and/or destroying those being reviewed. Given this societal model, it is no wonder that reviews have a bad name. No one likes to be invited to their own execution.

When virtues are pointed out first, flaws seem less insurmountable.

― Judith Martin

What can be done about this? The answers are simple, but complex within the environment we find ourselves in. First of all, people should be trained or educated in conducting, accepting and responding to reviews. Despite its importance to the scientific process, we are never taught how to conduct, accept or respond to a review (responding is covered a bit in a typical graduate education). Today it is a purely experiential process. Next, we should stop including the scoring of reviews in any organizational "score". Instead, the quality of the review, including the production of hard-hitting critique, should be expected as a normal part of organizational functioning.

It’s easy to attack and destroy an act of creation. It’s a lot more difficult to perform one.

― Chuck Palahniuk

People, projects and organizations willing and capable of undergoing honest, critical review are usually much better than those that aren't. An unwilling or softball review is itself a better indicator of problems than a negative review. Peer review is essential for science, and we must fix it. It is an essential element of our quality process that we cannot afford to leave completely broken.

 

Progress and the Social Contract

17 Monday Nov 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

I'm a progressive. In almost every way that I can imagine, I favor progress over the status quo. This is true for science, music, art, and literature, among other things. The one place where I tend toward the status quo is the work and personal relationships that form the foundation for my progressive attitudes. These foundations are formed by several "social contracts" that serve to define roles and expectations. Without this foundation, the progress I so cherish is threatened, because people naturally retreat to conservatism for stability.

Without deviation from the norm, progress is not possible.

― Frank Zappa

What I've come to realize is that the shortsighted, short-term thinking dominating our governance is demolishing many of these social contracts. Our social contracts are the basis of trust and faith in our institutions, whether the rule of government or the places we work. In each case we are left with a severe corrosion of the intrinsic faith once granted these cornerstones of public life. The cost is enormous, and may have created a self-perpetuating cycle: loss of trust precipitating more acts that undermine trust.

 Stagnation is self-abdication.

― Ryan Talbot

Take the Labs where I've worked. At one time the Labs were trusted with the (nuclear) defense of the Nation. This happened in a time of immense threat and danger, yet the oversight was minimal. A substantial resource was given to the Labs to pursue the mission, and the Labs performed marvelously. The Labs fulfilled their social contract with the Nation, and similarly the Labs created a social contract with their employees: serve here, and we will take care of you. You will be given engaging work, paid well, and ultimately allowed to retire comfortably. Shape your scientific explorations in service of the National security mission, and you will be provided resources. Beyond the direct success in the nuclear work, the scientific work was part of the Nation's international preeminence and produced much of the foundation for great economic success. We have almost systematically destroyed everything good about these Labs.

It takes strength and courage to admit the truth.

―Rick Riordan

Even before the Cold War ended, this social contract began to unravel. The trust eroded and the money came with increasing strings attached. Similarly, the social contract with the employees became too "expensive" to fulfill. Over time the lack of trust and the associated "accountability" have spiraled out of control (we will spend ten dollars to save one). This has precipitated the no-risk, no-failure-allowed environment that is choking innovation and progress out of our work. Increasingly, the support for a career at the Labs is being removed, and it's turning into just another job (not a bad one, but nothing special either).

These developments are paralleled by changes across the economy. They are manifestations of the short-term, quarterly-return mentality ruling industry. Research and development without immediate impact on the bottom line is increasingly missing from industrial research (and from government research too). Employees are commodities whose life and career prospects are none of the business's concern. The Labs benchmark themselves against these industries and share these attitudes because it benefits the short-term balance sheet.

What gets lost? Almost everything. Progress, quality, security, you name it. Our short-term balance sheet looks better, but our long-term prospects look dismal. The scary thing is that these developments help drive conservative thinking, which in turn drives these developments. As much as anything this could explain our Nation's 50-year march to the right. We have taken the virtuous cycle we were granted and developed a vicious cycle. It is a cycle that we need to get out of before it crushes our future.

Any defensiveness is a sign of failure. You can’t move forward if you are defensive.

― Bryant McGill,

We got here through overconfidence and loss of trust; can we get out of it by combining realism with trust in each other? Right now the signs are particularly bad, with nothing looking like realism or trust being part of the current public discourse on anything.
