The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: January 2018

Curing the Plague of Meetings

26 Friday Jan 2018

Posted by Bill Rider in Uncategorized

≈ Leave a comment

If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be “meetings.”

― Dave Barry

Meetings. Meetings, meetings. Meetings suck. Meetings are awful. Meetings are soul-sucking time wasters. Meetings are a good way to “work” without actually working. Meetings absolutely deserve the bad rap they get. Most people think that meetings should be abolished. One of the most dreaded workplace events is a day that is completely full of meetings. These days invariably feel like complete losses, draining all productive energy from what ought to be a day full of promise. I say this as an unabashed extrovert, knowing that the introvert is going to feel even more overwhelmed by the prospect.

Meetings are a symptom of bad organization. The fewer meetings the better.

– Peter Drucker

All of this is true, and yet meetings are important, even essential to a properly functioning workplace. As such, meetings deserve real effort to fix them while minimizing the unnecessary time spent in them. Meetings are a vital humanizing element in collective, collaborative work. Deep engagement with people is enriching, educational, and necessary for fulfilling work. Making meetings better would produce immense benefits in quality, productivity and satisfaction at work.

Meetings are at the heart of an effective organization, and each meeting is an opportunity to clarify issues, set new directions, sharpen focus, create alignment, and move objectives forward.

― Paul Axtell

If there is one thing that unifies people at work, it is meetings, and how much we despise them. Workplace culture is full of meetings and most of them are genuinely awful. Poorly run meetings are a veritable plague in the workplace. Meetings are also an essential human element in work, and work is a completely human and social endeavor. A large part of the problem is the relative difficulty of running a meeting well, which exceeds the talent and will of most people (managers). It is actually very hard to do this well. We have now gotten to the point where all of us almost reflexively expect a meeting to be awful and plan accordingly. For my own part, I take something to read, or my computer to do actual work, or the old stand-by of passing time (i.e., fucking off) on my handy dandy iPhone. I’ve even resorted to the newest meeting pastime of texting another meeting attendee to talk about how shitty the meeting is. All of this can be avoided by taking meetings more seriously and crafting time that is well spent. If this can’t be done, the meeting should be cancelled until the time can be well spent.

The least productive people are usually the ones who are most in favor of holding meetings.

― Thomas Sowell

There are a few uniform things that can be done to improve the impact of meetings on the workplace. If a meeting is mandatory, it will almost surely suck. It will almost always suck hard. No meeting should ever be mandatory, ever. By forcing people to go to mandatory meetings, those running the meeting have no reason to make the meeting enjoyable, useful or engaging. They are not competing for your time, and this allows your time to be abused. A meeting should always be trying to make you want to be there, honestly competing for your time. A fundamental notion that makes all meetings better is a strong sense that you know why you are at a meeting, and how you are participating. There is no reason to attend a meeting where you passively absorb information without any active role. If a meeting is the only way to get the information, that highlights deeper problems that are all too common! Everyone should have an active role in the meeting’s life. If someone is not active, they probably don’t need to be there.

Meetings at work present great opportunities to showcase your talent. Do not let them go to waste.

― Abhishek Ratna

There are a lot of types of meetings, and generally speaking all of them are terrible, and they don’t need to be. None of them really have to be awful, but they are. Some of the reasons reflect tremendously deep issues with the modern workplace. It is only a small overreach to say that better meetings would go a huge distance toward improving the average workplace and provide untold benefits in terms of productivity and morale. So, to set the stage, let’s talk about the general types of meetings that most of us encounter:

  • Conferences, Talks and symposiums
  • Informational Meetings
  • Organizational Meetings
  • Project Meetings
  • Reviews
  • Phone, Skype, Video Meetings
  • Working meetings
  • Training Meetings

All of these meetings can stand some serious improvement that would have immense benefits.

Meetings are indispensable when you don’t want to do anything.

–John Kenneth Galbraith

The key common step to a good meeting is planning and attention to the value of people’s time. Part of the planning is a commitment to engagement with the meeting attendees. Do those running the meeting know how to convert the attendees into participants? Part of the meeting is engaging people as social animals and building connections and bonds. The worst thing is a meeting that a person attends solely because they are supposed to be there. Too often our meetings drain energy and make people feel utterly powerless. A person should walk out of a meeting energized and empowered. Instead, meetings are energy- and morale-sucking machines. A large part of the meeting’s benefit should be a feeling of community and bonding with others. Collaborations and connections should arise naturally from a well-run meeting. All of this seems difficult, and it is, but anything less does not honor the time of those attending and the great expense their time represents. In the end, the meeting should be a valuable expenditure of time. More than simply valuable, the meeting should produce something better: a stronger human connection and common purpose among all those attending. If the meeting isn’t a better expenditure of people’s time, it probably shouldn’t happen.

A meeting consists of a group of people who have little to say – until after the meeting.

― P.K. Shaw

Conferences, Talks and symposiums. This is a form of meeting that generally works pretty well. The conference has a huge advantage as a form of meeting. Time spent at a conference is almost always time well spent. Even at their worst, a conference should be a banquet of new information and exposure to new ideas. Of course, they can be done very poorly and the benefits can be undermined by poor execution and lack of attention to detail. Conversely, a conference’s benefits can be magnified by careful and professional planning and execution. One way to augment a conference significantly is to find really great keynote speakers to set the tone, provide energy and engage the audience. A thoughtful and thought-provoking talk delivered by an expert who is a great speaker can propel a conference to new heights and send people away with renewed energy. Conferences can also go to greater lengths to make the format and approach welcoming to greater audience participation, especially getting the audience to ask questions and stay awake and aware. It’s too easy to tune out these days with a phone or laptop. Good timekeeping and attention to the schedule is another way of making a conference work to the greatest benefit. This means staying on time and on schedule. It means paying attention to scheduling so that the best talks don’t compete with each other if there are multiple sessions. It means not letting speakers filibuster through the Q&A period. All of these maxims hold for a talk given during work hours, just on a smaller and more specific scale. There the setting, the time of the talk and the timekeeping all help to make the experience better.

Another hugely beneficial aspect of meetings is food and drink. Sharing food or drink at a meeting is a wonderful way for people to bond and seek greater depth of connection. This sort of engagement can help to foster collaboration and greater information exchange. It engages the innate human social element that meetings should foster (I will note that my workplace has mostly outlawed food and drink, helping to make our meetings suck more uniformly). Too often the aspects of the talk or conference that would make the great expense of people’s time worthwhile are skimped on, undermining and diminishing the value.

Highly engaged teams have highly engaged leaders. Leaders must be about presence not productivity. Make meetings a no phone zone.

― Janna Cachola

 

Informational Meetings. The informational meeting is one of the worst abuses of people’s time. Lots of these meetings are mandatory, and force people to waste time witnessing evidence of what kind of shit show they are part of. This is very often a one-way exchange where people are expected to just sit and absorb. The information content is often poorly packaged and ham-handed in delivery. The talks are usually humorless and lack any soul. The sins are all compounded by a general lack of audience engagement. Their greatest feature is serving as a really good and completely work-appropriate time-wasting exercise. You are at work and not working at all. You aren’t learning much either; it is almost always some sort of management BS delivered in a politically correct manner. Most of the time the best option is to completely eliminate these meetings. If these meetings are held, those conducting them should put some real effort into making them worthwhile and valuable. They should seek a format that engages the audience and encourages genuine participation.

When you kill time, remember that it has no resurrection.

― A.W. Tozer

Organizational Meetings. The informational meeting’s close relative is the organizational meeting. Often this is an informational meeting in disguise. This sort of meeting is called for an organization of some size to get together and hear the management give them some sort of spiel. These meetings happen at various organizational levels and almost all of them are awful. Time-wasting drivel is the norm. Corporate or organizational policies, work milestones, and cheesy awards abound. Since these meetings are more personal than the pure informational meeting, there is some soul and benefit to them. The biggest sin in these meetings is the faux engagement. Do the managers running these meetings really want questions, and are they really listening to the audience? Will they actually do anything with the feedback? More often than not, the questions and answers are handled professionally then forgotten. The management generally has no interest in really hearing people’s opinions and doing anything with their views; it is mostly a hollow feel-good maneuver. Honest and genuine engagement is needed, and these days management needs to prove that it’s more than just a show.

People who enjoy meetings should not be in charge of anything.

― Thomas Sowell

Project Meetings. In many places this is the most common meeting type. It also tends to be one of the best meeting types because everyone is active and participating. The meeting involves people working to common ends and promotes genuine connection between efforts. These can take a variety of forms, such as the stand-up meeting where everyone participates by construction. An important function of the project meeting is active listening. While this form of meeting tends to be good, it still needs planning and effort to keep it positive. If the project meeting is not good, it probably reflects quite fully on the project itself. Some sort of restructuring of the project is a cure. What are the signs that a project meeting is bad? If lots of people are sitting like potted plants and not engaged with the meeting, the project is probably not healthy. The project meeting should be time well spent; if people aren’t engaged, they should be doing something else.

Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson

Reviews. A review meeting is akin to a project meeting, but has an edge that makes it worse. Reviews often teem with political context and fear. A common form is a project team, reviewers and then stakeholders. The project team presents work to the reviewers, and if things are working well, the reviewers ask lots of questions. The stakeholders sit nervously and watch, rarely participating. The spirit of the review is the thing that determines whether the engagement is positive and productive. The core values around which a review revolves are honesty and trust. If honesty and trust are high, those being reviewed are forthcoming and their work is presented in a way where everyone learns and benefits. If the reviewers are confident in their charge and role, they can ask probing questions and provide value to the project and the stakeholders. Under the best of circumstances, the audience of stakeholders can be profitably engaged in deepening the discussion, and themselves learn greater context for the work. Too often, the environment is so charged that honesty is not encouraged, and the project team tends to hide unpleasant things. If reviewers do not trust the reception for a truly probing and critical review, they will pull their punches and the engagement will be needlessly and harmfully moderated. A sign that neither trust nor honesty is present is an anxious and uninvolved audience.

I think there needs to be a meeting to set an agenda for more meetings about meetings.

― Jonah Goldberg

Phone, Skype, Video Meetings. These meetings are convenient and often encouraged as part of a cost-saving strategy. Because of the nature of the medium, these meetings are often terrible. Most often they turn into a series of monologues best suited for reporting work. Such meetings are rarely good places to hear about work. This comes from two truths. First, the people on the phone are often disengaged, listening while attending to other things; it is difficult to participate in any dynamic discussion, and while it happens, it is rare. Second, most of the content is limited to the spoken word, and lacks body language and visual content. The result is much less information being transmitted, along with a low bandwidth of listening. For the most part these meetings should be done away with. If someone has something really interesting and very timely it might be useful, but only if we are sure the audience is paying real attention. Without dynamic participation one cannot be sure that attention is actually being paid.

Working meetings. These are the best meetings, hands down. They are informal, voluntary and dynamic. The people are there because they want to get something done that requires collaboration. If other types of meetings could incorporate the approach and dynamic of a working meeting, all of them would improve dramatically. Quite often these meetings are deep on communication and low on hierarchical transmission. Everyone in the meeting is usually engaged and active. People are rarely passive. They are there because they want to be there, or they need to be there. In many ways all meetings could benefit mightily from examining working meetings and adopting their characteristics more broadly.

Training Meetings. The use of a meeting to conduct training is common, and these meetings are commonly bad. They could be improved greatly by adopting principles from education. A good training is educational. Again, dynamic, engaged attendees are a benefit. If they are viewed as students, good outcomes can be had. Far too often the training is delivered in a hollow, mandatory tone that provides little real value for those receiving it. We have a lot of soulless compliance training that simply pollutes the workplace with time wasting. Compliance is often associated with hot-button issues where the organization has no interest in engaging the employees. They are simply forced to do things because those in power say so. A real discussion of this sort of training is likely to be difficult, cast doubt, and turn confrontational. It is easier to passively waste people’s time and get it over with. This attitude is some blend of mediocrity and cowardice that has a corrosive impact on the workplace.

One source of frustration in the workplace is the frequent mismatch between what people must do and what people can do. When what they must do exceeds their capabilities, the result is anxiety. When what they must do falls short of their capabilities, the result is boredom. But when the match is just right, the results can be glorious. This is the essence of flow.

― Daniel H. Pink

Better meetings are a mechanism by which our workplaces have an immense ability to improve. A broad principle is that a meeting needs to have a purpose and desired outcome that is well known and communicated to all participants. The meeting should engage everyone attending, and no one should be a potted plant, or otherwise engaged. Everyone’s time is valuable and expensive; the meeting should be structured and executed in a manner fitting its costs. A simple way of testing the waters is people’s attitudes toward the meeting, whether positive or negative. Do they want to go? Are they looking forward to it? Do they know why the meeting is happening? Is there an outcome that they are invested in? If these questions are answered honestly, those calling the meeting will know a lot, and they should act accordingly.

The cure for bad meetings is recognition of their badness, and a commitment to making the effort necessary to improve them. Few things have a greater capacity to make the workplace better and more productive, and to improve morale.

When employees feel valued, and are more productive and engaged, they create a culture that can truly be a strategic advantage in today’s competitive market.

― Michael Hyatt

Total Variation Diminishing (TVD) Schemes; Their Essential Contribution to Progress in Methods

19 Friday Jan 2018

Posted by Bill Rider in Uncategorized

≈ 4 Comments

Mathematics is the door and key to the sciences.

— Roger Bacon

It is time to return to great papers of the past. The past has clear lessons about how progress can be achieved. Here, I will discuss a trio of papers that came at a critical juncture in the history of numerically solving hyperbolic conservation laws. In a sense, these papers were nothing new, but provided a systematic explanation and skillful articulation of the progress at that time. In a deep sense these papers represent applied math at its zenith, providing a structural explanation along with proof to accompany progress made by others. These papers helped mark the transition of modern methods from heuristic ideas to broad adoption and common use. Interestingly, the depth of applied mathematics ended up paving the way for broader adoption in the engineering world. This episode also provides a cautionary lesson about what holds higher order methods back from broader acceptance, and the relatively limited progress since.

The three papers I will focus on are:

Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 49, no. 3 (1983): 357-393.

Harten, Ami. “On a class of high resolution total-variation-stable finite-difference schemes.” SIAM Journal on Numerical Analysis 21, no. 1 (1984): 1-23.

Sweby, Peter K. “High resolution schemes using flux limiters for hyperbolic conservation laws.” SIAM Journal on Numerical Analysis 21, no. 5 (1984): 995-1011.

The first two are by the late Ami Harten and provide a proof of the monotone behavior seen with the heuristic methods existing at that time. The proofs provided a confidence that had been lacking from the truly innovative, but largely heuristic, invention of the methods. The third paper by Peter Sweby provided a clear narrative and an important graphical tool for understanding these methods and displaying limiters, the nonlinear mechanism that produced the great results. The “Sweby diagram” reduced these complex nonlinear methods to a single nonlinear function, the limiter, acting as a switch between two commonly used classical methods. The diagram produced a simple way of seeing whether any given limiter was going to give second-order non-oscillatory results. Together these three papers paved the way for common adoption of these methods.

Mathematics is the art of giving the same name to different things.

– Henri Poincaré

[Photos: Bram Van Leer and Jay Boris]

In the 1970s three researchers principally invented these nonlinear methods: Jay Boris, Bram Van Leer, and Vladimir Kolgan. Of these three, Boris and Van Leer achieved fame and great professional success. The methods were developed heuristically and worked very well. Each of these methods explicitly worked to overcome Godunov’s barrier theorem, which says a second-order linear method cannot be monotone. Each made the method nonlinear by adapting the approximation to the local structure of the solution. Interestingly, Boris and Van Leer were physicists, while Kolgan was an engineer (Van Leer went on to work extensively in engineering). Kolgan was a Russian in the Soviet Union and died before his discovery could take its rightful place next to Boris’ and Van Leer’s (Van Leer has gone to great effort to correct the official record).

[Mathematics] is security. Certainty. Truth. Beauty. Insight. Structure. Architecture. I see mathematics, the part of human knowledge that I call mathematics, as one thing—one great, glorious thing. Whether it is differential topology, or functional analysis, or homological algebra, it is all one thing. … They are intimately interconnected, they are all facets of the same thing. That interconnection, that architecture, is secure truth and is beauty. That’s what mathematics is to me.

― Paul R. Halmos

The problem with all these methods was a lack of mathematical certainty about the quality of results, along with proofs and structured explanations of their success. This made the broader community a bit suspicious of the results. In a commemorative volume on flux-corrected transport (FCT, Boris’ invention) this suspicion is noted. At conferences, there were questions raised about the results that implied the solutions were faked. The breakthrough with these new methods was that good, seemingly too good to be true. Then the explanations came and made a strong connection to theory. The behavior seen in the results had a strong justification in mathematics, and the trust in the methodology grew. Acceptance came on the heels of this trust, followed by widespread adoption.

Harten and others continued to search for even better methods after introducing TVD schemes. The broad category of essentially non-oscillatory (ENO) methods was invented. It has been a broad research success, but never experienced the widespread adoption that these other methods enjoyed. Broadly speaking, the TVD methods are used in virtually every production code for solving hyperbolic conservation laws. In the physics world, many use Van Leer’s approach, while engineering broadly uses the Harten-Sweby formalism. FCT is used somewhat in the physics world, but its adoption is far less common. Part of the reason for this disparity comes down to the power of mathematical proof and the faith it gives. The failure of follow-on methods to gain adoption and have success comes from the lack of strong theory with its requisite confidence. Faith, confidence and systematic explanation are all provided by well-executed applied mathematics.

What is the TVD theory and how does it work?

(Note: WordPress’ Latex capability continues to frustrate, I cannot get them to typeset so if you can read TeX the equations will make sense)

In a nutshell, TVD is a way of extending the behavior of monotone methods (upwind for the purposes of this discussion) to high-order nonlinear methods. Upwind methods have the benefit of positive coefficients in their stencil. If we write this down for a scalar advection equation, $u_t + a u_x = 0$, we get the following form, $u_j^{n+1} = u_j^n - C_{j-1/2} \left( u_j^n - u_{j-1}^n \right) + D_{j+1/2} \left( u_{j+1}^n - u_j^n \right)$. The key for these methods is the positivity of the coefficients, $C_{j-1/2} \ge 0$ and $D_{j+1/2} \ge 0$, together with the bound $C_{j+1/2} + D_{j+1/2} \le 1$. For example, an upwind method gives constants for these functions, $C_{j-1/2} = a \Delta t/\Delta x = \nu$ and $D_{j+1/2} = 0$ for $a > 0$. The coefficient $\nu$ is the famous CFL (Courant-Friedrichs-Lewy) number. For the TVD methods, these coefficients become nonlinear functions of the solution itself, but still satisfy the inequalities. Harten had done other work that connected monotone methods to entropy-satisfying (i.e., physically relevant) solutions, which then implies that TVD methods would be a route to similar results (this would seem to be true, but definitive proofs are lacking). Still, the connections are all there and close enough to provide faith in the methodology. This is where Sweby’s work comes in and provides a crucial tool for broad acceptance of this methodology.
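
To make the incremental form concrete, here is a small Python sketch (my own illustration, not code from Harten’s papers) of first-order upwind advection written in the C/D form above, with a check that the total variation never increases. The grid, pulse and CFL number are arbitrary choices for the example.

```python
import numpy as np

def total_variation(u):
    """TV(u) = sum_j |u_{j+1} - u_j| on a periodic grid."""
    return np.sum(np.abs(np.roll(u, -1) - u))

def upwind_step(u, nu):
    """One step of u_t + a u_x = 0 (a > 0) in Harten's incremental form.

    u_j^{n+1} = u_j^n - C_{j-1/2} (u_j^n - u_{j-1}^n) + D_{j+1/2} (u_{j+1}^n - u_j^n)
    For upwind: C = nu (constant), D = 0, so C >= 0, D >= 0, and C + D <= 1 when 0 <= nu <= 1.
    """
    C, D = nu, 0.0
    du_minus = u - np.roll(u, 1)     # u_j - u_{j-1}
    du_plus = np.roll(u, -1) - u     # u_{j+1} - u_j
    return u - C * du_minus + D * du_plus

# Advect a square pulse on a periodic grid; the total variation should be non-increasing.
nx, nu = 200, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)

tv0 = total_variation(u)
for _ in range(250):
    u = upwind_step(u, nu)
    assert total_variation(u) <= tv0 + 1e-12  # monotone scheme: no new extrema appear

print("initial TV =", tv0, "final TV =", total_variation(u))
```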

What Sweby did was provide a wonderful narrative description of TVD methods, and a graphical manner to depict them. In the form that Sweby described, TVD methods are a nonlinear combination of classical methods: upwind, Lax-Wendroff and Beam-Warming. The limiter is drawn out of the formulation and parameterized by the ratio of local finite differences. The limiter is a way to take an upwind method and modify it with some portion of the second-order methods while satisfying the inequalities needed to be TVD. This technical specification took the following form, $C_{j-1/2} = \nu \left( 1 + \frac{1}{2}\nu(1-\nu) \phi\left(r_{j-1/2}\right) \right)$ and $D_{j+1/2} = \frac{1}{2}\nu(1-\nu) \phi\left(r_{j+1/2}\right)$ for $a > 0$, with the smoothness ratio $r_{j-1/2} = \frac{ u_{j}^{n} - u_{j-1}^{n} }{ u_{j-1}^{n} - u_{j-2}^{n}}$. This produced a beautiful and simple diagram that usefully displayed how any given limiter compared to others. This graphical means was probably the essential step for broad acceptance (my opinion, but for visual people it was essential, and a lot of technical folks are visual).
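
As a companion to the formulas above, here is a Python sketch (again mine, not Sweby’s) of the flux-limited scheme in the form it is most often coded: an upwind flux plus a limited Lax-Wendroff correction. Note one assumption: I use the common Sweby convention for the smoothness ratio, $r_{j+1/2} = (u_j - u_{j-1})/(u_{j+1} - u_j)$, which differs from the ratio written above; with this convention $\phi = 0$ recovers upwind, $\phi = 1$ recovers Lax-Wendroff, and $\phi(r) = r$ recovers Beam-Warming.

```python
import numpy as np

def minmod(r):
    """Minmod limiter: phi(r) = max(0, min(1, r))."""
    return np.maximum(0.0, np.minimum(1.0, r))

def superbee(r):
    """Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return np.maximum.reduce([np.zeros_like(r), np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)])

def van_leer(r):
    """Van Leer limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def tvd_step(u, a, dt, dx, phi=minmod):
    """One flux-limited step for u_t + a u_x = 0 with a > 0 on a periodic grid."""
    nu = a * dt / dx
    du = np.roll(u, -1) - u                              # u_{j+1} - u_j
    du_up = u - np.roll(u, 1)                            # u_j - u_{j-1}
    r = du_up / np.where(np.abs(du) > 1e-14, du, 1e-14)  # smoothness ratio r_{j+1/2}
    # Numerical flux: upwind plus a limited anti-diffusive (Lax-Wendroff) correction.
    F = a * u + 0.5 * a * (1.0 - nu) * phi(r) * du       # F_{j+1/2}
    return u - (dt / dx) * (F - np.roll(F, 1))           # conservative update

# Advect a square pulse once around the periodic domain [0, 1).
nx, a = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = 1.0 / nx
dt = 0.8 * dx / a
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
for _ in range(int(round(1.0 / (a * dt)))):
    u = tvd_step(u, a, dt, dx, phi=van_leer)
```

The limiter function is the only thing that changes between schemes, which is exactly the point of the Sweby diagram: plot $\phi(r)$ and you can see at a glance whether a scheme stays inside the second-order TVD region.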

Beyond the power of applied mathematics, other aspects of the technical problem have contributed to the subsequent lack of progress. The biggest issue is the quantum leap in performance from first- to second-order accuracy. The second-order methods produce results that seem turbulent, whereas first-order methods produce a truncation error that laminarizes flows. The second-order method produces results for complex problems that have the look and feel of real flows (this may also be quantitatively true, but the jury is out). Important flows are turbulent and high energy with very large Reynolds numbers. First-order schemes cannot produce these realistically at all. Second-order methods can, and for this reason the new schemes unleashed utility upon the World. With these methods, the solutions took on the look, feel and nature of reality. For this reason, these schemes became essential for codes.

The second reason is the robustness of these methods. First-order monotone methods like upwind are terribly robust. These methods produce physically admissible solutions and do not fail often. Codes run problems to completion. The reason is their extremely dissipative nature. This makes them very attractive for difficult problems and almost guarantees a solution for the calculation. The same dissipation also destroys almost every structure in the solution and smears out all the details that matter. You get an answer, but an answer that is fuzzy and inaccurate. These first-order methods end up being extremely expensive when accuracy is desired. Harten’s TVD methods provided a systematic connection of the new second-order methods to the old reliable first-order methods. The new methods were almost as reliable as the first-order methods, but got rid of much of the smearing dissipation that plagued them. Having a structured and expertly produced explanation for the behavior of these methods, with clear connections to things people already knew, produced rapid adoption by practitioners.

Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper.

― George Pólya

The follow-on efforts with higher than second-order methods have lacked these clear wins. It is clear that going past second-order does not provide the same sort of quantum leap in results. The clear connection and expectation of robustness are also lacking. The problems do not stop there. The essentially non-oscillatory methods select the least oscillatory local approximation, which also happens to be quite dissipative by its very nature. Quite often the high-order method is actually not threatening oscillations at all, yet a less accurate approximation is chosen, needlessly reducing accuracy. Furthermore, the adaptive approximation selection can preferentially choose unstable approximations in an evolutionary sense, which can result in catastrophe. The tendency to produce the worst of both Worlds has doomed their success and broad adoption. Who wants dissipative and fragile? No one! No production code would make these choices, ever!

Recent efforts have sought to rectify this shortcoming. Weighted ENO (WENO) methods have provided far less intrinsically dissipative methods that also enhance the accuracy. These methods are still relatively dissipative compared to the best TVD methods and invoke their expensive approximations needlessly in regions of the solution where the nonlinear mechanisms are unnecessary. Efforts have also produced positivity-preserving methods that avoid the production of inherently unphysical results with high-order methods. These developments are certainly a step in the right direction. However, the current environment of producing new legacy codes is killing the energy needed to steward these methods into broad adoption. The expense, overly dissipative nature and relatively small payoff all stand in the way.

What might help in making progress past second-order methods?

The first thing to note is that TVD methods are mixed in their order of accuracy. They are second-order in a very loose sense and only when one takes the most liberal norm for computations (L1 for you nerds out there). For the worst-case error, TVD methods are still first-order (L-infinity, and multiple dimensions). This is a pretty grim picture until one also realizes that for nonlinear PDEs with general solutions, first-order accuracy is all you get anyway unless you are willing to track all discontinuities. These same conditions hold for the high-order methods we might like to adopt. The accuracy from the new methods is always quite limited, which puts a severe constraint on the efficiency of the methods, and poses a challenge to development and progress. The effort that it takes to get full accuracy for nonlinear problems is quite large, and if this accuracy is not realized, the effort is not worth it. We do know that some basic elements of high-order methods yield substantial benefits, but these benefits are limited (an example is the high-order edge values used in the piecewise parabolic method, PPM).
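
The norm distinction above is easy to see numerically. Here is a small Python sketch (my own check, using a standard minmod-limited scheme and a smooth sine profile; the specific resolutions and CFL number are arbitrary) that estimates the observed convergence rate in both the L1 and L-infinity norms. The limiter’s clipping of extrema is what drags the worst-case rate down toward first order.

```python
import numpy as np

def tvd_advect(nx, cfl=0.8):
    """Advect u0(x) = sin(2*pi*x) once around [0, 1) with a minmod-limited scheme (a = 1)."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    nsteps = int(round(1.0 / (cfl * dx)))
    dt = 1.0 / nsteps                 # land exactly on t = 1 (one full period)
    nu = dt / dx
    u = np.sin(2.0 * np.pi * x)
    for _ in range(nsteps):
        du = np.roll(u, -1) - u
        du_up = u - np.roll(u, 1)
        r = du_up / np.where(np.abs(du) > 1e-14, du, 1e-14)
        phi = np.maximum(0.0, np.minimum(1.0, r))     # minmod limiter
        F = u + 0.5 * (1.0 - nu) * phi * du           # numerical flux for a = 1
        u = u - nu * (F - np.roll(F, 1))
    return x, u

def error_norms(nx):
    x, u = tvd_advect(nx)
    e = np.abs(u - np.sin(2.0 * np.pi * x))           # exact solution returns to u0
    return np.sum(e) / nx, np.max(e)                  # L1 (grid-averaged) and L-infinity

e1_c, einf_c = error_norms(100)
e1_f, einf_f = error_norms(200)
print("observed L1 rate  :", np.log2(e1_c / e1_f))
print("observed Linf rate:", np.log2(einf_c / einf_f))
```

The observed rates will not be exactly two and one, but the gap between the two norms illustrates the point made above.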

I asked myself, what worked so well for TVD? To me there is a clear and unambiguous connection to what worked in the past. The past was defined by the combination of upwind, Lax-Wendroff, and Beam-Warming methods. These methods along with largely ad hoc stabilization mechanisms provided the backbone of production codes preceding the introduction of these methods. Now TVD schemes form the backbone of production codes. It would seem that new higher order methods should preserve this sort of connection. ENO and WENO methods did not do this, which partially explains their lack of adoption. My suggestion would be a design of methods where one uses a high-order method that can be shown to be TVD, or the high-order method closest to a chosen TVD scheme. This selection would be high-order accurate by construction, but would also produce oscillations at third-order. This is not the design principle that ENO methods use where the unproven assertion is oscillations at the order of approximation. The tradeoff between these two principles is larger potential oscillations with less dissipation and a more unambiguous connection to the backbone TVD methods.

1. Everyone is entitled to their opinion about the things they read (or watch, or listen to, or taste, or whatever). They’re also entitled to express them online.

2. Sometimes those opinions will be ones you don’t like.

3. Sometimes those opinions won’t be very nice.

4. The people expressing those may be (but are not always) assholes.

5. However, if your solution to this “problem” is to vex, annoy, threaten or harass them, you are almost certainly a bigger asshole.

6. You may also be twelve.

7. You are not responsible for anyone else’s actions or karma, but you are responsible for your own.

8. So leave them alone and go about your own life.

[Bad Reviews: I Can Handle Them, and So Should You (Blog post, July 17, 2012)]

― John Scalzi

My own connection to this work is a nice way of rounding out this discussion. When I started looking at modern numerical methods, I started to look at the selection of approaches. FCT was the first thing I hit upon and tried. Compared to the classical methods I was using, it was clearly better, but its lack of theory was deeply unsatisfying. FCT would occasionally do weird things. TVD methods had the theory, and this made them far more appealing to my technically immature mind. After the fact, I tried to project FCT methods onto the TVD theory. I wrote a paper documenting this effort. It was my first paper in the field. Unknowingly, I walked into a veritable minefield and complete shit show. All three of my reviewers were very well-known contributors to the field (I know it is supposed to be anonymous, but the shit show that unveiled itself unveiled the reviewers too).

The end result was that the paper was never published. This decision occurred five years after it was submitted, and I had simply moved on. My first review was from Ami Harten, who basically said this paper is awesome, publish it. He signed the review and sent me some lecture notes on the same topic. I was over the moon, and did call Ami and talk briefly. Six months later my second review came in. It was as different as possible from Ami’s. It didn’t say this exactly, but in a nutshell, it said the paper was a piece of shit. It still remains the nastiest and most visceral review I’ve ever gotten. It was technically flawless on one hand and thoroughly unprofessional in tone on the other. My third review came a year later and was largely editorial in nature. I revised the paper and resubmitted. While all this unfolded, Ami died, and the journal it was submitted to descended into chaos, partially due to the end of the cold war and its research largess. When it emerged from chaos, I decided that publishing the work was largely pointless and not worth the effort.

Some commentary about why this shit show happened is worth explaining. It is all related to the holy war between two armed camps that arose with the invention of these methods and the question of who gets the credit. The paper was attempting to bridge the FCT and TVD worlds, and stepped into the bitter fighting around previous publications. In retrospect, it is pretty clear that FCT was first, and others like Kolgan and Van Leer came after. Their methodologies and approaches were also fully independent, and the full similarity was not clear at the time. While the fullness of time shows these approaches to be utterly complementary, at the time of development it was seen as a competition. It was definitely not a collaborative endeavor, and the professional disagreements were bitter. They poisoned the field, and people took sides, viewing the other side with vitriolic fury. A friend and associate editor of the Journal of Computational Physics quipped that this was one of the nastiest sub-communities in the Journal, and asked why I insisted on working in this area. It is also one of the most important areas in computational physics, working on a very difficult problem. The whole field also hinges upon expert judgement and resists a firm quantitative standard of acceptance.

What an introduction to the field, and it’s genuinely amazing that I continue to work in it at all. If I didn’t enjoy the technical content so much, and didn’t appreciate the importance of the field, I would have run. Perhaps greater success professionally would have followed such a departure. In the long run this resistance and the rule of experts work to halt progress.

If you can’t solve a problem, then there is an easier problem you can solve: find it.

― George Pólya

Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.

Boris, Jay P., and David L. Book. “Flux-corrected transport. I. SHASTA, a fluid transport algorithm that works.” Journal of Computational Physics 11, no. 1 (1973): 38-69.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. II. Monotonicity and conservation combined in a second-order scheme.” Journal of Computational Physics 14, no. 4 (1974): 361-370.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method.” Journal of Computational Physics 32, no. 1 (1979): 101-136.

Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” Journal of Computational Physics 71, no. 2 (1987): 231-303.

Harten, Ami, and Stanley Osher. “Uniformly high-order accurate nonoscillatory schemes. I.” SIAM Journal on Numerical Analysis 24, no. 2 (1987): 279-309.

Harten, Amiram, James M. Hyman, Peter D. Lax, and Barbara Keyfitz. “On finite-difference approximations and entropy conditions for shocks.” Communications on Pure and Applied Mathematics 29, no. 3 (1976): 297-322.

 

10 Better Things for Scientific Computing to focus on in 2018

12 Friday Jan 2018

Posted by Bill Rider in Uncategorized

≈ Leave a comment

What I cannot create, I do not understand.

– Richard Feynman

We are in deep danger of relying upon science and associated software we do not understand because we have so broadly stopped the active creation of knowledge. I open with one of my favorite quotes by the great physicist Richard Feynman, who also wrote about Cargo Cult Science (https://en.wikipedia.org/wiki/Cargo_cult_science). It is a bold, but warranted, assertion to note that much of our science work today is taking on the character of Cargo Cult Science. We are not all the way there, but we have moved a long way toward taking on all of the characteristics of this pathology. In this assertion money is the “cargo” that pseudo-scientific processes are chasing. It is no exaggeration to say that getting funding for science has replaced the conduct and value of that science today. This is broadly true, and particularly true in scientific computing where getting something funded has replaced funding what is needed or wise. The truth of the benefit of pursuing computer power above all else is decided upon a priori. The belief was that this sort of program could “make it rain” and produce funding because this sort of marketing had in the past. All results in the program must bow to this maxim, and support its premise. All evidence to the contrary is rejected because it is politically incorrect and threatens the attainment of the cargo, the funding, the money. A large part of this utterly rotten core of modern science is the ascendency of the science manager as the apex of the enterprise. The accomplished scientist and expert is now merely a useful and necessary detail; the manager reigns as the peak of achievement.

The first principle is that you must not fool yourself — and you are the easiest person to fool.

We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

– Richard Feynman

If one looks at the scientific computing landscape today, one sees a single force for progress: the creation of a new, more powerful supercomputer that is much faster than anything we have today. The United States, Europe and China are all pursuing this path for advancing scientific computing. It is a continuation of a path we have pursued for the last 25 years, but our future is not remotely like the last 25 years. This approach to progress can be explained simply and marketed to the naïve and untechnical. This works because our National leadership is increasingly naïve, witless and obsessively anti-intellectual, lacking any technical sophistication. We are in the midst of a tide of low-information leadership who are swayed by sweet-sounding bullshit far more easily than hard-nosed facts.

The farther backward you can look, the farther forward you are likely to see.

― Winston S. Churchill

In this putrid environment, faster computers seem an obvious benefit to science. They are a benefit and a pathway to progress; this is utterly undeniable. Unfortunately, it is an expensive and inefficient path to progress, and an incredibly bad investment in comparison to the alternatives. The numerous problems with the exascale program are subtle, nuanced, highly technical and pathological. As I’ve pointed out before, the modern age is no place for subtlety or nuance; we live in an age of brutish simplicity where bullshit reigns and facts are optional. In such an age, exascale is an exemplar: it is a brutally simple approach tailor-made for the ignorant and witless. If one is willing to cast away the cloak of ignorance and embrace subtlety and nuance, a host of investments can be described that would benefit scientific computing vastly more than the current program. If we followed a better balance of research, computing could contribute to science far more greatly and scale far greater heights than the current path provides.

Applications that matter to something big would create a great deal of this focus naturally. The demands of doing something real and consequential would breed a necessity to focus progress in an organic way. Last week I opined that such big things are simply not present today in science or society’s broader narrative. Society is doing nothing big or aspirational or challenging to drive progress forward with genuine purpose. To be more pointed, the push for exascale is not big at all; it is rather an exemplar of the lack of vision and consequence. There is a bit of a chicken-and-egg argument to all this. The bottom line is a general lack of underlying and defining purpose to our efforts in computing. Exascale is what we do when we want to market something as “feeling” big, when it is actually doing something small and inconsequential.

Those who do not move, do not notice their chains.

― Rosa Luxemburg

How can I say such a thing?

In a nutshell, computing speed is one of the least efficient and least effective ways to improve computational science. It has only been an enabler because computing speed came for free with Moore’s law for most of the last half century. That free lunch is over and past, yet we willfully ignore this reality (http://herbsutter.com/welcome-to-the-jungle/ ). Even with Moore’s law fully in effect, it was never the leading contributor to progress; progress was paced by numerical methods and algorithmic scaling. Moreover, computing speed cannot fix modeling that is wrong (methods and algorithms don’t fix this either). If a model is wrong, the wrong answer is simply computed much faster. Of course, we know that every model is wrong, and the utility of any model is determined via V&V. Issues associated with the use of computing, naïve code users, the loss of expertise, and understanding are simply overlooked, or worse yet made more intractable due to inattention.

Each of these advances has been mentioned before in the guise of a full blog post, but it is useful to put things together to see the wealth of unused opportunity.

80% of results come from 20% of effort/time

― Vilfredo Pareto

  1. Modernizing modeling ought to be a constant and consistent emphasis in science. Computational science is no different. For some reason, the modeling advances have simply stopped. Our basic models of reality are increasingly fixed and immutable, and ever less fit for future purpose. The models of reality have become embedded in computer codes, and ultimately central to the codes’ structure in numerous respects. As such we start to embed a framework for modeling whose foundation becomes invariant. We can’t change the model without developing an entirely different code. We reduce our modeling to submodels and closure of existing models while staying within a fundamental modeling framework. This is another area where progress is phenomenally risky to approach and substantially prone to failures and misguided efforts. Without the failure, the ability to learn and produce new and improved models is virtually impossible. https://williamjrider.wordpress.com/2015/02/02/why-havent-models-of-reality-changed-more/, https://williamjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/ , https://williamjrider.wordpress.com/2017/07/07/good-validation-practices-are-our-greatest-opportunity-to-advance-modeling-and-simulation/
  2. Modernizing methods is not happening. Since methods are one of the best ways to improve the efficiency and effective solution of models, progress is harmed in a manner that cannot be easily recovered by other means. Usually when a model is decided upon, a method is used to solve the model numerically. The numerical method is only slightly less code specific and invariant than the model itself. By virtue of this character, the basic numerical method for a model becomes indistinguishable from the code. If we preserve the code base, we preserve old methods, which means no progress. We are stuck using relatively low-order methods with crude stability mechanisms. The ability to use high-order methods with enhanced accuracy and efficiency is not advancing. The research in numerical methods and the practical application of numerical methods is becoming increasingly divorced from one another. The gap has grown into a chasm, and numerical methods research is losing relevance. Part of the problem is related to the standards of success where methods research allows success to be found on easier problems rather than keeping the problem difficulty fixed. This is yet another place where the inability to accept failure as a necessary element (or even fuel) for success is fatal. https://williamjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://williamjrider.wordpress.com/2016/07/25/a-more-robust-less-fragile-stability-for-numerical-methods/,
  3. Algorithmic scaling is the most incredible thing we could achieve in terms of computational performance. The ability to change the scaling exponent on how much work it takes to solve a problem can have a magical impact. Linear algebra is the posterchild for this effect. A breakthrough in scaling can make the impossible problem possible, and even routine to solve. The classical naïve scaling for matrix inversion has the work scaling with the cube of the problem size. Even small problems quickly become utterly intractable and almost no amount of computer power can fix this. Change the scaling to quadratic and new problems suddenly become routine; change the scaling to linear and the problems that can be tackled routinely were unimaginable before. We are stuck at linear, although some fields are starting to see sublinear algorithms. Could these breakthroughs be more common and useful? If they could, the impact on computational science would overwhelm the capacity of exascale easily (a small back-of-the-envelope sketch of this point follows this list). Today we aren’t even trying to make these advances. In my view, such work is generically risky and prone to failure, and failure is something that has become intolerable, thus success is sacrificed. https://williamjrider.wordpress.com/2015/05/29/focusing-on-the-right-scaling-is-essential/
  4. Today supercomputing is completely at odds with the commercial industry. After decades of first pacing advances in computing hardware, then riding along with increases in computing power, supercomputing has become separate. The separation occurred when Moore’s law died at the chip level (in about 2007). The supercomputing world has become increasingly desperate to continue the free lunch, and tied to an outdated model for delivering results. Basically, supercomputing is still tied to the mainframe model of computing that died in the business World long ago. Supercomputing has failed to embrace modern computing with its pervasive and multiscale nature moving all the way from mobile to cloud. https://williamjrider.wordpress.com/2017/12/15/scientific-computings-future-is-mobile-adaptive-flexible-and-small/
  5. Verification & validation – If the scientific computing efforts are to be real scientific endeavors, V&V is essential. Computational modeling is still modeling and comparison with experiment is the gold standard for modeling, but with computational work the comparison has numerous technical details needing serious attention.  In a very complete way V&V is the scientific method in action within the context of modeling and simulation. This energizes a top to bottom integration of scientific activities and essential feedback up and down this chain. The process produces actionable evidence of how progress is being made and where the bottlenecks to progress exist. The entirety of the V&V work provides a deep technical discourse on the breadth of computational science. The whole of computational science can be improved by its proper application. By weakly supporting V&V, current efforts are cutting themselves off from the integration of the full scientific enterprise and impact into the use of computation scientifically. https://williamjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/
  6. Expansive uncertainty quantification – too many uncertainties are ignored rather than considered and addressed. Uncertainty is a big part of V&V, a genuinely hot topic in computational circles, and practiced quite incompletely. Many view uncertainty quantification as only a small set of activities that address only a small piece of the uncertainty question. Too much benefit is gained by simply ignoring a real uncertainty, because the value of zero that is implicitly assumed is never challenged. This is exacerbated significantly by a half-funded and deemphasized V&V effort in scientific computing. Significant progress was made several decades ago, but the signs now point to regression. The result of this often willful ignorance is a lessening of the impact of computing and a limiting of its true benefits. https://williamjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/
  7. Data integration and analysis – one of the latest hot topics is big data and data analysis. The internet and sensors are creating massive amounts of data, and their use is a huge technical problem. The big data issue is looking for significant and actionable understanding from the oceans of data. A related and perhaps more difficult problem is small data, where there isn’t enough data, or not enough of the data you want. Lots of science and engineering is data limited to a degree that scientific understanding is limited. Modeling and simulation offers a vehicle to augment this data and fill in the gaps. Doing this in a manner that is credible will be a huge challenge. The ways forward with credibility use V&V and intensive uncertainty quantification. The proper use of codes and the role of calibration also become critical to success. https://williamjrider.wordpress.com/2016/07/10/10-big-things-for-the-future-of-computational-science/
  8. Multidisciplinary, multiscale science – one of the hot topics a quarter century ago was better multiphysics methods to replace the pervasive use of operator splitting for complex codes. This effort has utterly failed. We have made very little progress forward. Part of the issue is the inability to produce computational algorithms that are efficient enough to compete. A fully coupled method ends up being so expensive that any accuracy increases from the improved coupling are rendered ineffective. A second and perhaps more powerful reason for the lack of progress is the computer codes. Old computer codes are still being used, and most of them use operator splitting. Back in the 1990s a big deal was made regarding replacing legacy codes with new codes. The codes developed then are still in use, and no one is replacing them. The methods in these old codes are still being used, and now we are told that the codes need to be preserved. The codes, the models, the methods and the algorithms all come along for the ride. We end up having no practical route to advancing the methods. https://williamjrider.wordpress.com/2016/09/16/is-coupled-or-unsplit-always-better-than-operator-split/
  9. Complete code refresh – we have produced and are now maintaining a new generation of legacy codes. A code is a store of vast amounts of knowledge in modeling, numerical methods, algorithms, computer science and problem solving. When we fail to replace codes, we fail to replace knowledge. The knowledge comes directly from those who write the code and create the ability to solve useful problems with that code. Much of the methodology for problem solving is complex and problem specific. Ultimately a useful code becomes something that many people are deeply invested in. In addition, the people who originally write the code move on, taking their expertise, history and knowledge with them. The code becomes an artifact for this knowledge, but it is also a deeply imperfect reflection of the knowledge. The code usually contains some techniques that are magical, and unexplained. These magic bits of code are often essential for success. If they get changed, the code ceases to be useful. The result of this process is a deep loss of the expertise and knowledge that arises from the process of creating a code that can solve real problems. If a legacy code continues to be used, it also acts to block progress on all the things it contains, starting with the model and its fundamental assumptions. As a result, progress stops because even when there are research advances, they have no practical outlet. This is where we are today. https://williamjrider.wordpress.com/2015/10/30/preserve-the-code-base-is-an-awful-reason-for-anything/ https://williamjrider.wordpress.com/2016/01/01/are-we-really-modernizing-our-codes/ https://williamjrider.wordpress.com/2016/01/14/a-response-to-criticism-are-we-modernizing-our-codes/ https://williamjrider.wordpress.com/2014/03/20/legacy-code-is-terrible-in-more-ways-than-advertised/
  10. Democratization of expertise – the manner in which codes are applied has a very large impact on solutions. The overall process is often called a workflow, encapsulating activities starting with problem conception, then meshing, modeling choices, code input, code execution, data analysis, and visualization. One of the problems that has arisen is the use of codes by non-experts. Increasingly, code users are simply not sophisticated and treat codes like black boxes. Many refer to this as the democratization of the simulation capability, which is generally beneficial. On the other hand, we increasingly see calculations conducted by novices who are generally ignorant of vast swaths of the underlying science. This characteristic is keenly related to a lack of V&V focus and loose standards of acceptance for calculations. Calibration is becoming more prevalent again, and distinctions between calibration and validation are vanishing anew. The creation of broadly available simulation tools must be coupled to first-rate practices and appropriate professional education. In both of these veins the current trends are completely in the wrong direction. V&V practices are in decline and recession. Professional education is systematically getting worse as the educational mission of universities is attacked and diminished, along with the role of elites in society. https://williamjrider.wordpress.com/2016/12/02/we-are-ignoring-the-greatest-needs-opportunities-for-improving-computational-science/
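
As mentioned in item 3 above, here is a small back-of-the-envelope Python sketch (my own illustration, with arbitrary round numbers for the work budget and speedup) of why the scaling exponent matters so much more than raw machine speed: a 1000x faster machine buys only a 10x larger problem under cubic scaling, but a 1000x larger problem under linear scaling.

```python
# Largest problem size n reachable within a fixed work budget, before and after a
# hypothetical 1000x increase in machine speed, for different algorithmic scalings.
budget = 1.0e15      # arbitrary work budget (floating point operations)
speedup = 1.0e3      # a hypothetical 1000x faster machine

for name, p in [("O(n^3) cubic", 3), ("O(n^2) quadratic", 2), ("O(n) linear", 1)]:
    n_now = budget ** (1.0 / p)
    n_fast = (budget * speedup) ** (1.0 / p)
    print(f"{name:16s}: n ~ {n_now:.1e} today, {n_fast:.1e} after speedup "
          f"({n_fast / n_now:.0f}x larger problem)")
```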


One of the key aspects of this discussion is recognizing that these activities are all present to some small degree in exascale, but all of them are subcritical. The program basically starves these valuable activities, supporting them only in a fashion that creates a “zombie-like” existence. As a result, the program is turning its back on a host of valuable avenues for progress that could make an exascale computer far more useful. Our present path has genuine utility, but it represents an immense opportunity cost when you factor in what could have been accomplished with better leadership, vision and technical sophistication. The way we approach science more broadly is permeated with these inefficiencies, meaning our increasingly paltry investments in science are further undermined by our pathetic execution. At the deepest level, our broader societal problems revolving around trust, expertise, scandal and our taste for failure may doom any project unless they are addressed. For example, the issues related to the preservation of code bases (i.e., creating new legacy codes) are creating deep problems with advancing on the essential fronts of modeling, methods and algorithms. Everything is woven together into a tapestry whose couplings cannot be ignored. This is exactly the sort of subtlety and nuance our current time finds utterly incomprehensible.

Postscript:

It is sometimes an appropriate response to reality to go insane.

― Philip K. Dick

Healey’s First Law Of Holes: When in one, stop digging.

― Denis Healey

Last week I tried to envision a better path forward for scientific computing. Unfortunately, a truly better path flows invariably through a better path for science itself and for the Nation as a whole. Ultimately scientific computing, and science more broadly, depends on the health of society in the broadest sense. It also depends on leadership and courage, two other attributes we are lacking in almost every respect. Our society is not well, and the problems we are confronting are deep. I believe that historians will look back on 2016-2018, and perhaps longer, as the darkest period in American history since the Civil War. We can’t build anything great when the Nation is tearing itself apart. I hope and pray that this will be resolved before we plunge deeper into the abyss in which we find ourselves. We see the forces opposed to knowledge, progress and reason emboldened and running amok. The Nation is presently moving backward and embracing a deeply disturbing and abhorrent philosophy. In such an environment science cannot flourish; it can only survive. We all hope the darkness will lift and we can again move forward toward a better future, one with purpose and meaning, where science can be a force for the betterment of society as a whole.

Everything passes, but nothing entirely goes away.

― Jenny Diski

Toward a More Useful and Impactful Scientific Computing in 2018?

05 Friday Jan 2018

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well.

― Ralph Waldo Emerson

It would really be great to start 2018 feeling good about the work I do. Useful work that impacts important things would go a long way toward achieving this. I’ve put some thought into what might constitute work with these properties. This has two parts: what work would be useful and impactful in general, and what would be important to contribute to. A necessary subtext to this conversation is the conclusion that most of the work we are doing in scientific computing today is neither useful nor impactful, and nothing important is at stake. That alone is a rather bold assertion. Simply put, as a Nation and a society we are not doing anything aspirational, nothing big. This shows up in the lack of substance in the work we are paid to pursue. More deeply, I believe that if we did something big and aspirational, the utility and impact of our work would simply sort itself out as part of a natural order.

The march of science in the 20th Century was deeply shaped by international events: two World Wars and a Cold (non) War that spurred National interest in supporting science and technology. The twin projects of the atom bomb and the nuclear arms race, along with space exploration, drove the creation of much of the science and technology we have today. These conflicts steeled resolve, provided purpose and granted the resources needed for success. They were important enough that efforts were earnest. Risks were taken because risk is necessary for achievement. Today we don’t take risks because nothing important is at stake. We can basically fake results and market progress where little or none exists. Since nothing is really that essential, bullshit reigns supreme.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

One of the keys to these conflicts was the presence of a worthy adversary to steel ourselves for the push forward. Both Nazi Germany and Soviet Russia were worthy enemies whose competence meant putting our best foot forward. In reality and rhetorically, we lack such an adversary today to push us. We needed to fully commit and faithfully execute our endeavors to achieve victory against these enemies. These opponents had the clear capacity to destroy the United States and the West if the resistance was not real. Ironically, the Soviets were ultimately defeated by bullshit. The Strategic Defense Initiative, or Star Wars, bankrupted the Soviets. It was complete bullshit and never had a chance to succeed. This was a brutal harbinger of today’s World where reality is optional and marketing is the coin of the realm. Today American power seems unassailable. This is partially true and partially over-confidence. We are not on our game at all, and far too much of our power is based on bullshit. As a result, we can basically just pretend to try, and not execute anything with substance and competence. This is where we are today: we are doing nothing important and wasting lots of time and money in the process.

How do you defeat terrorism? Don’t be terrorized.

― Salman Rushdie

Again, I freely admit that this is a bold assertion. In scientific computing, we have a National exascale program that underpins National security and economic interests. It contributes to all of these things in massive ways, at least rhetorically. This support for National goals is pure marketing, or less generously, absolute bullshit. It simply trots out a bunch of tired sales pitches for scientific computing that lack any soul and increasingly lack substance. The Nation has no large objectives to support; the entire system is drifting along on auto-pilot. It is brimming with over-confidence and a feeling of superiority that only needs a worthy opponent to expose it. We have no enemies that are remotely worthy. We have created some chicken-shit paper tigers like Iran, North Korea and the amorphous and largely toothless Islamic fundamentalism. None of these enemies is even the remotest threat to the United States, or the West in general. If they were a worthy threat, then we would be in awful shape, far worse than we actually are. Terrorism is only as much of a threat as we make it. We have stoked fear and let ourselves be terrorized because it is useful for the defense-intelligence industrial complex. It has put trillions of dollars into their coffers and done little or nothing to build a future. We could simply defeat these enemies by refusing to be terrorized. Some courage and resilience as a Nation would be sufficient to render these pathetic enemies utterly impotent. The greatest damage and threat from these enemies is our response to them, not the actual carnage. Our “leaders” are using them to spread fear among the populace to further their own agendas.

The result of the current model is a research establishment that only goes through the motions and accomplishes little or nothing. We make lots of noise and produce little substance. Our Nation deeply needs a greater purpose, and there are plenty of worthier National goals. If war-making is needed, Russia and China are still worthy adversaries. For some reason, we have chosen to capitulate to Putin’s Russia simply because it is an ally against the non-viable threat of Islamic fundamentalism. This is a completely insane choice that is only rhetorically useful. If we want peaceful goals, there are challenges aplenty. Climate change and weather are worthy problems to tackle, requiring both scientific understanding and societal transformation to conquer. Creating clean and renewable energy that does not produce horrible environmental side-effects remains unsolved. Meeting the international need for food and prosperity for mankind is always there. Scientific exploration, and particularly space, remains an unconquered frontier. Medicine and genetics offer new vistas for scientific exploration. All of these areas could transform the Nation in broad ways, socially and economically. All of them could meet broad societal needs. More to the point of my post, all of them need scientific computing in one form or another to fully succeed. Computing always works best as a tool employed to help achieve objectives in the real World. Real-World problems provide the constraints and objectives that spur innovation and keep the enterprise honest.

Reality is that which, when you stop believing in it, doesn’t go away.

― Philip K. Dick

Instead, our scientific computing is being applied as a shallow marketing ploy to shore up a vacuous program. Nothing really important or impactful is at stake. The applications for computing are mostly make-believe and amount to nothing of significance. The marketing will tell you otherwise, but the lack of gravity in the work is clear and poisons it. The result of this lack of gravity is phony goals and objectives that have the look and feel of impact but contribute nothing toward objective reality. This lack of contribution comes from the deeper malaise of purpose as a Nation and of science’s role as an engine of progress. With little or nothing at stake, the tools used for success suffer, and scientific computing is no different. The standards of success simply are not real and lack teeth. Even stockpile stewardship is drifting into the realm of bullshit. It started as a worthy program, but over time it has been allowed to lose its substance. Political and financial goals have replaced science and fact, and the goals of the program are losing their connection to objective reality.

Scientific computing came to maturity as an important supporting player for large enterprises. It was born in the Cold War as a key tool for the science and engineering supporting defense. Scientific computing spread from this base toward more general science, and more recently into broad use by business and society as a whole. The kernel from which computing sprang was an interwoven set of large National objectives that provided the technical foundation powering our economy today. Computing was a key contributing player in these endeavors, which also supported a broad phalanx of other technologies and scientific explorations that formed the basis of modernizing the world. Such over-arching goals are breathtakingly missing today. We lack any vision of a better future and limitless progress.

If we could marshal our efforts toward some worthy ends, what would we work on?

We would still be chasing faster computers, but faster computers would not be the primary focus. We would focus on using computing to solve problems that are important. We would focus on making computers that are useful first and foremost. We would want computers that are faster as long as they enabled progress on problem solving. As a result, efforts would be streamlined toward utility. We would not throw vast amounts of effort into making computers faster just to make them faster (this is what is happening today; there is no rhyme or reason to exascale other than “faster is better, duh!”). Utility means that we would honestly look at what is limiting problem solving and put our efforts into removing those limits. The effect of this dose of reality on our current efforts would be stunning; we would see a wholesale change in emphasis and focus away from hardware. Computing hardware would take its proper role as an important tool for scientific computing and no longer be the driving force. The fact that hardware is a driving force for scientific computing is one of the clearest indicators of how unhealthy the field is today.

Thinking something does not make it true. Wanting something does not make it real.

― Michelle Hodkin

If scientific computing were playing its role in a healthy National enterprise, the focus would be entirely different. Invariably we would see a very strong emphasis on modeling. In almost every serious endeavor that uses computing to get real design and analysis results, the physical modeling is the greatest limiting factor. A faster computer is always welcome, but a faster computer never fixes a faulty model. This maxim seems to be utterly and completely ignored in the current scientific computing narrative. The most effective way to improve modeling is also different from the current emphasis: better numerical methods and algorithms provide faster and more accurate solutions to models than computing hardware does. This is another area where progress is completely stalled.
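
As a back-of-the-envelope illustration of that claim, here is a toy example of my own (a hypothetical test problem, not a benchmark from any real code): a first-order and a fourth-order integrator applied to y' = -y. The better algorithm buys factors in cost-per-accuracy that no realistic hardware speedup can match.

# Hypothetical illustration: forward Euler (first order) vs. classical RK4
# (fourth order) on y' = -y, y(0) = 1, integrated to T = 5.
import math

def euler(n, T=5.0):
    dt, y = T / n, 1.0
    for _ in range(n):
        y += dt * (-y)                        # first-order update
    return y

def rk4(n, T=5.0):
    dt, y = T / n, 1.0
    for _ in range(n):
        k1 = -y
        k2 = -(y + 0.5 * dt * k1)
        k3 = -(y + 0.5 * dt * k2)
        k4 = -(y + dt * k3)
        y += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0   # fourth-order update
    return y

exact = math.exp(-5.0)
for n in (10, 100, 1000, 10000):
    print(f"n={n:6d}  Euler err={abs(euler(n) - exact):.2e}  RK4 err={abs(rk4(n) - exact):.2e}")
# Euler needs thousands of steps to approach the error RK4 reaches in about ten:
# the better algorithm, not faster hardware, supplies the big factors.

The same arithmetic routed through a better algorithm is worth several orders of magnitude of “faster computer,” which is exactly why stalled progress in methods and algorithms is so costly.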

The current computing focus is simply porting old codes to new computers, a process that keeps old models, methods and algorithms in place. This is one of the most corrosive elements in the current mix. The porting of old codes is the utter abdication of intellectual ownership. These old codes are scientific dinosaurs that freeze antiquated models, methods and algorithms in place while squashing progress. Worse yet, the skillsets necessary for improving the most valuable and important parts of modeling and simulation are allowed to languish. This is worse than simply choosing a less efficient road; this is going backwards. When we need to turn our attention to serious, real work, our scientists will not be ready. These choices are dooming an entire generation that could have been making breakthroughs to become mere caretakers. To be proper stewards of our science we need to write new codes containing new models using new methods and algorithms. Porting codes turns our scientists into mindless monks transcribing sacred texts without any depth of understanding. It is a recipe for transforming our science into magic. It is the recipe for defeat and the passage away from the greatness we once had.

Without Your Opponent, You are no Victor.

― Anajo Black
