
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: December 2017

Saying “NO!” is the key to success

Friday, 29 December 2017

Posted by Bill Rider in Uncategorized


Things which matter most must never be at the mercy of things which matter least.

― Johann Wolfgang von Goethe

My work day is full of useless bullshit. There is so much bullshit that it has choked out the room for inspiration and value. We are not so much managed as controlled. This control comes from a fundamental distrust of each other so deep that any independent idea is viewed as dangerous. This realization has come upon me in the past few years. It has also occurred to me that this could simply be a mid-life crisis manifesting itself, but the evidence seems to indicate that it is something more significant (look at the bigger picture of the constant crisis my Nation is in). My mid-life attitudes are simply much less tolerant of time-wasting activities with little or no redeeming value. You realize that your time and energy are limited; why waste them on useless things?

You and everyone you know are going to be dead soon. And in the short amount of time between here and there, you have a limited amount of fucks to give. Very few, in fact. And if you go around giving a fuck about everything and everyone without conscious thought or choice—well, then you’re going to get fucked.

― Mark Manson

I read a book that had a big impact on my thinking, “The Subtle Art of Not Giving a Fuck” by Mark Manson. In a nutshell, the book says that you have a finite number of fucks to give in life and you should optimize your life by mindfully not giving a fuck about unimportant things. This gives you the time and energy to actually give a fuck about things that actually matter. The book isn’t about not caring; it is about caring about the right things and dismissing the wrong things. What I realized is that increasingly my work isn’t competing for my fucks; they just assume that I will spend my limited fucks on complete bullshit out of duty. It is actually extremely disrespectful of me and my limited time and effort. One conclusion is that the “bosses” (the Lab, the Department of Energy) do not give enough of a fuck about me to treat my limited time and energy with respect and make sure my fucks actually matter.

Maturity is what happens when one learns to only give a fuck about what’s truly fuckworthy.

― Mark Manson

I’ve realized recently that a sense of being inspired has departed from work. I’ve felt this building for years with the feeling that my work is useful and important ebbing away. I’ve been blessed for much of my career with work that felt important and useful where an important component of the product was my own added creativity. The work included a distinct element of my own talents and ideas in whatever was produced.


Superficially speaking, the elements of inspiration seem to be present: work with meaning and importance, along with a sense of substantial freedom. As I implied, these elements are superficial; the reality is that each of these pieces has eroded away, and it is useful to explore how this has happened. The job I have would be a dream to most people, but conditions are degrading. It isn’t just my job; most Americans are experiencing worsening conditions. The exception is the top of the management class, the executives. This is a mirror of broader societal inequalities, logically expressed in the working environment. The key is recognizing that my job used to be much better, and that is something worth exploring in some depth.

At one level, I should be in the midst of a glorious time to be working in computational science and high-performance computing. We have a massive National program focused on achieving “exascale,” or at the very least a great advance in computing power. Looking more closely, we can see deep problems that produce an inspiration gap. On the one hand, the technical objectives of the program are obsessively focused on hardware as the measure of progress. We have been on this hardware path for 25 years, producing progress, but no transformation in science has actually occurred (the powers that be will say it has, but the truth is that it hasn’t). Our computations are still not predictive, and the hardware is not the limiting aspect of computational science. Worse yet, the opportunity for massive hardware advances has passed, and advancing now is fraught with difficulties and roadblocks and will be immensely costly. Aside from hardware, the program is largely focused on low-level software for porting old codes, methods and models (note: the things being ported and not invested in are the actual science!). It is not focused on the more limiting aspects of predictive modeling because they are subtle and risky to work on. They cannot be managed like a construction project using off-the-shelf management practices better suited for low-wage workers, and unsuitable for scientists. The hardware path is superficial, easy to explain to the novice, and managed as a project much like building a bridge or road.

This gets to the second problem with the current programs, how they are managed. Science cannot be managed like a big construction project, at least not successfully. The result of this management model is a stifling level of micromanagement. Our management model is defined by overwhelming suspicion and lack of trust resulting in massive inefficiency. The reporting requirements for this mode of management are massive and without value except to bean-counters. At the same time, there is no appetite for risk, and no capacity to tolerate failure. As a result, the entire program loses an ability to inspire, or reach for greatness.

[Images: the Trinity test fireball at 16 ms; a Los Alamos colloquium]

If the Apollo Program had been managed in this fashion, we would have never made it to the Moon while spending vastly greater sums of money. If we had managed the Manhattan Project in this way, we would have failed to create the atomic bomb. Without risk, there is no reward. There is a huge amount of resource and effort wasted. We do not lack money as much as we lack vision, inspiration and competent management. This is not to say that the United States does not have an issue with investing in science and technology; we do. The current level of commitment to science and technology will assure that some other nation becomes the global leader in science and technology. A compounding issue on top of the lack of investment is how appallingly inefficient our investment is because of how science is managed today. A complementary compounding element is the lack of trust in the scientists and engineers. Without trust, no one will take any risk, and without taking risks nothing great will ever be achieved. If we don’t solve these problems, we will not produce greatness, plain and simple; we will create decline and decay into mediocrity.

But until a person can say deeply and honestly, “I am what I am today because of the choices I made yesterday,” that person cannot say, “I choose otherwise.”

― Stephen R. Covey

[Images: the national laboratories, including ORNL and an aerial view of Los Alamos]

None of these problems suddenly appeared. They are the consequence of decades of evolution toward the current completely dysfunctional management approach. Once-great Laboratories have been brought to heel with a combination of constraints, regulations and money. There is more than enough money and people to accomplish massive things. The problem is that the constraints and regulatory environment have destroyed any chance for achievement. With each passing year our scientific programs sound more expansive, but are less capable of achieving anything of substance. Our management approach is undermining achievement at every turn. The focus of the management is not producing results, but producing the appearance of success without regard for reality. The workforce must be compliant, and never make any mistakes. The best way to avoid mistakes is low-balling results. You always aim low to avoid the possibility of failing. Each year we aim a little lower, and achieve a little less. This has produced a steady erosion of capability much like an interest-bearing account, but in reverse.

If we look at work, it might seem that an inspired workforce would be a benefit worth creating. People would work hard and create wonderful things because of the depth of their commitment to a deeper purpose. An employer would benefit mightily from such an environment, and the employees could flourish, brimming with satisfaction and growth. With all these benefits, we should expect the workplace to naturally create the conditions for inspiration. Yet this is not happening; the conditions are the complete opposite. The reason is that inspired employees are not entirely controlled. Creative people do things that are unexpected and unplanned. The job of managing a workplace like this is much harder. In addition, mistakes and bad things happen too. Failure and mistakes are an inevitable consequence of hard-working, inspired people. This is the thing that our workplaces cannot tolerate. The lack of control and unintended consequences are unacceptable. Fundamentally this stems from a complete lack of trust. Our employers do not trust their employees at all. In turn, the employees do not trust the workplace. It is a vicious cycle that drags inspiration under and smothers it. The entire environment is overflowing with micromanagement, control, suspicion and doubt.

In the end that was the choice you made, and it doesn’t matter how hard it was to make it. It matters that you did.

― Cassandra Clare

How do we change it?

One clear way of changing this is giving the employees more control over their work. It has become very clear to me that we have little or no power to make choices at work. One of the clearest ways of making a choice is being given the option to say “NO”. Many articles are written about the power of saying NO to things because it makes your “YES” more powerful. The problem is that we can’t say NO to so many things. I can’t begin to elaborate on all the functionally useless things that I don’t have the option of skipping. I spend a great deal of effort on mandatory meetings, training, and reporting that has no value whatsoever. None of it is optional, and most of it is completely useless. Each of these useless activities drains away energy from something useful. All of the useless things I do are related to a deep lack of trust in me and my fellow scientists.

Let’s take the endless reporting and tracking of work as a key example. There is nothing wrong with planning a project and getting updates on progress. This is not what is happening today. We are seeing a system that does not trust its employees and needs to continually look over their shoulders. A big part of the problem is that the employees are completely uninspired because the programs they work on are terrible. The people see very little of themselves in the work, or much purpose and meaning in the work. Rather than make the work something deeper and more collaborative, the employers increase the micromanagement and control. A big part of the lack of trust is the reporting. Somehow the whole concept of quarterly progress used for business has become part of science, creating immense damage. Lately quarterly progress isn’t enough, and we’ve moved to monthly reporting. All of this says, “we don’t trust you,” “we need to watch you closely” and “don’t fuck up”.

The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….

― Noam Chomsky

If we can’t say NO to all this useless stuff, we can’t say YES to things either. My work and time budget is completely booked up with non-optional things that I should say NO to. They are largely useless and produce no value. Because I can’t say NO, I can’t say YES to something better. My employer is sending a message to me with very clear emphasis: we don’t trust you to make decisions. Your ideas are not worth working on. You are expected to implement other people’s ideas no matter how bad they are. You have no ability to steer the ideas to be better. Your expertise has absolutely no value. A huge part of this problem is the ascendancy of the management class as the core of organizational value. We are living in the era of the manager; the employee is a cog and not valued. Organizations voice platitudes toward the employees, but they are hollow. The actions of the organization spell out their true intent. Employees are not to be trusted; they are to be controlled and they need to do what they are told to do. Inspired employees would do things that are not intended, and take organizations in new directions, focused on new things. This would mean losing control and changing plans. More importantly, the value of the organization would move away from the managers and toward the employees. Managers are much happier with employees that are “seen and not heard”.

If something is not a “hell, YEAH!”, then it’s a “no!”

― James Altucher

What should I be saying YES to?

If I could say YES, then I might be able to put my focus into useful, inspired and risky endeavors. I could produce work that might go in directions that I can’t anticipate or predict. These risky ideas might be complete failures. Even in failure, I could learn invaluable lessons and grow my knowledge and expertise. Being risky, these ideas might produce something amazing and create something of real value. None of these outcomes are a sure thing. All of these characteristics are unthinkable today. Our managers want a sure thing and cannot deal with unpredictable outcomes. The biggest thing our managers cannot tolerate is failure. Failure is impossible to accept and leads to career-limiting consequences. For this reason, inspired risks are impossible to support. As a result, I can’t say NO to anything, no matter how stupid and useless it is. In the process, I see work as an increasingly frustrating waste of my time.

Action expresses priorities.

― Mahatma Gandhi

We all have limits defined by our personal time and effort. Naturally we have 24 hours a day, 7 days a week and 365 days a year, along with our own personal energy budget. If we are managed well, we can expand our abilities and create more. We can be more efficient and work more effectively. If one looks honestly at how we are managed, expanding our abilities and personal growth has almost no priority. Creating an inspiring and exciting place to work is equally low on the list. Given the pathetic level of support for creation and inspiration, attention naturally turns elsewhere. Everyone needs a level of balance in their lives, and we obviously gravitate toward places where a difference can be made.

As Mark Manson writes, we only have so many fucks to give, and my work is doing precious little to earn them. I have always focused on personal growth, and increasingly work resists personal growth instead of resonating with it. It has become quite obvious that being the best “me” is not remotely a priority. The priority at work is to be compliant, take no risks, fail at nothing and help produce marketing material for success and achievement. We aren’t doing great work anymore, but pretend we are. My work could simply be awesome, but that would require giving me the freedom to set priorities, take risks, fail often, learn continually and actually produce wonderful things. If this happened the results would speak for themselves and the marketing would take care of itself. When the Labs I’ve worked at were actually great, this is how it happened. The Labs were great because they achieved great things. The Labs said NO to a lot of things, so they could say YES to the right things. Today, we simply don’t have this freedom.

We are our choices.

― Jean-Paul Sartre

If we could say NO to the bullshit, and give our limited fucks a powerful YES, we might be able to achieve great things. Our Labs could stop trying to convince everyone that they were doing great things and actually do great things. The missing element at work today is trust. If the trust was there we could produce inspiring work that would generate genuine pride and accomplishment. Computing is a wonderful example of these principles in action. Scientific computing became a force in science and engineering by contributing to genuine endeavors toward massive societal goals. Computing helped win the Cold War and put a man on the moon. Weather and climate have been modeled successfully. More broadly, computers have reshaped business and are now massively reshaping society. All of these endeavors had computing contributing to solutions. Computing focused on computers was not the endeavor itself, like it is today. The modern computing emphasis was originally part of a bigger program of using science to support the nuclear stockpile without testing. It was part of a focused scientific enterprise and objective. Today it is a goal unto itself, not moored to anything larger. If we want to progress and advance science, we should focus on great things for society, not superficially put our effort into mere tools.

Most of us spend too much time on what is urgent and not enough time on what is important.

― Stephen R. Covey

Say no to everything, so you can say yes to the one thing.

― Richie Norton

Verification and Validation’s Biggest Hurdle Is Honesty

Friday, 22 December 2017

Posted by Bill Rider in Uncategorized


Better to get hurt by the truth than comforted with a lie.

― Khaled Hosseini

Being honest about one’s shortcomings is incredibly difficult. This is true whether one is looking at oneself or at a computer model. It’s even harder to let someone else be honest with you. This difficulty is the core of many problems with verification and validation (V&V). If done correctly, V&V is a form of radical honesty that many simply cannot tolerate. The reasons are easy to see if our reward systems are considered. Computer modelers desire to get great results on the problems they want to solve. They are rated on their ability to get seemingly high-quality answers (https://williamjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/ ). As a result, there is significant friction with honest V&V assessments, which cast uncertainty and doubt on the quality of results. The tension between good results and honesty will always favor the results. Thus V&V is done poorly to preserve the ability of modelers to believe their results are better than they really are. If we want V&V to be done well, an additional level of emphasis needs to be placed on honesty.

If you do not tell the truth about yourself you cannot tell it about other people.

― Virginia Woolf

V&V is about assessing capability. It is not about getting great answers. This distinction is essential to recognize. V&V is about collecting highly credible evidence about the nature of modeling capability. By its very nature, the credibility of the evidence means that the results are whatever the results happen to be. If the results are good, the evidence will show this persuasively. If the results are poor, the evidence will indicate the quality (https://williamjrider.wordpress.com/2017/09/22/testing-the-limits-of-our-knowledge/ ). The utility of V&V is providing a path to improvement along with evidence to support this path. As such, V&V provides a path and evidence for getting to improved results. The improved results would then be supported by V&V assessments. This entire process is predicated on the honesty of those conducting the work, but the management of these efforts is a problem. Management continually tries to promote great results from modeling. Unless the results are actually great, this promotion provides direction for lower-quality V&V. In the process, honesty and evidence are typically sacrificed.


[Caption: The ASME V&V Standards Committee in Computational Modeling and Simulation provides procedures for assessing and quantifying the accuracy and credibility of computational modeling and simulation. Its standards subcommittees include V&V-10 (Verification and Validation in Computational Solid Mechanics), V&V-20 (Verification and Validation in Computational Fluid Dynamics and Heat Transfer), V&V-30 (Verification and Validation in Computational Simulation of Nuclear System Thermal Fluids Behavior), and V&V-40 (Verification and Validation in Computational Modeling of Medical Devices).]

If we want to do V&V properly, something in this value system needs to change. Fundamentally, honesty and a true understanding of the basis of computational modeling must surpass the desire to show great capability. The trends in the management of science are firmly arrayed against honestly assessing capability. The prevalence of management by press release and a marketing-based sales pitch for science money both act to promote a basic lack of honesty and undermine disclosure of problems. V&V provides firm evidence of what we know, and what we don’t know. The quantitative and qualitative aspects of V&V can produce exceptionally useful evidence of where modeling needs to improve. These characteristics conflict directly with the narrative that modeling has already brought reality to heel. Program after program is sold on the basis that modeling can produce predictions of what will be seen in reality. Computational modeling is seen as an alternative to expensive and dangerous experiments and testing. It can provide reduced costs and cycle times for engineering. All of this can be a real benefit, but the degree of current mastery is seriously oversold.

Doing V&V properly can unmask this deception (I do mean deception, even if the deceivers are largely innocent of outright graft). The deception is more the product of massive amounts of wishful thinking, and harmful groupthink focused on showing good results rather than honest results. Sometimes this means willfully ignoring evidence that does not support the mastery. In other cases, the results are based on heavy-handed calibrations, and the modeling is far from predictive. In the naïve view, the non-predictive modeling will be presented as predictions and hailed as great achievements. Those who manage modeling are largely responsible for this state of affairs. They reward the results that show how good the models are and punish honest assessment. Since V&V is the vehicle for honest assessment, it suffers. Modelers will either avoid V&V entirely, or thwart any effort to apply it properly. Usually the results are given without any firm breakdown of uncertainties, simply asserting that the “agreement is good” or the “agreement is excellent” without any evidentiary basis save plots that display data points and simulation values being “close”.

If you truly have faith in your convictions, then your convictions should be able to stand criticism and testing.

― DaShanne Stokes

This situation can be made better by changing the narrative about what constitutes good results. If we value knowledge and evidence of mastery as objectives instead of predictive power, we tilt the scales toward honesty. One of the clearest invitations to hedge toward dishonesty is the demand of “predictive modeling”. Predictive modeling has become a mantra and sales pitch instead of an objective. Vast sums of money are allotted to purchase computers, and place modeling software on these computers with the promise of prediction. We are told that we can predict how our nuclear weapons work so that we don’t have to test them. The new computer that is a little bit faster is the key to doing this (they always help, but are never the lynchpin). We can predict the effects of human activity on climate to be proactive about stemming its effects. We can predict weather and hurricanes with increasing precision. We can predict all sorts of consequences and effect better designs of our products. All of these predictive capabilities are real, and all have been massively oversold. We have lost our ability to look at challenges as good things and muster the will to overcome them. We need to tilt ourselves to be honest about how predictive we are, and understand where our efforts can make modeling better. Just as important we need to unveil the real limits on our ability to predict.


A large part of the conduct of V&V is unmasking the detailed nature of uncertainty. Some of this uncertainty comes from our lack of knowledge of nature, or flaws in our fundamental models. Other uncertainty is simply intrinsic to our reality; these are phenomena that are variable even with seemingly identical starting points. Separating these types of uncertainty and defining their magnitudes would be greatly in the service of science. For the uncertainties that we can reduce through greater knowledge, we can array efforts to effect this reduction. This must be coupled to the opportunity for experiment and theory to improve matters. On the other hand, if uncertainty is irreducible, it is important to factor it into decisions and accommodate its presence. By ignoring uncertainty with the default practice of ZERO uncertainty (https://williamjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/ ), we become powerless to assert any authority over it, or to react to it practically.
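
To make the bookkeeping concrete, here is a minimal sketch in the spirit of the ASME V&V 20 approach mentioned above: the comparison error E = S - D is only interpretable against a validation uncertainty built from the numerical, input, and experimental contributions. The numbers and variable names below are illustrative placeholders, not data from any study discussed here. If every contribution is left at the default of zero, the comparison error gets misread as pure model quality.

```python
import math

# Comparison error and validation uncertainty in the spirit of ASME V&V 20:
#   E = S - D,  u_val = sqrt(u_num^2 + u_input^2 + u_D^2)
# All numerical values below are illustrative placeholders, not real data.
S = 103.0        # simulation result
D = 100.0        # experimental measurement
u_num = 1.5      # estimated numerical (discretization) uncertainty
u_input = 2.0    # uncertainty propagated from model inputs
u_D = 1.0        # experimental measurement uncertainty

E = S - D
u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)

print(f"comparison error E = {E:+.2f}")
print(f"validation uncertainty u_val = {u_val:.2f}")
if abs(E) > 2.0 * u_val:
    print("model-form error is clearly resolved above the noise")
else:
    print("model-form error is hidden inside the uncertainties")
```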

In the conduct of predictive science, we should look to uncertainty as one of our primary outcomes. When V&V is conducted with high professional standards, uncertainty is unveiled and its magnitude estimated. With our highly over-promised mantra of predictive modeling enabled by high performance computing, uncertainty is almost always viewed negatively. This creates an environment where willful or casual ignorance of uncertainty is tolerated and even encouraged. Incomplete and haphazard V&V practice becomes accepted because it serves the narrative of predictive science. The truth and actual uncertainty are treated as bad news, and greeted with scorn instead of praise. It is simply so much easier to accept the comfort that the modeling has achieved a level of mastery. This comfort is usually offered without evidence.

The trouble with most of us is that we’d rather be ruined by praise than saved by criticism.

― Norman Vincent Peale

Somehow a different narrative and value system needs to be promoted for science to flourish. A starting point would be a recognition of the value of highly professional V&V work and the desire for completeness and disclosure. A second element of the value system would be valuing progress in science. In keeping with the value on progress would be a recognition that detailed knowledge of uncertainty provides direct and useful evidence to steer science productively. We can also use uncertainty to act proactively in making decisions based on actual predictive power. Furthermore, we may choose not to use modeling at all if the uncertainties are too large to inform decisions. The general support for the march forward of scientific knowledge and capability is greatly aided by V&V. If we have a firm accounting of our current state of knowledge and capability, we can mindfully choose where to put emphasis on progress.

This last point gets at the problems with implementing a more professional V&V practice. If V&V finds that uncertainties are too large, the rational choice may be to not use modeling at all. This runs the risk of being politically incorrect. Programs are sold on predictive modeling, and the money might look like a waste! We might find that the uncertainties from numerical error are much smaller than other uncertainties, and the new super-expensive, super-fast computer will not help make things any better. In other cases, we might find out that the model is not converging toward a (correct) solution. Again, the computer is not going to help. Actual V&V is likely to produce results that require changing programs and investments in reaction. Current management often looks at this as a negative and worries that the feedback will reflect poorly on previous investments. There is a deep-seated lack of trust between the source of the money and the work. The lack of trust is driving a lack of honesty in science. Any money spent on fruitless endeavors is viewed as a potential scandal. The money will simply be withdrawn instead of redirected more productively. No one trusts the scientific process to work effectively. The result is an unwillingness to engage in a frank and accurate dialog about how predictive we actually are.

It’s discouraging to think how many people are shocked by honesty and how few by deceit.

― Noël Coward

It wouldn’t be too much of a stretch to say that technical matters are a minor aspect of improving V&V. This does not make light of, or minimize, the immense technical challenges in conducting V&V. The problem is that the current culture of science is utterly toxic for technical progress. We need a couple of elements to change in the culture of science to make progress. The first one is trust. The lack of trust is pervasive and utterly incapacitating (https://williamjrider.wordpress.com/2013/11/27/trust/, https://williamjrider.wordpress.com/2016/04/01/our-collective-lack-of-trust-and-its-massive-costs/, https://williamjrider.wordpress.com/2014/12/11/trust-and-truth-in-management/ ). Because of the underlying lack of trust, scientists and engineers cannot provide honest results or honest feedback on results. They do not feel safe and secure enough to do either. This is a core element surrounding the issues with peer review (https://williamjrider.wordpress.com/2016/07/16/the-death-of-peer-review/ ). In an environment where there is compromised trust, peer review cannot flourish because honesty is fatal.

Nothing in this world is harder than speaking the truth, nothing easier than flattery.

― Fyodor Dostoyevsky

The second is a value on honesty. Today’s World is full of examples where honesty is punished rather than rewarded. Speaking truth to power is a great way to get fired. Those of us who want to be honest are left in a precarious position: choose safety and security while compromising our core principles, or stay true to our principles and risk everything. Over time, the forces of compromised integrity, marketing and bullshit over substance wear us down. Today the liars and charlatans are winning. Being someone of integrity is painful and overwhelmingly difficult. The system seems to be stacked against honest discourse and disclosure. Of course, honesty and trust are completely coupled. Both need to be supported and rewarded. V&V is simply one area where these trends play out and distort work.

It is both jarring and hopeful that the elements holding science back are evident in the wider world. The new and current political discourse is full of issues that are tied to trust and honesty. The degree to which we lack trust and honesty in the public sphere is completely disheartening. The entire system seems to be spiraling out of control. It does not seem that the system can continue on this path much longer (https://williamjrider.wordpress.com/2017/10/20/our-silence-is-their-real-power/ ). Perhaps we have hit bottom and things will get better. How much worse can things get? The time for things to start getting better has already passed. This is true in the broader public World as well as science. In both cases trust for each other, and a spirit of honesty would go a long way to providing a foundation for progress. The forces of stagnation and opposition to progress have won too much ground.

Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson

 

Nothing is so difficult as not deceiving oneself.

― Ludwig Wittgenstein

 

Scientific Computing’s Future Is Mobile, Adaptive, Flexible and Small

Friday, 15 December 2017

Posted by Bill Rider in Uncategorized


Without deviation from the norm, progress is not possible.

― Frank Zappa

There is something seriously off about working on scientific computing today. Once upon a time it felt like working in the future, where the technology and the work were amazingly advanced and forward-looking. Over the past decade this feeling has changed dramatically. Working in scientific computing is starting to feel worn-out, old and backwards. It has lost a lot of its sheen, and it’s no longer sexy and fresh. If I look back 10 years, everything we then had was top of the line and right at the “bleeding” edge. Now we seem to be living in the past; the current advances driving computing are absent from our work lives. We are slaving away in a totally reactive mode. Scientific computing is staid, immobile and static, where modern computing is dynamic, mobile and adaptive. If I want to step into the modern world, now I have to leave work. Work is a glimpse into the past instead of a window to the future. It is not simply the technology, but the management systems that come along with our approach. We are being left behind, and our leadership seems oblivious to the problem.

For most of the history of computing in the 20th and into the 21st Century, scientific computing was at the forefront of technology. That is starting to change. Even today scientific computing remains exotic in terms of hardware and some aspects of software, but it also feels antiquated and antique. We get to use cutting-edge computer chips and networking hardware that demand we live on the ragged edge technologically. This is only half the story. We also remain firmly entrenched in the “mainframe” era with corporate computing divisions that seem more “Mad Men” and less “Star Trek” than ever. The computers we use to execute our leading-edge scientific investigations and the computing in our offices or our personal lives are diverging at warp speed. It has become hopelessly ironic in many ways. Worse than ironic, the current state of things is unhealthy and lessens the impact of scientific computing on today’s World.

Even worse than the irony is the price this approach is exacting on scientific computing. For example, the computing industry used to beat a path to scientific computing’s door, and now we have to basically bribe the industry to pay attention to us. A fair accounting of the role of government in computing is some combination of a purely niche market and partially pork-barrel spending. Scientific computing used to be a driving force in the industry, and now lies as a cul-de-sac, or even a pocket universe, divorced from the day-to-day reality of computing. Scientific computing is now a tiny and unimportant market to an industry that dominates the modern World. In the process, scientific computing has allowed itself to become disconnected from modernity, and hopelessly imbalanced. Rather than leverage the modern World and its technological wonders, many of which are grounded in information science, it resists and fails to make best use of the opportunity. This robs scientific computing of impact in the broader World, and diminishes the draw of new talent to the field.

It would be great to elaborate on the nature of the opportunities, and the cost of the present imbalances. If one looks at the modern computing industry and its ascension to the top of the economic food chain, two things come to mind: mobile computing – cell phones – and the Internet. Mobile computing made connectivity and access ubiquitous with massive penetration into our lives. Networks and apps began to create new social connections in the real world and lubricated communications between people in a myriad of ways. The Internet became a huge information repository and an engine of commerce, but also an engine of social connection. In short order, the adoption and use of the Internet and computing in the broader human World overtook and surpassed the use by scientists and business. Where once scientists used and knew computers better than anyone, now the World is full of people for whom computing is far more important than it is for science. Scientists were once in the lead, and now they are behind. Worse yet, science is not adapting to this new reality.

Those who do not move, do not notice their chains.

― Rosa Luxemburg

The core of the problem with scientific computing is its failure to adapt and take advantage of the opportunity defined by this ascendancy of computing. A core of science’s issue with computing is the lost sense that computers are merely a tool. Computers are a tool that may be used to do science. Instead of following this maxim, we simply focus on the older antiquated model of scientific computing firmly grounded in the mainframe era. Our mindset has not evolved with the rest of the World. One of the clear consequences of the mindset is a creeping degree of gluttony and intellectual laziness with high performance computing. All problems reduce to simply creating faster computers and making problems submit to the raw power of virtually limitless computations. We have lost sight of the lack of efficiency of this approach. A renewed focus on issues of modeling, methods and algorithms could be deeply enlivened by the constraints imposed by limited computing resources. Moreover, solving problems more efficiently with smaller computing resources would yield innumerable benefits in the setting of big iron as well. This could be achieved without the very real limitations of having big iron be the sole focus of our efforts.


Scientific computing could be arranged to leverage the technology that is advancing the World today. We could look at a mobile, adaptive platform for modeling, simulation and data analysis that harnessed the best of technology. We could move through the cloud using technology in an adaptive, multiscale manner. One of the biggest challenges is letting go of the power dynamic that drives thinking today. Scientific computing has been addicted to Moore’s law for too long. The current exascale push is symptomatic of this addiction. Like any addiction it is unhealthy and causes the subject to avoid real cures for their problem. We see progress as equivalent to raw power with a single computer. The huge stunt calculation as a vehicle for science is a manifestation of this addiction. Science is done with many calculations along with an adaptive examination of problems or mindful interrogation of results. Power can also be achieved through mobility, ubiquity and flexibility. The big iron we pursue has become tantamount to progress because it’s the only route we can envision. The problem is that technology, and the arc of progress is working against us instead of with us. It is past time to change our vision of what the future can be. The future needs to be different by embracing a different technological path. On one hand, we won’t be swimming against the current of computing technology, but on the other hand we will need to invest in different solutions to make it work.

Flexibility is an art of creating way outs within the cul-de-sacs!
― Mehmet Murat ildan

Mobility is power, and it has made computing ubiquitous. When the broader computing industry embraced the death of Moore’s law, it switched its attention to cell phones. Instead of simply being phones, they became mobile computers and mobile extensions of the Internet. In doing so, the industry unleashed a torrent of creativity and connection. All of a sudden, we saw computers enable the level of social connection that the Internet had always promised, but never delivered. The mobile computing revolution has reshaped the World in a decade. In the process, the mobile market overwhelmed the entire computing industry and created economic dominance on an unparalleled scale. The killer piece of technology was the iPhone. It combined a focus on user interface along with software that enabled everything. We also need to recognize that each phone is more powerful than the fastest computer in the World 25 years ago. We have tremendous power at our fingertips.


One of the really clear messages of the recent era in computing is a change in the nature of value and power. For a long time, power was measured by hardware gains in speed, memory and capability, but now application innovation and flexibility rule. Hardware is largely a fixed and slowly changing commodity and represents a level playing field. The software in the applications and the user interface are far more important. Algorithms that direct information and attention are dominating the success in computing. Providing the basis of connection and adaption to the needs of the users has become the medium for creating new markets. At the same time these algorithms have come under fire for how they manipulate people and data. These mobile computers have become a massive issue for society as a whole. We are creating brand new social problems and side-effects we need to effectively solve. The impact of this revolution in computing on society as a whole has been incredible.

[Images: Steve Jobs, Bill Gates, and John von Neumann at Los Alamos]

A whole cadre of experts is fading from the field of play in computing. In taking the tack of focusing on mainframe computing, scientific computing is sidelining itself. Instead of this enormously talented group of people playing in the area that means the most to society, they are focused on a cul-de-sac grounded in old and outdated models of success. Our society would benefit by engaging these experts in making mobile computing more effective in delivering value in new innovative ways. We could be contributing to solving some of the greatest problems facing us rather than seeing our computing as a special niche serving a relatively small segment of society’s needs. In the past, scientific computing has provided innovative and dynamic solutions that ultimately made their way into general computing. A perfect example is Google. The problem that Google solved is firmly grounded in scientific computing and applied mathematics. It is easy to see how massive the impact of this solution is. Today we in scientific computing are getting further and further from relevance to society. This niche does scientific computing little good because it is swimming against a tide that is more like a tsunami. The result is a horribly expensive and marginally effective effort that will fail needlessly where it has the potential to provide phenomenal value.

You never change things by fighting the existing reality.

To change something, build a new model that makes the existing model obsolete.

― R. Buckminster Fuller

We are long past the time to make a change in scientific computing’s direction and strategy. Almost everywhere else the mainframe era died decades ago. Why is scientific computing tied to this model? Why are scientists resisting conclusions so nakedly obvious? In today’s risk-averse environment, making a change to the underlying model of this branch of science is virtually impossible. Even when the change is dramatically needed and overdue by years, the resistance is strong. The status quo is safe and firmly entrenched. In a time when success can be simply asserted and largely manufactured, this unacceptable state of affairs will persist far longer than it should. Sooner or later someone will take the plunge, and success will follow them. They will have the winds of progress at their backs, easily solving most of the problems that we throw billions of dollars at with meager success.

The measure of intelligence is the ability to change.

― Albert Einstein

 

What’s going wrong and why

Friday, 8 December 2017

Posted by Bill Rider in Uncategorized


If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.

― Albert Einstein

A few years ago, I was challenged to examine the behavior of void in continuum hydrocodes. A senior colleague suggested looking at problems that might allow us to understand how the absence of material would be treated in a code. The simplest version of this problem is the expansion of a gas into a void. With an ideal gas this problem has an exact solution that can be found from the Riemann problem. In the process, we have discovered that these problems are not solved well by existing methods. We approximate the void with a very low density and pressure material, and we have found that as that material approaches an actual void, the solutions seem to become non-convergent, and prone to other significant numerical difficulties. Even when using extremely refined meshes with many thousands of cells in one dimension, convergence is not observed for a broad class of methods. These methods have solved many difficult problems and we believe them to be robust and reliable. The problems persist for all methods tested, including our fail-safe methods (e.g., first-order Godunov).
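
For concreteness, here is a minimal sketch of the setup being described. The gamma-law gas, the unit left state, and the 10^12 jump down to the pseudo-void are assumptions made for illustration; the post does not list its exact inputs.

```python
import numpy as np

# Approximate-void shock tube: the "void" on the right is an ideal gas with
# density and pressure many orders of magnitude below the driver state.
# gamma, the unit left state, and the 1e12 jump are illustrative assumptions.
gamma = 1.4
jump = 1.0e12                                    # diaphragm density/pressure ratio

rho_L, u_L, p_L = 1.0, 0.0, 1.0                  # "real" material on the left
rho_R, u_R, p_R = rho_L / jump, 0.0, p_L / jump  # near-void state on the right

nx = 1000                                        # trouble persists even at many thousands of cells
dx = 1.0 / nx
x = np.linspace(0.0, 1.0, nx, endpoint=False) + 0.5 * dx

rho = np.where(x < 0.5, rho_L, rho_R)
u = np.where(x < 0.5, u_L, u_R)
p = np.where(x < 0.5, p_L, p_R)

# Conserved variables for a gamma-law gas: [rho, rho*u, E]
E = p / (gamma - 1.0) + 0.5 * rho * u**2
U = np.stack([rho, rho * u, E], axis=1)
```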

What is going on?

I’ll just say in passing that this post is a bit of a work-in-progress conversation with myself (or from myself to you). My hope is that it will shake loose my thinking. It is patterned on the observation that sometimes you can solve a problem by carefully explaining it to someone else.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow


One of the difficulties of this problem is the seemingly bad behavior coming from our most reliable and robust methods. When we want a guaranteed good solution to a problem, we unleash a first-order Godunov method on it, and if we use an exact Riemann solver we can expect the solution to be convergent. The results we see with void seemingly violate this principle. We are getting terrible solutions in a seemingly systematic manner. To make matters worse, the first-order Godunov method is the basis, and the fallback position, for the more important second- or third-order methods we practically want to use. We can conclude that this problem is exposing some rather serious problems with our workhorse methods and the potential for wholesale weakness in our capability.
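
For readers who want the fallback scheme spelled out, here is a compact sketch of a first-order Godunov update. To keep it short it uses an HLL approximate flux with Davis wave-speed estimates rather than the exact Riemann solver discussed above, so it illustrates the class of method, not the code behind the plots in this post.

```python
import numpy as np

def primitives(U, gamma=1.4):
    """Recover rho, u, p, c from conserved variables [rho, rho*u, E]."""
    rho, mom, E = U[..., 0], U[..., 1], U[..., 2]
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    c = np.sqrt(gamma * p / rho)
    return rho, u, p, c

def euler_flux(U, gamma=1.4):
    """Physical flux of the 1D Euler equations."""
    rho, u, p, _ = primitives(U, gamma)
    return np.stack([rho * u, rho * u**2 + p, u * (U[..., 2] + p)], axis=-1)

def hll_flux(UL, UR, gamma=1.4):
    """HLL approximate Riemann flux between left and right states."""
    _, uL, _, cL = primitives(UL, gamma)
    _, uR, _, cR = primitives(UR, gamma)
    SL = np.minimum(uL - cL, uR - cR)              # Davis wave-speed estimates
    SR = np.maximum(uL + cL, uR + cR)
    FL, FR = euler_flux(UL, gamma), euler_flux(UR, gamma)
    Fhll = (SR[..., None] * FL - SL[..., None] * FR
            + (SL * SR)[..., None] * (UR - UL)) / (SR - SL)[..., None]
    return np.where(SL[..., None] >= 0.0, FL,
                    np.where(SR[..., None] <= 0.0, FR, Fhll))

def godunov_step(U, dx, cfl=0.5, gamma=1.4):
    """One first-order Godunov update with outflow (zero-gradient) boundaries."""
    rho, u, p, c = primitives(U, gamma)
    dt = cfl * dx / np.max(np.abs(u) + c)          # CFL-limited time step
    Ug = np.vstack([U[:1], U, U[-1:]])             # ghost cells
    F = hll_flux(Ug[:-1], Ug[1:], gamma)           # fluxes at the nx+1 interfaces
    return U - (dt / dx) * (F[1:] - F[:-1]), dt

# Example driver (uses U, nx, dx, gamma from the setup sketch above):
#   t, t_end = 0.0, 0.2
#   while t < t_end:
#       U, dt = godunov_step(U, dx, gamma=gamma)
#       t += dt
```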

There are no facts, only interpretations.

― Friedrich Nietzsche


First-order Godunov with 1000 cells. Plotting the maximum velocity over time shows the convergence for 100:1 and 1000:1 jumps. The velocity peaks and decays to the correct solution.

Let’s look at what happens for the approximate-void problem. We approximate the void with a gas whose density and pressure are twelve orders of magnitude smaller than the “real” material’s. This problem has a solution that almost gives the expansion-into-vacuum solution to the Euler equations (where the tail of the rarefaction and the contact discontinuity collapse into a single structure that separates material from nothing). The problem is dominated by an enormous rarefaction that takes the density down by many orders of magnitude. What we see is a solution that appears to get worse and worse under mesh refinement. In other words, it diverges under mesh refinement. Actually, the behavior we see is a bit more complex than this. At very low resolutions, the solution lags behind the exact solution, and as we refine the mesh, the solution catches up to and then passes the exact solution. Then as we add more and more mesh, the solution just gets worse and worse. This is not supposed to happen. This is a very bad thing that needs focused attention.
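
For reference, the exact solution these computations are chasing is the standard self-similar left rarefaction fan for a gamma-law gas, written here for a left state at rest. In the true vacuum limit the front runs at the escape speed 2c_L/(γ-1), which is the ceiling the overshooting numerical velocity should eventually settle back to.

```latex
% Self-similar left rarefaction fan, left state at rest, xi = x/t;
% the fan spans -c_L <= xi <= 2 c_L/(gamma - 1), the vacuum escape speed.
u(\xi) = \frac{2}{\gamma+1}\,(c_L + \xi), \qquad
\rho(\xi) = \rho_L\left[\frac{2}{\gamma+1} - \frac{(\gamma-1)\,\xi}{(\gamma+1)\,c_L}\right]^{2/(\gamma-1)}, \qquad
p(\xi) = p_L\left(\frac{\rho(\xi)}{\rho_L}\right)^{\gamma}, \qquad
u_{\max} = \frac{2\,c_L}{\gamma-1}.
```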


Comparing first order, PLM and PPM solutions for the 1000:1 jump. The high order methods converge much faster than the first-order method.

So maybe backing away from the extreme problem is worth doing. I ran a sequence of shock tube problems varying the jump in pressure and density starting at 10:1 and slowly going up to the extreme jump that approximates an expansion into void. The shock tube is a self-similar problem, meaning that we can swap time and space through a similarity transformation. Thus, the very early time evolution on a very fine grid is essentially the same as a late time solution on a very coarse grid. What I noticed is the same pattern over and over. More importantly, the problem gets worse and worse as the jumps get larger and larger. By examining the trend as the jumps become very large, we start to see the nature of our problem. As the jump becomes larger and larger, the solution converges more and more slowly. We can start to estimate the mesh resolution needed for a good result and we can see that the situation becomes almost hopeless in the limit. I believe the solution will eventually converge given enough mesh, but the size of the mesh needed to get a convergent solution becomes completely absurd.
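
The sweep just described can be scripted along the following lines, reusing the godunov_step sketch from above. The peak cell velocity measured against the vacuum escape speed is my guess at a convenient scalar diagnostic, not necessarily the measure used for the plots, and the final time and mesh size are assumptions.

```python
import numpy as np

# Sweep the diaphragm jump from 10:1 up toward the near-void limit and report
# the computed peak velocity against the exact vacuum escape speed 2 c_L/(gamma-1).
def run_case(jump, nx, t_end=0.2, gamma=1.4):
    dx = 1.0 / nx
    x = np.linspace(0.0, 1.0, nx, endpoint=False) + 0.5 * dx
    rho = np.where(x < 0.5, 1.0, 1.0 / jump)
    p = np.where(x < 0.5, 1.0, 1.0 / jump)
    U = np.stack([rho, np.zeros(nx), p / (gamma - 1.0)], axis=1)  # u = 0 initially
    t = 0.0
    while t < t_end:
        U, dt = godunov_step(U, dx, gamma=gamma)   # from the earlier sketch
        t += dt
    return np.max(U[:, 1] / U[:, 0])               # peak cell velocity

gamma = 1.4
u_escape = 2.0 * np.sqrt(gamma) / (gamma - 1.0)    # vacuum limit for c_L = sqrt(gamma)
for jump in [1e1, 1e2, 1e3, 1e6, 1e9, 1e12]:
    u_peak = run_case(jump, nx=1000)
    print(f"jump {jump:8.0e}: peak u = {u_peak:6.3f} (vacuum limit {u_escape:.3f})")
```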


For large jumps of a million to a trillion, convergence is lost at 1000 cells. The solution hasn’t even reached the peak value from which it decays toward the correct solution.

In summary, the problem with a factor-of-a-million jump converges on a modestly unreasonable mesh. As the jump grows in size, the convergence requires a mesh that is prohibitive for any practical work. If we are going to accurately solve this class of problems, some other approach is needed. To make things worse, even when the problem converges, the rate of convergence under mesh refinement is painfully slow, and incredibly expensive as a result.
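
The “painfully slow” convergence can be made quantitative with the usual observed-order calculation between successive meshes; the error values below are placeholders standing in for measured norms against the exact solution, not numbers from the runs above.

```python
import math

# Observed order of accuracy from errors on meshes refined by a constant ratio r:
#   p = log(e_coarse / e_fine) / log(r)
# The error values are illustrative placeholders for ||u_h - u_exact||.
r = 2.0
errors = {1000: 4.0e-2, 2000: 2.6e-2, 4000: 1.8e-2}
meshes = sorted(errors)
for coarse, fine in zip(meshes[:-1], meshes[1:]):
    p = math.log(errors[coarse] / errors[fine]) / math.log(r)
    print(f"{coarse:5d} -> {fine:5d} cells: observed order p = {p:.2f}")
```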

Everywhere is walking distance if you have the time.

― Steven Wright

The second issue we see is a persistent velocity glitch at the head of the rarefaction. It is fair to say that the glitch has heretofore been viewed as a cosmetic problem. This velocity peak looks like a meaningfully wrong solution to the equations locally. It produces a shock-like solution in the sense that it violates Lax’s entropy condition: the characteristics locally converge in a shock-like manner inside a rarefaction, where the characteristics should diverge locally. We might expect that this problem would hurt the physically meaningful solution. Not altogether surprisingly, the solution can also violate the second law of thermodynamics when using higher than first-order methods. Moreover, this character simply gets worse and worse as the problem gets closer to a void. A reasonable supposition is that this feature in the numerical solution is a symptom of difficulties in rarefactions. Usually this feature can be categorized as a nuisance and a relatively small contributor to error, but it may be a sign of something deeper. Perhaps this nuisance becomes a significant issue as the rarefaction becomes stronger, and ultimately dominates the numerical character of the solution. We might be well served by removing it from the solution. One observation we might add to the treatment of the glitch is its diminishing size as the mesh is refined. Having this anomalous shock-like character allows dissipation to damp the spike and improve the solution. The counterpoint to this solution is not creating the glitch in the first place.
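
For reference, the admissibility condition the glitch runs afoul of is the classical Lax criterion: characteristics of the k-th family must run into a physical shock, while in a rarefaction they spread apart.

```latex
% Lax entropy condition for a k-shock of speed s, contrasted with a k-rarefaction:
\lambda_k(u_L) > s > \lambda_k(u_R) \quad\text{(admissible shock)}, \qquad
\lambda_k(u_L) < \lambda_k(u_R) \quad\text{(rarefaction: characteristics diverge)}.
```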


For the jump of 100 million we get convergence with 2000 and 4000 cells. This also shows that the curves are quite close to self-similar. In addition, the slow convergence is evident in the behavior.

At this point it’s useful to back away from the immediate problem to a broader philosophical point. Shock-capturing methods are naturally focused on computing shocks. Shock waves were a big challenge for numerical methods. They remain a large challenge, and failure to treat them effectively can be fatal for a calculation. If a shock wave is not treated with care, the numerical method can fail catastrophically, or the solution can be significantly damaged. Even when the results are not catastrophic, poor treatment of a shock can result in significant corruption of the solution that often spreads from the shock to other areas in the solution. For this reason, the shock wave and its numerical treatment have been an enduring focus of numerical methods for compressible flows. Conversely, rarefactions have largely been an afterthought. Rarefactions are benign smooth structures that do not directly threaten a calculation. A few bad things can happen in rarefactions, but they are rarely fatal to the calculation. A few have been so cosmetically problematic that major effort has ensued (the rarefaction shock). Problems in rarefactions are generally just a nuisance, and only become a focal point when the details of the solution are examined. One aspect of the details is the convergence character of the solution. Shock tube problems are rarely subjected to a full convergence analysis. The problem we focus on here is dominated by a rarefaction, thus magnifying any problems immensely. What we can conclude is that strong rarefactions are not computed with high fidelity.

The trick to forgetting the big picture is to look at everything close up.

― Chuck Palahniuk

One of the key ways of dealing with shock waves is upwind methods. A clear manner of treating these waves and getting an upwind solution is the use of a discontinuous basis to define the spatial discretization. This discontinuous basis is also used with high-order methods, and the first-order solution becomes the fallback position for the methods. This approach is very well suited to computing shocks; a discontinuous approximation for a discontinuous phenomenon. By the same token, a discontinuous basis is not well suited for a continuous phenomenon like a rarefaction. One hypothesis to explore is different types of approximations for the problem where the rarefaction dominates the solution. We may find that we can solve this class of problem far more efficiently with a continuous basis, getting asymptotically convergent solutions far sooner. What we observe is an ever slower approach to convergent behavior in the code. For this class of problems we see a consistent pattern: the solution starts out being under-resolved and the velocity rises, then overshoots the correct analytical result, then slowly decays toward the correct solution. As the rarefaction becomes stronger and stronger, the mesh resolution needed to capture the full rise and reach the peak overshoot value becomes finer and finer. Ultimately, the mesh required to get a solution that converges becomes absurdly refined.

If this proposition is indeed correct, it implies that we need to define a hybrid approach where the basis is adaptively chosen. At discontinuous structures, we want to choose discontinuous approaches, and at continuous structures we want continuous approaches. This is almost obvious, but carrying it out in practice is difficult. Clearly the current adaptive approaches are not working well enough, as evidenced by the painful and absurd degree of mesh needed to get a reasonable solution. It would seem that the answer to this problem lies in developing a new method capable of solving extreme rarefactions on reasonable meshes. We need methods that can solve strong, but continuous, waves with higher fidelity. In all reality, these methods might still need to compute shocks, albeit less effectively than methods using a discontinuous basis. The bottom line from attacking a challenging problem like this is the demonstration that our methods today are not sufficient to all our challenges.
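
One very crude way to phrase the hybrid idea is an interface-by-interface switch driven by a discontinuity sensor, sketched below. The indicator, the threshold, and the test profile are all arbitrary choices made for illustration; this is not a worked-out method, and certainly not the answer to the rarefaction problem above.

```python
import numpy as np

# Toy discontinuity sensor: flag interfaces where the relative jump in a field
# is large compared to the local magnitude. A hybrid scheme could apply a
# discontinuous (Riemann-solver) flux at flagged interfaces and a continuous
# reconstruction elsewhere. The 10% threshold is an arbitrary illustrative choice.
def flag_discontinuous(q, threshold=0.1):
    jump = np.abs(np.diff(q))
    scale = 0.5 * (np.abs(q[1:]) + np.abs(q[:-1])) + 1.0e-300
    return jump / scale > threshold            # True at "shock-like" interfaces

# Smooth ramp followed by a genuine jump: only the jump gets flagged.
x = np.linspace(0.0, 1.0, 200)
rho = np.where(x < 0.5, 1.0 - 0.5 * x, 0.125)
print(np.where(flag_discontinuous(rho))[0])    # a single flagged interface
```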

Creativity consists of coming up with many ideas, not just that one great idea.

― Charles Thompson

Is the code part of the model?

Friday, 1 December 2017

Posted by Bill Rider in Uncategorized


Yes.

Of course, it’s not really that simple, but yes, the code is part of the model. If you claim it isn’t, you have to meet a substantial burden of proof.

We have no idea about the ‘real’ nature of things … The function of modeling is to arrive at descriptions which are useful.

– Richard Bandler and John Grinder

Ideally, it should not be, but proving that ideal is a very high bar that is almost never met. A great deal of compelling evidence is needed to support an assertion that the code is not part of the model. The real difficulty is that the more complex the modeling problem is, the more the code is definitely and irreducibly part of the model. These complex models are the most important uses of modeling and simulation. The complex models of engineered things, or important physical systems, have many submodels, each essential to successful modeling. The code is often designed quite specifically to model a class of problems. The code then becomes a clear part of the definition of the problem. Even in the simplest cases, the code includes the recipe for the numerical solution of a model. This numerical solution leaves its fingerprints all over the solution of the model. The numerical solution is imperfect and contains errors that influence the solution. For a code, there is the mesh and geometric description plus boundary conditions, not to mention the various modeling options employed. Removing the specific details of the implementation of the model in the code from consideration as part of the model becomes increasingly intractable.

The word model is used as a noun, adjective, and verb, and in each instance it has a slightly different connotation. As a noun “model” is a representation in the sense in which an architect constructs a small-scale model of a building or a physicist a large-scale model of an atom. As an adjective “model” implies a degree of perfection or idealization, as in reference to a model home, a model student, or a model husband. As a verb “to model” means to demonstrate, to reveal, to show what a thing is like.

– Russell L. Ackoff

The word model itself is deeply problematic. Model is one of those words that can mean many different things whether it’s used as a noun or a verb (I’ll note in passing that, much like the curse word “fuck,” it is so flexible as to be wonderful and confusing all at once). Its application in a scientific and engineering context is common and pervasive. As such, we need to inject some precision into how it is being used. For this reason, some discourage the use of “model” in discussion. On the other hand, models and modeling are so central to the conduct of science and engineering that they should be dealt with head on. They aren’t going away. We model our reality when we want to make sure we understand it. We engage in modeling when we have something in the Real World we want to demonstrate an understanding of. Sometimes this is for the purpose of understanding, but ultimately that gives way to manipulation, the essence of engineering. The Real World is complex, and effective models are usually immune to analytical solution.

Essentially, all models are wrong, but some are useful.

– George E. P. Box, Norman R. Draper

You view the world from within a model.

― Nassim Nicholas Taleb

Computational science comes to the rescue and opens the door to solving these complex models via numerical approximations. It is a marvelous advance, but it brings new challenges because the solutions are imperfect. This adds a new layer of imperfection to modeling. We should already recognize that models are generically approximate versions of reality (i.e., wrong), and necessarily imperfect mathematical representations of the Real World. Solving this imperfect model imperfectly, via an approximate method, makes the modeling issue even more fraught. Invariably, for any model with complexity, the numerical solution of the model and its detailed description are implemented in computer code, or “a computer code.” The details and correctness of the implementation become inseparable from the model itself. It becomes quite difficult to extract the model as any sort of pure mathematical construct; the code is intimately part of it.

Evidence of the model’s nature and correctness is produced in the basic conduct of verification and validation with uncertainty quantification. Doing a full accounting of the credibility of modeling, including the pedigree of the model, will not help exclude the code from the model; it simply defines the extent of this connection. Properly speaking, the code is always part of the model, but the extent or magnitude of its impact can be small, or even considered minor or negligible. This evidence is contained within the full assessment of the predictive quality of the simulation, including a quantitative assessment. Among these activities, verification is the most important for the question at hand. Do we have evidence that the desired mathematical model is correctly solved? Do we have evidence that the numerical errors in the solution are small? Can all the aspects of the model be well described by clearly articulated mathematics?

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory.

― Stephen Hawking

A model is not the operating system for the universe. Reality is not determined by these mathematical abstractions; the mathematics is designed to describe what we observe. As such, models are always flawed and imperfect representations to some level. Determining the flaws and the quantitative level of imperfection is difficult work requiring detailed verification and validation. A model is an abstraction and representation of the processes we believe produce observable physical effects. We theorize that the model explains how these effects are produced. Some models are not remotely this high-minded; they are nothing but crude empirical engines for reproducing what we observe. Unfortunately, as phenomena become more complex, these crude models become increasingly essential to modeling. They may not play a central role, but they still provide physical effects necessary for utility. The submodels needed to produce realistic simulations become ever more prone to include these crude empirical engines as problems enter the engineering realm. As the reality of interest becomes more complicated, the modeling becomes elaborate and complex, a deep chain of efforts to grapple with these details.

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

― Arthur Conan Doyle

Validation of a model occurs when we take the results of solving the model and compare them directly with observations from the Real World. A key aspect of the validation exercise is characterizing the uncertainty in both the observations and the model. When all this assessment is in hand, we can render a judgment of whether the model represents the observed reality well enough for the purposes we intend. This use is defined by a question we want to answer with the modeling. The answer needs to have a certain fidelity and certainty, which provides the notion of precision to the exercise. The certainty of the observations defines the degree of agreement that can be demanded. The model’s uncertainties define the model’s precision, but they include the impact of numerical approximation. The numerical uncertainty needs to be accounted for to isolate the model; this uncertainty defines the level of approximation in the solution to the model, and the deviation from the mathematical idealization the model represents. In actual practice, we see a stunning lack of this essential step in the validation work that gets presented. Another big part of validation is recognizing the subtle differences between calibrated results and predictive simulation. Again, calibration is rarely elaborated in validation to the degree that it should be.

We should always expect the model to deviate from observations to some degree. If we are capable of producing more accurate observations of reality, we can more accurately determine how wrong the model is. In a sense, we can view this as a competitive race. If our model is quite precise, we are challenged to observe nature well enough to expose the model’s innate flaws. Conversely, if we can observe nature with extreme precision, we can define the model’s imperfections clearly. Progress can be made by using this tension to push one or the other. The modeling uncertainty is compounded by the approximate numerical solution implemented in a computer code (including the correctness of the code). Verification and validation activities are a systematic way to collect evidence so that the comparison can be made in a complete and compelling manner.
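A minimal sketch of the bookkeeping this implies, using invented numbers and a simple quadrature combination of the uncertainty sources. Real validation metrics are more careful than this, but even the toy version makes the point that the numerical contribution has to appear explicitly in the comparison.

import math

# Hypothetical numbers for a single quantity of interest (illustrative only).
prediction   = 104.0    # model/simulation result
observation  = 100.0    # experimental measurement
u_numerical  = 1.5      # estimated numerical error (e.g., from mesh refinement)
u_input      = 2.0      # propagated input/parameter uncertainty
u_experiment = 2.5      # measurement uncertainty

# A common simplified choice: combine the uncertainty sources in quadrature.
u_val = math.sqrt(u_numerical**2 + u_input**2 + u_experiment**2)
error = prediction - observation

print(f"comparison error E = {error:+.1f}, validation uncertainty u_val = {u_val:.1f}")
if abs(error) <= u_val:
    print("the disagreement is within the stated uncertainties")
else:
    print("a model-form error is indicated beyond the stated uncertainties")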

[Figures: Noh problem density error; comparison of Noh problem results on a polar grid for tensor and edge viscosity; Sedov problem density.]

Computer codes serve two very important roles in modeling: the code contains the model, including the geometry, boundary conditions, and a host of ancillary models for complex situations, and the code solves the model numerically. Both of these roles are essential in the conduct of modeling, but the numerical solution is far more subtle and complex. Many people using codes for modeling do not have a background sufficient to understand the subtleties of numerical methods and their impact on solutions. Moreover, the fiction that numerical methods and codes are so reliable that detailed understanding is not essential persists and grows. Our high performance computing programs work to fuel this fiction. The most obvious aspect of the numerical solution is the meshing and the time integration, with the error proportional to these details. Evidence of correctness and of the error characteristics is produced through verification. In addition, most advanced codes solve linear and nonlinear equations iteratively. Iterative solutions are computed to a finite tolerance, which can impact the results. This is particularly true for nonlinear equation solvers, where the error tolerance that can be achieved by some popular solvers is extremely loose. This looseness can produce significant physical effects in solutions, as the sketch below illustrates. Most verification work does not examine these aspects closely, although it should. Again, the code, its capabilities, and its methods are extremely important, if not essential, to the model produced. In many cases fantastic modeling work is polluted by naïve numerical methods; a wonderful model can be wiped out by a terrible code.
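A toy illustration of the tolerance issue: a hypothetical backward Euler integration of a simple decay equation where the Newton iteration is stopped at a loose versus a tight residual tolerance. Nothing here comes from a production code, but the qualitative lesson carries over.

def backward_euler(u0, dt, nsteps, tol):
    """Integrate du/dt = -u**3 with backward Euler.  Each implicit step is
    solved by Newton iteration that stops when the residual drops below tol."""
    u = u0
    for _ in range(nsteps):
        v = u                                   # initial guess for the new value
        for _ in range(50):
            r = v - u + dt * v**3               # residual of the implicit equation
            if abs(r) < tol:
                break
            v -= r / (1.0 + 3.0 * dt * v**2)    # Newton update
        u = v
    return u

# With the loose tolerance, the very first residual check can pass once u is
# small enough, so the decay stalls; the tight tolerance keeps decaying.
u_loose = backward_euler(1.0, 0.1, 200, tol=1e-2)
u_tight = backward_euler(1.0, 0.1, 200, tol=1e-12)
print(f"loose tolerance: u(T) = {u_loose:.4f}    tight tolerance: u(T) = {u_tight:.4f}")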

You’ve baked a really lovely cake, but then you’ve used dog shit for frosting.

― Steve Jobs

So, when can we exclude the code? The big thing to focus on in this question is verification evidence. Code verification is necessary to be confident that the intended mathematical model is provably present in the code. It asks whether the mathematical abstraction the model is based on is correctly solved by the code. Code verification can be completely satisfactory and successful, and the code can still be important: code verification does not say that the numerical error is small, it says that the numerical error is ordered and that the model equations we want solved are indeed the ones being solved. The second half of verification, solution (calculation) verification, determines the errors in solving the model. The question is how large (or small) the numerical errors in the solution of the model are. Ultimately, these errors are a strong function of the discretization and solvers used in the code. The question of whether the code matters comes down to asking whether another code used skillfully would produce a significantly different result. This is rarely, if ever, the case. To make matters worse, verification evidence tends to be flimsy and half-assed. Even if we could make this call and ignore the code, we rarely have the evidence to show it is a valid and defensible decision.
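As a sketch of what verification evidence looks like in its simplest form, consider a toy first-order upwind scheme for linear advection with a known exact solution; the scheme and parameters are chosen purely for illustration, not taken from any particular code. Measuring error norms on a refinement sequence and checking that the observed order matches the formal order is exactly the kind of evidence that is so often missing.

import numpy as np

def advect_upwind(n, cfl=0.5, a=1.0, t_final=0.5):
    """First-order upwind scheme for u_t + a u_x = 0 on [0,1] with periodic
    boundaries; returns the L1 error against the exact translated solution."""
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.sin(2.0 * np.pi * x)               # smooth data with a known exact solution
    t, dt = 0.0, cfl * dx / a
    while t < t_final - 1e-12:
        step = min(dt, t_final - t)
        u = u - a * step / dx * (u - np.roll(u, 1))   # upwind difference (a > 0)
        t += step
    exact = np.sin(2.0 * np.pi * (x - a * t_final))
    return dx * np.sum(np.abs(u - exact))

errors = [advect_upwind(n) for n in (50, 100, 200, 400)]
orders = [np.log(errors[i] / errors[i + 1]) / np.log(2.0) for i in range(len(errors) - 1)]
print("L1 errors       :", ["%.2e" % e for e in errors])
print("observed orders :", ["%.2f" % p for p in orders])   # should approach 1 for this scheme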

Truth can only be found in one place: the code.

― Robert C. Martin

In closing, the code IS part of the model unless evidence can be found otherwise. This can happen more easily where the model is simple. In general, the exclusion of the code is an ideal that cannot be reached. As models become complex, detaching the model from the code becomes nearly intractable, and indefensible. Evidence will almost invariably point to the code being an important contributor to the model’s picture of reality.

For the scientist a model is also a way in which the human thought processes can be amplified. This method often takes the form of models that can be programmed into computers. At no point, however, does the scientist intend to lose control of the situation because the computer does some of his thinking for him. The scientist controls the basic assumptions and the computer only derives some of the more complicated implications.

– C. West Churchman

 

 
