
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Category Archives: Uncategorized

If I had a time machine…

21 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

First an admission: I thought it was Friday yesterday and that it was time to post.  That pretty much sums up my week, so as penance I'm giving you a bonus post.  I also had a conversation yesterday that really struck a chord with me, and it relates to yesterday's topic, legacy codes.  In a major project area I would like to step into a time machine and see where current decision-making will take us.  The conversation offered me that very opportunity.

A brief aside to introduce the topic: I'm a nuclear engineer by education, and worked as a reactor safety engineer for three years at Los Alamos.  Lately I've returned to nuclear engineering as part of a large DOE project.  After having gone into great depth with modern computational science, the return to nuclear engineering has been bracing.  It is like stepping into the past.  I am trying to bring modern concepts of computational science quality to bear on the analysis of nuclear reactors.  To say it is an ill fit is a dramatic understatement; it is a major culture clash.  The nuclear engineering community's idea of quality is so antiquated that almost none of my previous 20 years' experience is helpful; it is a source of immense frustration.  I have to constantly hold back my disgust at what I see.

A big part of the problem is the code base that nuclear engineering uses.  It is legacy code.  The standard methodology is almost always based on the way things were done in the 1970’s, when the codes were written.  You get to see lots of Fortran, lots of really crude approximations, and lots of code coupling via passing information through the file system.  Nuclear reactor analysis is almost always done with a code and model that is highly calibrated.  It is so calibrated that there isn’t any data left over to validate the model.  We have no idea whether the codes are predictive (it is almost assured that they are not).

It is a giant steaming pile of crap.  The best part is that this steaming pile of crap is the mandated way of doing the analysis.  The absurd calibrated standards are written into regulations that the industry must follow.  It creates a system where nothing will ever get any better, and rather than follow the best scientific approach to doing this analysis, we do things in the slipshod way they were done in the 1970's.  I am mindful that we didn't know any better back then and had limitations in the methodology we could apply.  After all, a wristwatch today can beat the biggest supercomputer in the world from the early to mid-1970's.  This is a weak excuse for continuing to do things today like we did then, but we do.  We have to.

We still use codes today whose active development ended in that era.  Some of the codes have been revamped with modern languages and interfaces, but the legacy intellectual core remains stuck in the mid-1970's (40 years ago!).  The government simply stopped funding the development of new methods and began to mandate the perpetuation of the legacy methodology.  The money for new development dried up and has been replaced by maintenance of a legacy capability and legacy analysis methodology that is unworthy of the task it is set to in the modern world.

Here is the punch line in my mind.  We are setting ourselves on a course to do the same with the analysis of nuclear weapons.  I think looking at computational analysis of nuclear reactors gives us a "time machine" that shows the path nuclear weapons analysis is on.  We have stopped developing anything new, and started to define a set of legacy capabilities that must be perpetuated.  Some want to simply work on porting the existing codes to the next generation of computers without adding anything to the intellectual basis.  Will this create an environment just like reactor safety analysis in 15 years?  Will we lock in the way we do things now in perpetuity?  I worry that this is where our leaders are taking us.

I believe that three major factors are at play.  One is a deep cultural milieu that is strangling scientific innovation, reducing both aggregate funding and the emphasis on and capacity for innovation.  The United States simply lacks faith that science can improve our lives and acts accordingly.  The other two factors are more psychological.  The second is the belief that we have a massive sunk cost in software and that it must be preserved.  This is the sunk-cost fallacy, the same one that makes people lose all their money in Las Vegas.  It is stupid, but people buy it.  Software can't be preserved; it begins to decay the moment it is written.  More tellingly, the intellectual basis of software must either grow or it begins to die.  We are creating experts in preserving past knowledge, which is very different from creating new knowledge.

Lastly, when the codes began to become useful for analysis, an anchoring bias was formed.  A lot of what nuclear engineers analyze can't be seen.  As such, a computer code becomes the picture many of us have of the phenomena.  Think about radiation transport and what it "looks" like.  We can't see it visually.  Our path to seeing it is computational simulation.  When we do "see" it, it forms a powerful mental image.  I can attest to learning about radiation transport for years and to the power of simulation to put this concept into vivid images.  This image becomes an anchoring bias that is difficult to escape.  It includes both the simulation's picture of reality and the simulation's errors and model deficiencies.  The bias means that an unambiguously better simulation will be rejected because it doesn't "look" right.  It is why legacy codes are so hard to displace.  For reactor safety the anchoring bias has been written into regulatory law.

This resonates with my assessment of how the United States is managing to destroy its National Laboratory system through systematic mismanagement of the scientific enterprise.  In fact, the two are self-consistent.  The deeper question is why our leaders make decisions like this.  These are two cases where a collective decision has been made to mothball a technology, an important and controversial technology, as everything nuclear is.  Instead of applying the best of modern science to the mothballed technology, we mothball everything about it.

It would seem that the United States will only invest in the analysis of significant technological programs while the technology is being actively developed.  In other words, the computational tools are built only when the thing being analyzed is being built too.  We do an awful job of stewardship using computation.  This is true in spades with nuclear reactors, and I fear may be true with the awkwardly named "stockpile stewardship program".  It turns out that the entirety of the stewardship is grounded on ever-faster computers rather than a holistic, balanced approach.  We aren't making new nuclear weapons, and increasingly we aren't applying new science to their stewardship.  We aren't actually doing our best at this important job.  Instead we are holding fast to a poorly constructed, politically expedient plan laid out 20 years ago.

On the other hand maybe it’s just the United States ceding scientific leadership in yet another field.  We’ll just let the Europeans and Chinese have computational science too.

Legacy Code is Terrible in More Ways than Advertised

20 Thursday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

From the moment I started graduate school I dealt with legacy code.  I started off by extending the development of a modeling code written by my advisor's previous student.  My hatred of legacy code had begun.  The existing code was poorly written, poorly commented and obtuse.  I probably responded by adding more of the same.  The code was crap, so why should I write nice code on top of that basis?  Bad code can help encourage more bad code.  The only positive was contributing to the ultimate death of that code so that no other poor soul would be tortured by developing on top of my work.

Moving to a real professional job only hardened my views; a National Lab is teeming with legacy code.  I soon encountered code that made the legacy code of grad school look polished.  These codes were better documented, but written even more poorly.  I encountered lots of dusty-deck FORTRAN IV with memory management techniques built for computing on CDC supercomputers.  Programming devices created at the dawn of computer programming languages were common.  I encountered spaghetti code that would make your head spin.  If I tried to flowchart the method it would look like a Möbius strip.  What dreck!

On the positive side, all this legacy code powered my desire to write better code.  I started to learn about software development and good practices.  I didn't want to leave people cleaning up my messes and cursing the code I wrote.  They probably did anyway.  The code I wrote was good enough to be reused for purposes I never intended, and as far as I know it is still in use today.  Nonetheless, legacy code is terrible, and expensive, but necessary.  Replacing legacy code is terrible, expensive and necessary too.  Software is just this way.  There is a deeper problem with legacy code: legacy ideas.  Software is a way of actualizing ideas, of turning algorithms into action.  It is the way that computers can do useful things.  The problem is that the deeper ideas behind the algorithms often get lost in the process.

Writing code is a form of concrete problem solving.  Writing a code for general production use is a particularly difficult brand of problem solving because of the human element involved.  The code isn't just for you to use, but for others.  Code should be written for humans, not the computer.  You have to provide users with a tool they can wield.  If they wield it successfully, the code begins to take on a character of its own.  If the problem is hard enough and the code is useful enough, the code becomes legendary.

A legendary code then becomes a legacy code that must be maintained.  Often the magic that makes it useful is shrouded in the mystery of the problem-solving techniques used.  It becomes a legacy code when the architect who made it useful moves on.  At this point the quality of the code and the clarity of the key ideas become paramount.  If the ideas are not clear, they become fixed, because subsequent stewards of the capability cannot change them without breaking them.  Too often the "wizards" who developed the code were too busy solving their users' problems to document what they were doing.

These codes are a real problem for scientific computing.  They also form the basis of collective achievement and knowledge in many cases, the storehouse of powerful results.  They often become entrenched because they solve important problems for important users, and their legendary capability takes on the air of magic.  I've seen this over and over, and it is a pox on computational science.  It is one of the major reasons that ancient methods for computing solutions continue to be used long after they should have been retired.  For a lot of physics, particularly problems involving transport (first-order hyperbolic partial differential equations), the numerical method has a large impact on the physical model.

More properly, the numerical method is part of the model itself, thus the numerical solution and physical modeling are not separable.  Part of the reason is the need to add some sort of stabilization mechanism to the solution (some form of numerical or artificial viscosity).  If the numerical model changes, the related models need to change too.  Any calibrations need to be redone (and there are always calibrations!).  If the existing code is useful there is huge resistance to change because any new method is likely to be worse on the problems that count. Again, I’ve seen this repeatedly over the past 25 years.   The end result is that old legacy codes simply keep going long after their appropriate shelf life. 
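As a concrete example of a stabilization mechanism that becomes part of the model, consider the classic von Neumann-Richtmyer artificial viscosity (sketched here in its standard textbook form, not as it appears in any particular legacy code).  It adds a pressure-like term only where the flow is compressing:

```latex
% von Neumann-Richtmyer artificial viscosity (standard textbook form):
% a pressure-like term active only in compression
q =
\begin{cases}
  c_Q\, \rho\, (\Delta x)^2 \left( \dfrac{\partial u}{\partial x} \right)^{2},
    & \dfrac{\partial u}{\partial x} < 0 \quad \text{(compression)} \\[1ex]
  0, & \text{otherwise.}
\end{cases}
```

The term $q$ is added to the pressure in the momentum and energy equations, and the dimensionless coefficient $c_Q$ is exactly the kind of knob that gets calibrated.  Change the numerical method and the effective $q$ changes, so the calibrations must be redone; this is the inseparability I am describing.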

Worse yet, a new code can be developed that puts the old method into a new code base.  The good thing is that the legacy "code" goes away, but the legacy method remains.  It is sort of like getting a new body and simply moving the soul out of the old body into the new one.  If the method being transferred is well understood and documented, this process has some positives (i.e., fresh code).  It also represents the loss of an opportunity to refresh the method along with the code.  In the time since the legacy code was started, the numerical solver technology has likely improved.  Not improving the solver is a lost opportunity to improve the code.

By the "soul" of the code, I mean the approximations made to the laws of physics.  These codes are differential equation solvers, and the quality of the approximation is one of the most important characteristics of the code.  The nature of the approximations, and the errors made therein, often defines the code's success.  It really is the soul or personality of the code.  Changes to this part of a successful legacy code are almost impossible.  The more useful or successful the code is, the harder such changes are to execute.  I might argue these are exactly the conditions where such changes are most important to achieve.

Some algorithms are more of a utility.  An example is numerical linear algebra.  Many improvements have taken place in the efficiency with which we can solve linear algebra problems on a computer.  These are important utilities that massively impact efficiency, but not the solution itself.  We can make the solution much faster without any effect on the nature of the approximations we make to the laws of physics.  Good software abstracts the interface to these methods so that improvements can be had independent of the core code.  There are fewer impediments to this sort of development because the answer doesn't change.  If the solution has been highly calibrated and/or is highly trusted, getting it faster is naturally accepted.  Too often changes (i.e., improvements) in the solution are not accepted so naturally.
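A minimal sketch of that kind of abstraction, in Python with hypothetical names (no particular production code is implied): the core code calls a single solve interface, so a direct solver can be swapped for a faster or newer iterative one without touching the physics, because both return the same answer.

```python
class GaussianElimination:
    """Direct solve of A x = b by Gaussian elimination with partial pivoting."""
    def solve(self, A, b):
        n = len(b)
        A = [row[:] for row in A]  # work on copies; caller's data untouched
        b = b[:]
        for k in range(n):
            # partial pivot: largest remaining entry in column k
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            for i in range(k + 1, n):
                m = A[i][k] / A[k][k]
                for j in range(k, n):
                    A[i][j] -= m * A[k][j]
                b[i] -= m * b[k]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):  # back substitution
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

class JacobiSolver:
    """Iterative Jacobi solve; converges for diagonally dominant A."""
    def __init__(self, tol=1e-12, max_iter=10_000):
        self.tol, self.max_iter = tol, max_iter
    def solve(self, A, b):
        n = len(b)
        x = [0.0] * n
        for _ in range(self.max_iter):
            x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                     for i in range(n)]
            if max(abs(x_new[i] - x[i]) for i in range(n)) < self.tol:
                return x_new
            x = x_new
        return x

def run_timestep(solver, A, b):
    # The core code sees only solver.solve(A, b); swapping solvers
    # changes the speed, not the answer.
    return solver.solve(A, b)
```

Because `run_timestep` depends only on the `solve` interface, the solver can be upgraded behind it; this is exactly the kind of change that faces few impediments, since the approximations to the physics are untouched.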

In the minds of many users, the legacy code often provides the archetype of what a solution should look like.  This is especially true if the code is used to do useful programmatic work and to analyze or engineer important systems.  This mental picture provides an anchor for their computational picture of reality.  Should that picture become too entrenched, the users of the code begin to lose objectivity and the anchor becomes a bias.  This bias can be exceedingly dangerous in that the legacy code's solutions, errors, imperfections and all, become their view of reality.  This view becomes an outright impediment to improving on the legacy code's results.  It should be a maxim that results can always be improved; the model and method in the code are imperfect reflections of nature and should always be subject to improvement.  These improvements can happen via direct focused research, or via the serendipitous application of research from other sources.  Far too often the legacy code acts to suffocate research and stifle creativity because of the assumptions, both implicit and explicit, made in its creation.

One key concept with legacy codes is technical debt.  Technical debt is an accumulation of issues that have been solved in a quick and dirty manner rather than systematically.  If the legacy codes are full of methods that are not well understood, technical debt will accumulate and begin to dominate the development.  A related concept is technical inflation, where basic technology surpasses what is implemented in a code.  Most often this term is applied to aspects of computer science.  In reality technical inflation may also apply to the basic numerical methods in the legacy code.  If the code has insufficient flexibility, the numerical methods become fixed and rapidly lose any state-of-the-art character (if they even had it to begin with!).  Time only increases the distance between the code and the best available methods.  The lack of connectivity ultimately short-circuits the ability of the methods in the legacy code to influence the development of better methods.  All of these factors conspire to accelerate the rate of technical inflation.

In circumstances where the legacy "code" is replaced but the legacy methodology is retained (i.e., a fresh code base), the presence of the intellectual legacy can strangle innovation.  If the fresh code is a starting point for real extensions of the foundational methods and is not overly constrained by the past, progress can be had.  This sort of endeavor must be entered into carefully, with a well-thought-through plan.  Too often this is not the approach, and legacy methods are promulgated forward without genuine change.  With each passing year the intellectual basis the methodology was grounded upon ages, and understanding is lost.  Technical inflation sets in and the ability to close the gap recedes.  In many cases the code developers will lose sight of what is going on in the research community as it becomes increasingly irrelevant to them.  Eventually, the technical inflation becomes a cultural barrier that will threaten the code.  The results obtained with the code cease to be scientific, and the code developers become curators or priests.  They pay homage to the achievements of the past and sacrifice their careers at the altar of expediency.  The original developers of the methodology move from legendary to mythic status and all perspective is lost.  The users of the code become a cult.

Believe me, I've seen this in action.  It isn't pretty.  Solving the inherent problems at this stage requires the sorts of interventions that technical people suck at.

Depending on the underlying culture of the organization using and/or developing the code, the cult can revolve around different things.  At Los Alamos, it is a cult of physicists, with numerical methods, software and engineering slighted in importance.  At Sandia, it is engineering that defines the cult.  Engineers are better at software engineering too, so that gets more priority.  The numerical methods and the underlying models are slighted.  In the nuclear industry, legacy code and methods are rampant, with all things bowing to the cult of nuclear regulation.  This regulation is supposed to provide safety, but I fear its actual impact is to squash debate and attention to any details other than the regulatory demands.  This might be the most troubling cult I've seen.  It avoids any real deep thought and enshrines legacy code as the core of a legally mandated cult of calibration.  This calibration papers over a deep lack of understanding and leads to over-confidence or over-spending, probably both.  The calibration is so deeply entrenched in their problem-solving approach that they have no real idea how well the actual systems are being modeled.  Understanding is not even on the radar.  I've seen talented and thoughtful engineers self-limit their approach to problem solving because of the sort of fear the regulatory environment brings.  Instead of bringing their "A" game, the regulation induces a thought-paralyzing fear.

The way to avoid these issues is to avoid using legacy code and/or methods that are poorly understood.  Important application results should not depend on things you do not understand.  Codes are holistic things.  The quality of results depends on many things, and people tend to focus on single aspects of the code, usually in a completely self-absorbed manner.  Code users think that their efforts are the core of quality, which lends itself to justifying crude calibrations.  People developing closure models tend to focus on their efforts and believe that their impact is paramount.  Method developers focus on the impact of the methods.  The code developer thinks about the issues related to the quality of the code and its impact.  With regulatory factors, all independent thought is destroyed.  The fact is that all of these things are intertwined.  It is the nature of a problem that is not separable and must be solved in a unified fashion.  Every single aspect of the code, from its core methods, to the models it contains, to the manner of its use, must be considered in providing quality results.

What sort of person does V&V?

14 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The proper way to say the title of this talk is with more than a bit of disdain.  Too often I have encountered a disturbingly negative attitude toward V&V and those who practice it.  I think it is time for us to shoulder some of the blame and rethink our approach to engaging other scientists and engineers on the topic of modeling and simulation (M&S) quality.

V&V should be an easy sell to the scientific and engineering establishment.  It hasn't been; it has been resisted at every step.  V&V is basically a rearticulation of the scientific method we all learn, use, and ultimately love and cherish.  Instead, we find a great deal of animosity toward V&V, and outright resistance to including it as part of the M&S product.  To some extent V&V has been successful in growing as a discipline and focus, but too many barriers still exist.  Through hard-learned lessons I have come to the conclusion that a large part of the reason is the V&V community's approach.  For example, one of the worst ideas the V&V community has ever had is "independent V&V".  In this model V&V comes in independently and renders a judgment on the quality of M&S.  It ends up being completely adversarial with the M&S community, and a recipe for disaster.  We end up less engaged and hated by those we judge.  No lasting V&V legacy is created through the effort.  The M&S professionals treat V&V like a disease and spend a lot of time trying to simply ignore or defeat it.  This time could be better spent improving the true quality, which ought to be everyone's actual objective.  Archetypical examples of this approach in action are federal regulators (the NRC, the Defense Board…).  This idea needs to be modified into something collaborative, where the M&S professionals end up owning the quality of their work and V&V engages as a resource to improve quality.

The fact is that everyone doing M&S wants to do the best job they can, but to some degree they don't know how to do everything.  In a lot of cases they haven't even considered some of the issues we can help with.  V&V experts can provide knowledge and capability to improve quality if they are welcomed and trusted.  One of the main jobs of V&V should be to build trust so that its practitioners can bring their knowledge to important work.  In a sense, the V&V community should be quality "coaches" for M&S.  Another way the V&V community can help is to provide appropriately leveled tools for managing quality.  PCMM (the Predictive Capability Maturity Model) can be such a tool if its flexibility is increased.  Most acutely, PCMM needs a simpler version.  Most modeling and simulation professionals will do a very good job with some aspects of quality.  Other areas of quality fall outside their expertise or interest.  In a very real sense, PCMM is a catalog of quality measures that could be taken.  Following the framework helps M&S professionals keep all the aspects of quality in mind and within reach.  The V&V community can then provide the necessary expertise to carry out a deeper quality approach.

If V&V allows itself to get into the role of judge and jury on quality, progress will be poor.  V&V's job is to ask appropriate questions about quality as a partner with M&S professionals interested in improving the quality of their work.  By taking this approach we can produce an M&S future where quality continuously improves.

What is the role of Passion in Science and Engineering?

14 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

In most people's minds science and engineering don't evoke images of passionate devotion.  Instead they think of equations, computers, metal being cut and exotic instruments.  Nonetheless, like all manner of human endeavor, passion plays a key role in producing the most stunning progress and achievement.  Passion is one of those soft things we are so intensely uncomfortable with, and like most soft things, our success is keenly determined by how well we engage the issues around it.

So what am I passionate about?  What got me to do what I do today?  I started thinking about what drew me into computational simulation, led me to a PhD, and landed me in computational science today.  At the heart of the journey is a passionately idealistic sentiment: "if I can simulate something on the computer, it implies that I understand it."  To me, a focus on V&V is a natural outgrowth of this idealism.  Too often, I lose sight of what got me started.  Too often, I end up doing work that has too little meaning, too little heart.  I need to get back in touch with the original passion that propelled me through graduate school and those first few years as a professional.  The humdrum reality of work so often squeezes out my passion for modeling and simulation.  When you feel passionately about what you are doing, it stops being work.

Employers and employees are most comfortable with hard skills and tangible things that can be measured.  Often interviews and hiring revolve around the technical skills associated with the job; soft skills are either ignored or an afterthought.  Things like knowledge, problem-solving ability, money, and time are concrete and measurable.  They lend themselves to metrics.  Soft things are feelings (like passion), innovation, inclusion, emotion, and connectedness.  Most of these things are close to the core of what defines success, and they evade measurement.  Hard skills are necessary, but woefully insufficient.

Scientists, and especially engineers, are very uncomfortable with this.  Take a wonderfully written and insightful essay as an example.  Its quality is a matter of opinion, and can't be quantified in a manner that makes the scientific world happy.  Yet the quality exists, and the capacity of such an essay to move one's emotions, shape one's opinions, and enrich the lives of those who read it is clear.  If we don't value the soft stuff, success will elude us.

A well-written persuasive argument can shape action and ultimately lead to greater material gains in what can be measured.  The inability to measure this quality should in no way undermine its value.  Yet, so often, it does.  We end up valuing what we can measure and fail to address what we cannot.  We support the development of hard skills and fail to develop the soft skills.  Passion is one of these soft things that does not receive the care and feeding it needs.  It is overlooked as a path to productivity and innovation.  Fun is another, and its link to passion is strong.  People have fun doing things they have passion for.  With passion and fun come effortless work and greater achievement than skillful execution alone can deliver.

 Passion needs to be fed.  Passion can ignite innovation and productivity.  If you work at what you have passion for, you’ll likely be happier, and more productive.  Your life and the lives of those you touch will be better.  Too often people fail to find passion in work and end up channeling themselves into something outside of work where they can find passion.  At the foundation of many great achievements is passionate work.   As passion is lost all that is left is work, and with the loss of passion, the loss of possibility.

 Maybe you should find your own passion again.  Something propelled you to where you are today.  It must be powerful to have done that.

The Clay Prize and The Reality of the Navier-Stokes Equations

07 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 22 Comments

The existence of solutions to the (incompressible) Navier-Stokes equations is one of the Clay Institute's Millennium Prize problems.  Each of the problems is a wickedly difficult mathematical problem, and the Navier-Stokes existence proof is no exception.  Interest in the problem has been enlivened by a claim that it has been solved.  Terry Tao has made a series of stunning posts to his blog, majestically outlining the mathematical beauty and difficulty associated with the problem.  In my mind, the issue is whether it really matters to the real world, or whether it is formulated in a manner that leads to utility.

I fear the answer is no.  We might have enormous mathematical skill applied to a problem that has no meaning in reality.

The key word I left out so far is “incompressible”, and incompressible is not entirely physical.  An incompressible fluid is impossible.  Really.  It is impossible, but before you stop reading out of disgust with me let me explain why.

One characteristic of incompressibility is that it endows the Navier-Stokes equations with infinitely fast sound waves.  By infinite, I mean infinite: sound is propagated everywhere instantaneously.  This is clearly unphysical (a colleague of mine from Los Alamos deemed the sound waves "superluminal").  It violates a key principle known as causality.

More properly, incompressibility is an approximation to reality, and there is a distinct possibility that this approximation causes the resulting equations to depart from reality in ways essential for a given application.  For very many applications incompressibility is enormously useful as an approximation, but like all approximations it has limits on its utility.  Some of these limitations are well known.  Mathematically, incompressibility makes the equation set elliptic.  This ellipticity is at the heart of why the problem remains unsolved to this day.  Real fluids are not elliptic; real fluids have a finite speed of sound.  In fact, compressibility may hold the key to solving the most important real physical problem in fluid mechanics: turbulence.
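For reference, the incompressible Navier-Stokes system is the momentum equation plus the divergence-free constraint, and taking the divergence of the momentum equation exposes the elliptic character directly:

```latex
% Incompressible Navier-Stokes: divergence-free constraint plus momentum
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\, \nabla^{2} \mathbf{u}.

% Taking the divergence of the momentum equation and using
% \nabla \cdot \mathbf{u} = 0 yields the elliptic pressure Poisson equation:
\nabla^{2} p = -\rho\, \nabla \cdot \big[ (\mathbf{u} \cdot \nabla)\,\mathbf{u} \big].
```

Because the pressure satisfies a Poisson (elliptic) equation, a disturbance anywhere in the flow is felt everywhere instantly.  That is the mathematical face of the infinite sound speed.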

Turbulence is the chaotic motion of fluids that arises when a fluid is moving fast enough that the inertial force of the flow exceeds the viscous force to a certain degree.  Turbulence is characterized by the loss of perceptible dependence of the solution on the initial data, and carries with it a massive loss of predictability.  Turbulence is enormously important to engineering, and to science in general.  The universe is full of turbulent flows, as are the Earth's atmosphere and oceans.  Turbulence causes a loss of efficiency in almost any machine built by engineers.  It also drives mixing everywhere from stars to car engines to the cream in the coffee cup sitting next to my right hand.

There does exist a set of equations that has the physical properties the incompressible equations lack: the compressible Navier-Stokes equations.  The problem is that the Millennium Prize doesn't focus on this equation set, but rather on the incompressible version.  The question is whether going from compressible to incompressible has changed something essential about the equations, and whether that something is essential to understanding turbulence.  There is a broadly stated assumption about turbulence: that it is contained in the solution of the incompressible Navier-Stokes equations (see, for example, the very first page of Frisch's book "Turbulence: The Legacy of A. N. Kolmogorov").  In other words, it is assumed to be contained inside an equation set that has undeniably unphysical aspects.  Implicitly, the belief is that these details do not matter to turbulence.  I counter that we really don't understand turbulence well enough to make that leap.

Incompressible flow is a meaningful, oft-used approximation for very many engineering applications where the large-scale speed of the fluid flow is small.  The governing parameter is the Mach number, the ratio of the flow speed to the speed of sound; if the Mach number is low (less than 0.1 to 0.3), it is assumed that the flow can be taken to be incompressible.  Incompressibility is a useful approximation that allows the practical solution of many problems.  Turbulence is ubiquitous and particularly relevant in this low-speed limit.  For this reason scientists have believed that ignoring sound waves is reasonable and turbulence can be tackled with the incompressible approximation.
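As a tiny illustration of the rule of thumb (the 0.3 cutoff below is the common engineering heuristic mentioned above, not a hard physical boundary):

```python
# Rule-of-thumb check for the incompressible approximation: a sketch,
# with the 0.3 Mach-number cutoff taken from the usual engineering heuristic.
def mach_number(flow_speed, sound_speed):
    """Ratio of the large-scale flow speed to the speed of sound."""
    return flow_speed / sound_speed

def incompressible_ok(flow_speed, sound_speed, cutoff=0.3):
    """True when the low-Mach heuristic permits treating the flow as incompressible."""
    return mach_number(flow_speed, sound_speed) < cutoff

# Air at sea level has a sound speed of roughly 340 m/s.
print(incompressible_ok(30.0, 340.0))   # highway-speed air flow
print(incompressible_ok(250.0, 340.0))  # transonic jet
```

The point of the check is only that "incompressible" is a statement about a dimensionless parameter of the flow, not about the fluid itself.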

It is worth pointing out that turbulence is perhaps one of the most elusive topics known.  It has defied progress for a century with only a trickle of advances.  No single person has brought more understanding to bear on turbulence than the Russian scientist Kolmogorov.  His work established some very fundamental scaling laws that get right to the heart of the problem with incompressibility.

He established an analytical result that flows attain at very high Reynolds numbers (the Reynolds number is the ratio of inertial forces to dissipative forces), the 4/5 law.  Basically this law implies that turbulent flows are dissipative and the rate of dissipation is not determined by the value of the viscosity.  In other words, as the Reynolds number becomes large its precise value does not matter to the rate of dissipation.  The implications of this are massive and get to the heart of the issue with incompressibility.  This law implies that the flow has discontinuous dissipative solutions (gradients that become infinitely steep).  These structures would be strongly analogous to shock waves: they would appear as step functions at large scales, and any transition would take place over infinitesimally small distances.  These structures have eluded scientists both experimentally and mathematically.  I believe part of the reason for this evasion has been the insistence on incompressibility.
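Written out in the standard notation (this is the textbook form of the law, as it appears in Frisch's book):

```latex
% Kolmogorov's 4/5 law (1941): for the longitudinal velocity increment
% \delta u_\parallel(r) over a separation r in the inertial range,
\left\langle \bigl(\delta u_\parallel(r)\bigr)^3 \right\rangle \;=\; -\frac{4}{5}\,\varepsilon\, r ,
% where \varepsilon is the mean energy dissipation rate per unit mass.
% Note that the viscosity \nu does not appear: the dissipation rate is
% set by the large scales, independent of the value of \nu.
```

The absence of the viscosity from the right-hand side is exactly the "dissipation without a rate set by viscosity" property discussed above.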

Compressible flows have no such problems.  The flows physically and mathematically readily admit discontinuous solutions.  It is not really a challenge to see this, and shock waves form naturally.  Shock waves form at all Mach numbers, including the limit of zero Mach number.  These structures actually dissipate energy at the same rate as Kolmogorov's law would indicate (this was first observed by Hans Bethe in 1942; Kolmogorov's proof appeared in 1941, and there is no indication that they knew of each other's work).  It is worth looking at Bethe's derivation closely.  In the limit of zero Mach number, the compressible flow equations are dissipation-free if the expansion is taken to second order.  It is only when the third-order term in the asymptotic expansion is considered that dissipation arises and the shock looks different than an adiabatic smooth solution.  The question is whether taking the limit of incompressibility has removed the desired behavior from the Navier-Stokes equations.
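Bethe's weak-shock result can be stated explicitly; the form below is the standard one from the gas-dynamics literature (e.g., Landau & Lifshitz), quoted here as context:

```latex
% Entropy production across a weak shock (Bethe, 1942): for a pressure
% jump \Delta p = p_2 - p_1 at temperature T and specific volume V,
S_2 - S_1 \;=\; \frac{1}{12\,T}\left(\frac{\partial^2 V}{\partial p^2}\right)_{\!S} (\Delta p)^3 \;+\; O\bigl((\Delta p)^4\bigr) .
% The first- and second-order terms vanish identically, so to second
% order the transition is adiabatic; dissipation enters only at third
% order in the jump, the cubic scaling discussed in the text.
```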

I believe the answer is yes.  More explicitly, shock phenomena and turbulence are assumed to be largely independent except in strongly compressible flows where classical shocks are found.  The question is whether the fundamental nature of shocks changes continuously as the Mach number goes to zero.  We know that shocks continue to form all the way to a Mach number of zero.  In that limit, the dissipation of energy is proportional to the third power of the jump in variables (velocity, density, pressure).  This dependence matches the scaling associated with turbulence in the above-mentioned 4/5 law.  For shocks, we know that the passage of any shock wave creates entropy in the flow.  A question worth asking is "How does this bath of nonlinear sound waves act, and do these nonlinear features work together to produce the effects of turbulence?"  This is a very difficult problem, and we may have made it more difficult by focusing on the wrong set of equations.

There have been several derivations of the incompressible Navier-Stokes equations from the compressible equations.  These are called the zero-Mach-number equations and are an asymptotic limit.  Embedded in their derivation is the assumption that the equation of state for the fluid is adiabatic: no entropy is created.  This is the key to the problem.  Bethe's result is that shocks are non-adiabatic.  In the process of deriving the incompressible equations we have removed the most important behavior vis-à-vis turbulence, the dissipation of energy by purely inertial forces.

The problem with all the existing work I've looked at is that entropy-creating compressible flows are not considered in the passage to the zero-Mach limit.  In the process one of the most interesting and significant aspects of a compressible fluid is removed because the approximation doesn't go far enough.  It is possible, or maybe even likely, that these two phenomena are firmly connected.  A rate of dissipation independent of viscosity is a profound aspect of fluid flows.  We understand intrinsically how it arises from shock waves, yet its presence in turbulence remains mysterious.  It implies a singularity, a discontinuous derivative, which is exactly what a shock wave is.  We have chosen to systematically remove this aspect of the equations from incompressible flows.  Is it any wonder that the problem of turbulence is so hard to solve?  It is worth thinking about.

Mukhtarbay Otelbayev of Kazakhstan claims that he has proved the Navier-Stokes existence and smoothness problem.  I don't know whether he has or not; I don't have the mathematical chops to deliver that conclusion.  What I'm asking is: if he has, does it really matter to our understanding of real fluid dynamics?  I think it might not matter either way.

We only fund low risk research today

03 Monday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

While musing about Moore's law and algorithms last week something important occurred to me.  The systematic decision to emphasize hardware and performance increases via Moore's law over the larger gains that algorithms produce may have a distinct psychological basis.  For decades, Moore's law has provided a slow, steady improvement, kind of like investing in bonds instead of risky stocks.  Algorithmic improvements tend to be episodic and "quantum" in nature.  They can't be relied upon to deliver a steady diet of progress on the time scale of most program managers' tenures.

 I think the problem is that banking on Moore’s law is safe while banking on algorithms is risky.  The program manager looking to succeed and move up may not want to risk the uncertainty of having little progress made on their watch.  The risk of the appearance of scandal looms.  Money can be spent with no obvious progress.  Algorithmic work depends upon breakthroughs, which we all should know can’t be scheduled. 

This has profound implications for what we try to do as a society and nation.  I've bemoaned the lack of "moonshots" today.  These would be the big risky projects that have huge payoffs, but a large chance of failure.  It is much safer to rely upon the unsexy, low-risk, low-payoff work that is incremental.  Pressure to publish is much the same.  Incremental work provides easier publishing and less chance of crashing and burning.  Risky work is harder to publish, more prone to clashes with reviewers, and can be immensely time consuming.

 The deeper question is “what are we losing?”  How can we remove the element of risk and failure as an impediment to deeper investments in the future?  How can we create efforts that capture the imagination and provide greater accomplishments that benefit us all?  Right now, the barriers to risky, but rewarding, research must be lowered or society as a whole will suffer.  This suffering isn’t deeply felt because it reflects gains not seen and loss of possibility.

 


Why algorithms and modeling Beat Moore’s Law

28 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ 5 Comments

This is especially true with Moore's law on its deathbed.  With Moore's law going away, it isn't even going to be a contest in less than 10 years.

 This comes on the heels of 20 years of ignoring the following wisdom: “The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.” – Nick Trefethen, Oxford University 

Before digging into the depths of the topic, I'd like to point out that all of you make intimate contact with algorithms every day.  We collectively can't live without them in our computerized world.  An algorithm defines Google.  Without the algorithm there is no Google.  The computers powering the algorithm are an afterthought, a necessary detail, but not the soul of what Google does.  The world with Google is different from the world without it.  More perniciously, if the Google algorithm had never been invented, we'd never miss it.  Today Google defines access to information, and its ideas have built one of the biggest businesses on Earth.  This is the power of algorithms laid bare, the power to change whatever they touch.  Many other algorithms matter too, such as those in our smartphones, tablets and computers that let commerce take place securely, give us access to music, connect us with friends, and so on.  The algorithm is more important than the computer itself.  The computer is the body, but the algorithm is the soul, the ghost in the machine.

Last week I tried making the case that scientific computing needs to diminish its emphasis on high performance computing.  Supercomputers are going to be a thing of the past, and Moore's law is ending.  More importantly, the nature of computing is changing in ways far beyond the control of the scientific computing community.  Trying to maintain the focus on supercomputing is going to be swimming against an incoming tidal wave as mobile computing and the Internet become the dominant forces in computing.

Instead, we need to start thinking more about how we compute, which implies that algorithms and modeling should be our emphasis.  Even with Moore’s law in force, the twin impacts of modeling and algorithm improvement will be more powerful.   With Moore’s law going the way of the Dodo, thinking is the only game in town.  The problem is that funding and emphasis are still trying to push supercomputing like a bowling ball up a garden hose.  In the end we will probably waste a lot of money that could have been invested more meaningfully in algorithms, modeling and software.  In other words we need to invest where we can have leverage instead of the fool’s errand of a wasteful attempt at nostalgic recreation of a bygone era. 

Let's start by saying that the scaling that Moore's law gives is almost magical.  For scientific computing, though, the impact is heavily taxed by the poor scaling of improvements in computed solutions.  Take the archetype of scientific computing, big 3-D calculations that are used as the use case for supercomputers.  Add time dependence and you have a 4-D calculation.  Typically answers to these problems improve with first-order accuracy if you are lucky (this includes problems with shock waves and direct numerical simulation of turbulence).  This means that if you double the mesh density, the solution's error goes down by a factor of two.  That mesh doubling actually costs 16 times as much (if your parallel efficiency is perfect, and it never is).  The factor of 16 comes from having eight times as many computational points/cells/volumes and needing twice as many time steps.  So you need almost a decade of Moore's law improvement in computers to enable this.
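To make the arithmetic concrete, here is a small sketch of the scaling argument above; the two-year Moore's-law doubling time is my assumption for illustration:

```python
import math

# Cost of halving the error of a first-order method on a 4-D (3-D space +
# time) calculation: a sketch assuming perfect parallel efficiency and a
# Moore's-law doubling time of two years (both idealizations).
def cost_factor(refinement, space_dims=3):
    # Doubling the mesh multiplies work by 2^3 in space and 2 in time.
    return refinement ** (space_dims + 1)

def years_of_moores_law(cost, doubling_years=2.0):
    # Years of hardware doubling needed to supply a given cost factor.
    return doubling_years * math.log2(cost)

halving_cost = cost_factor(2)              # 16x the work for 2x the accuracy
wait = years_of_moores_law(halving_cost)   # 8 years at a 2-year doubling time
print(halving_cost, wait)
```

At a two-year doubling time, the 16x cost of one mesh doubling is eight years of hardware progress, which is the "almost a decade" in the text.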

What if, instead, you developed a method that cut the error in half?  Now you get the same accuracy as the decade of advances in supercomputing overnight (or in the two or three years that the research project needs), for a fraction of the cost of the computers.  Moreover, you can run the code on the same computers you already have.  This is faster, more economical, and adds to the base of human knowledge.

So why don't we take this path?  It doesn't seem to make sense.  Part of the reason is the people funding scientific computing.  Supercomputers (or Big Iron as they are affectionately known) are big capital outlays.  Politicians and computer executives love this because they are big and tangible; you can touch, hear and see them.  Algorithms, models and software are ephemeral, exotic and abstract.  Does it have to be this way?  I'm optimistic that change is coming.  The new generation of leaders is beginning to understand that software and algorithms are powerful.  Really, really powerful.  Anyone out there heard of Google?  The algorithm and software are the soul of one of the biggest and most powerful businesses on the face of the Earth.  All this power comes from an algorithm that gives us access to information as never before.  It is the algorithm that is reshaping our society.  Google still buys a lot of computing power, but it isn't exotic or researchy; it's just functional, an afterthought to the algorithmic heart of the monster.  Maybe it is time for the politicians to take notice.

We should have already noticed that software is more important than computers.  Anyone remember Microsoft?  Microsoft's takedown of IBM should have firmly planted the idea that software beats hardware in our minds.  One of the key points is that scientific computing, like the politicians, hasn't been reading the lessons in the events of the world correctly.  We are stuck in an old-fashioned mindset.  We still act like Seymour Cray is delivering his fantastic machines to the Labs, dreaming wistfully of those bygone days.  It is time to get with the times.  Today software, algorithms and data rule the roost.  We need to mount a resonant effort to ride this wave instead of fighting against it.

To strengthen my case let me spell out a host of areas where algorithms have provided huge advances and should have more to provide us.  A great resource for this is the ScaLeS workshop held in 2003, or the more recent PITAC report.  In addition to the cases spelled out there, a few more can be found in the literature.  The aggregate case is that advances in individual algorithmic technologies alone keep up with Moore's law, while aggregations of algorithms for more complex applications provide benefits that outpace Moore's law by orders of magnitude!  That's right, by factors of 100, or 1000, or more!

Before spelling a few of these cases out, something about Moore's law needs to be pointed out.  Moore's law applies to computers, but the technology powering the growth in computer capability is actually an ensemble of technologies comprising the computer.  Instead of tracking the growth in capability of a single area of science, computer capability is the integration of many disciplines.  The advances in different areas are uneven, but taken together they provide a smoother growth in computational performance.  Modeling and simulation is the same: algorithms from multiple areas work together to produce a capability.

Taken alone, the improvements in algorithms tend to be quantum leaps when a breakthrough is made.  This can be easily seen in the first of the cases I will spell out below, numerical linear algebra.  This area of algorithm development is a core method that very many simulation technologies depend upon.  New algorithms come along that change the scaling of the methodology, and the performance of the algorithm jumps.  Every other code that needs this capability also jumps, but these codes depend on many algorithms, and advances in each are independent.

Case 1: Numerical linear algebra – this is the simplest case and close to the core of the argument.  Several studies have shown that the gains in efficiency (essentially scaling in the number of equations solved) from linear algebra algorithms come very close to equaling the gains achieved by Moore's law.  The march over time from direct solutions, to banded solvers, to relaxation methods, to preconditioned Krylov methods and now multigrid methods has provided a steady advance in the efficiency of many simulations.
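To put rough numbers on this march, here is a sketch using textbook asymptotic operation counts for a model 2-D Poisson problem (constants are dropped, so only the ratios between methods are meaningful):

```python
# Rough asymptotic operation counts for solving a model 2-D Poisson problem
# with n unknowns.  These are textbook scalings with the constants dropped,
# so only the ratios between methods carry meaning.
solvers = {
    "dense Gaussian elimination": lambda n: n**3,
    "banded direct solve":        lambda n: n**2,
    "preconditioned Krylov":      lambda n: n**1.5,
    "multigrid":                  lambda n: n,
}

n = 10**6  # a million unknowns, modest by modern standards
for name, ops in solvers.items():
    print(f"{name:28s} ~ {ops(n):.1e} operations")

# The gap between dense elimination and multigrid at n = 10^6 is a factor
# that decades of Moore's law would be needed to match.
speedup = solvers["dense Gaussian elimination"](n) / solvers["multigrid"](n)
print(f"algorithmic speedup: {speedup:.0e}")
```

The point is the scaling, not the constants: each step in the march changed the exponent, which is exactly the kind of "quantum leap" described above.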

Case 2:  Finite differences for conservation laws – This is a less well-known instance, but equally compelling.  Before the late 1970's, simulations had two options: use high-order methods that produced oscillations, or low-order dissipative methods like upwind (or donor cell, as the Labs call it).  The oscillatory methods like Lax-Wendroff were stabilized with artificial dissipation, which was often heavy-handed.  Then limiters were invented.  All of a sudden one could have the security of upwind with the accuracy of Lax-Wendroff, without the negative side effects.  Broadly speaking the methods using limiters are non-oscillatory methods, and they are magic.

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

More than simply changing the accuracy, the limiters changed the solutions to be more physical.  They changed the models themselves.  The first-order methods have a powerful numerical dissipation that de facto laminarizes flow.  In other words, you never get anything that looks remotely turbulent with a first-order method.  With the second-order limited methods you do.  The flows act turbulent; the limiters give you an effective large eddy simulation!  For example, look at accretion discs: they never form (or flatten) with first-order methods, and with second-order limited methods, BINGO, they form.  The second-order non-oscillatory method isn't just more accurate, it is a different model!  It opens doors to simulations that were impossible before it was invented.

Solutions were immediately more accurate.  Even more accurate limiting approaches have since been developed, leading to high-order methods.  These methods have been slow to be adopted, in part because of the tendency to port the legacy methods, and because of the degree to which the original limiters were revolutionary.  New opportunities exist to further these gains in accuracy.  For now, it isn't clear whether the newer, more accurate methods can promise the revolutionary advances offered by the first non-oscillatory methods, but one can never be certain.
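A toy version of the upwind-versus-limited comparison is easy to write down.  The sketch below advects a smooth profile for one full period with first-order upwind and with a minmod-limited second-order (MUSCL-type) scheme; the schemes match those named above, while the test profile, resolution, and CFL number are my choices for illustration:

```python
import numpy as np

def minmod(a, b):
    # Minmod limiter: zero at extrema, the smaller one-sided slope otherwise.
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u0, a, dx, dt, nsteps, limited=True):
    """Advance u_t + a u_x = 0 on a periodic grid, assuming a > 0 and CFL <= 1."""
    u = u0.copy()
    nu = a * dt / dx  # CFL number
    for _ in range(nsteps):
        if limited:
            # Limited slope, then an upwind-biased second-order face value.
            s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
            uface = u + 0.5 * (1.0 - nu) * s
        else:
            uface = u  # first-order upwind face value
        flux = a * uface
        # Conservative update: u_i -= (dt/dx) * (F_{i+1/2} - F_{i-1/2})
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
u0 = np.sin(2.0 * np.pi * x) ** 4  # smooth periodic profile
a, cfl = 1.0, 0.5
dt = cfl * dx / a
nsteps = int(round(1.0 / dt))  # one full period: the exact solution returns to u0
err1 = np.abs(advect(u0, a, dx, dt, nsteps, limited=False) - u0).mean()
err2 = np.abs(advect(u0, a, dx, dt, nsteps, limited=True) - u0).mean()
print(err1, err2)  # the limited scheme is markedly more accurate
```

Running this shows the upwind result smeared flat by numerical dissipation while the limited result stays close to the initial profile, without the oscillations Lax-Wendroff alone would produce.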

Case 3: Optimization – this was spelled out in the PCAST report from 2010.  Algorithms improved the performance of optimization by 43,000 times over a nearly twenty-year period; computer hardware only improved things by a factor of 1000.  Yet we seem to systematically invest more in hardware.  Mind-bending!

Case 4: Plasma Physics – The ScaLeS report spelled out the case for advances in plasma physics, which more explicitly combined algorithms and modeling for huge advances.  Over twenty years algorithmic advances coupled with modeling changes provided more than a factor of 1000 improvement in performance where computational power only provided a factor of 100. 

The plot from the report clearly shows the “quantum” nature of the jumps in performances as opposed to the “smooth” plot for Moore’s law.  It is not clear what role the nature of the improvement plays in the acceptance.  Moore’s law provides a “sure” steady improvement while algorithms and modeling provide intermittent jumps in performance at uneven intervals.  Perhaps this embodies the short-term thinking that pervades our society today.  Someone paying for computational science would rather get a small, but sure improvement rather than the risky proposition of a huge, but very uncertain advance.  It is a sad commentary on the state of R&D funding.

Case 5: N-Body problems – A similar plot exists for N-body simulations where algorithms have provided an extra factor of 1000 improvement in capability compared with hardware.

Case 6: Data Science – There have been instances in this fledgling science where a change in an analysis algorithm can speed up the achievement of results by a factor of 1000.

We’ve been here before.  The ScaLeS workshop back in 2003 tried to make the case for algorithms, and the government response has been to double-down on HPC.  All the signs point to a continuation of this foolhardy approach.  The deeper problem is that those that fund Science don’t seem to have much faith in the degree to which innovative thinking can solve problems.  It is much easier to just ride the glide path defined by Moore’s law.  

That glide path is now headed for a crash landing.  Will we just limp along without advances, or invest in the algorithms that can continue to provide an ever growing level of computational simulation capability for the scientific community?  This investment will ultimately pay off in spurring the economy of the future allowing job growth, as well as the traditional benefits in National Security.

A couple of additional aspects of this change are notable.  Determining whether an algorithm and modeling approach is better than the legacy is complex.  This is a challenge to the verification and validation methodology.  A legacy method on a finer mesh is a simple verification and error-estimation problem (which is too infrequently actually carried out!).  A new method and/or model often produces a systematically different answer that requires much more subtle examination.  This then carries over to the delicate matter of providing confidence to those who might use the new methodology.  In many cases users have already accepted the results computed with the legacy method, and the nearby answers obtainable with a refined mesh are close enough to inspire confidence (or simply be familiar).  Innovative approaches providing different-looking answers will induce the opposite effect, and inspire suspicion.  This is a deep socio-cultural problem rather than a purely technical issue, but in its solution lie the roots of success or failure.

Should Computational Science Focus So Much on High Performance Computing?

21 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ 4 Comments

No.

Scientific computing and high performance computing are virtually synonymous.  Should they be? Is this even a discussion worth having? 

It should be.  It shouldn’t be an article of faith.

I’m going to argue that perhaps they shouldn’t be so completely intertwined.    The energy in the computing industry is nearly completely divorced from HPC.  HPC is trying to influence computing industry to little avail.  In doing so, scientific computing is probably missing opportunities to ride the wave of technology that is transforming society.  The societal transformation brings with it economic forces that HPC never had.  It is unleashing forces that will have a profound impact on how our society and economy look for decades to come.

Computing is increasingly mobile, and increasingly networked.  Access to information and computational power is omnipresent in today's world.  It is not an overstatement to say that computers and the Internet are reshaping our social, political and scientific worlds.  Why shouldn't scientific computing be similarly reshaped?

HPC is trying to maintain the connection of scientific computing and supercomputing.  Increasingly, supercomputing seems passé and a relic of the past, just as mainframes are relics.  Once upon a time scientific computing and mainframes dominated the computer industry.  Government Labs had the ear of the computing industry, and to a large extent drove the technology.  No more.  Computing has become a massive element in the World’s economy with science only being a speck on the windshield.  The extent to which scientific research is attempting to drive computing is becoming ever more ridiculous, and shortsighted.

At a superficial level all the emphasis on HPC is reasonable, but it leads to a groupthink that is quite damaging in other respects.  We expect all of our simulations of the real world to get better if we have a bigger, faster computer.  In fact, for many simulations we have ended up relying upon Moore's law to do all the heavy lifting.  Our simulations just get better because the computer is faster and has more memory.  All we have to do is make sure we have a convergent approximation as the basis of the simulation.  This entire approach is reasonable, but suffers from intense intellectual laziness.

There I said it.  The reliance on Moore’s law is just plain lazy. 

Rather than focus on smarter, better, faster solution methods, we just let the computer do all the work.  It is lazy.  As a result the most common approach is to simply take the old-fashioned computer code and port it to the new computer.  Occasionally, this requires us to change the programming model, but the intellectual guts of the program remain fixed.  Because consumers of simulations are picky, the sales pitch is simple: "You get the same results, only faster," "no thinking required!"  It is lazy, and it serves science, particularly computational science, poorly.

Not only is it lazy, it is inefficient.  We are failing to properly invest in advances in algorithms.  Study after study has shown that the gains from algorithms exceed those of the computers themselves.  This is in spite of the relatively high investment in computing compared to algorithms.  Think what a systematic investment in better algorithms could do.

It is time for this to end.  Moreover there is a very dirty little secret under the hood of our simulation codes.  For the greater part, our simulation codes are utilizing an ever-decreasing portion of the potential performance offered by modern computing.  This inability to utilize computing is just getting worse and worse.  Recently, I was treated to a benchmark of the newest chips, and for the first time the actual runtimes for the codes started to get longer.  The new chips won’t even run the code faster, efficiency be damned.  A large part of the reason for such poor performance is that we have been immensely lazy in moving simulation forward for the last quarter of a century.

For example, I ran the Linpack benchmark on the laptop I'm writing this on.  The laptop is about a generation behind the top of the line, but rates as a 50 GFLOP machine!  It is equivalent to the fastest computer in the world 20 years ago, one that cost millions of dollars.  My iPad 4 is equivalent to a Cray-2 (1 GFLOP), and I just use it for email, web browsing, and note taking.  Twenty years ago I would have traded my firstborn simply to have access to this.  Today it sits idle most of the day.  We are surrounded by computational power, and most of it goes to waste.

The ubiquity of computational power is actually an opportunity to overcome our laziness and start doing something.  Most of our codes are using about 1% of the available power.  Worse yet, that 1% utilization may look fantastic very soon.  Back in the days of Crays we could expect to squeeze out 25-50% of the power with sufficiently vectorized code.  Let's just say that I could run a code that got 20% of the potential of my laptop: now my 50 GFLOP laptop is delivering what a one-TeraFLOP computer delivers at the typical 1% efficiency.  No money spent, just working smarter.
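The arithmetic behind that comparison, spelled out (the 1% and 20% efficiencies are the figures assumed above, not measurements):

```python
# Effective delivered performance: a sketch using the efficiencies assumed
# in the text (1% for a typical ported code, 20% for a well-tuned one).
def effective_gflops(peak_gflops, efficiency):
    return peak_gflops * efficiency

laptop_tuned = effective_gflops(50.0, 0.20)        # 50 GFLOP laptop at 20%
teraflop_typical = effective_gflops(1000.0, 0.01)  # 1 TFLOP machine at 1%
print(laptop_tuned, teraflop_typical)  # both deliver the same 10 GFLOPs
```

Tuning a code from 1% to 20% efficiency is worth a twenty-fold hardware upgrade, which is the whole argument in two lines.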

Beyond the laziness of just porting old codes with old methods, we also expect the answers to simply get better by having less discrete error (i.e., a finer mesh).  This should be true and normally is, but it also fails to rely upon the role that a better method can play.  Again, the reliance on brute force through a better computer is an aspect of outright intellectual laziness.  To get this performance we need to write new algorithms and new implementations.  It is not sufficient to simply port the codes.  We need to think, we need to ask the users of simulation results to think, and we need to have faith in the ability of the human mind to create new, better solutions to old and new problems.  And this only applies to the areas of science where computing is firmly established; there are also the new areas and opportunities that our intimately connected and computationally rich world has to offer.

These points are just the tip of the proverbial iceberg.  The deluge of data and our increasingly networked world offer other opportunities most of which haven’t even been thought of.  It is time to put our thinking caps back on.  They’ve been gathering dust for too long.

How V&V is like HR

14 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

My wife has a degree in HR and has worked as an HR person.  Hate her already?  HR is among the most despised parts of many companies and organizations, mostly because they act as the policy police.  Almost everyone I know at work hates HR since they don't seem to help and just get in the way.  My wife knows that HR isn't very popular because of its policing through policy and would like to see HR engage in its work differently.  In fact, HR departments aren't the happiest or healthiest places anyway.  It isn't clear how much self-loathing is involved, but it's clear that HR is a stressful job.  Happily for her, she has moved on to new challenges.

As she relayed to me, HR wishes it could be more positive by working to help manage people better.  They would apply their efforts to creating a better working environment for people to build and nurture their careers.  Instead, HR carries water for the legal department, and a lot of what they do is directly related to protecting their organizations from litigation.  They get to do this without raking in the dough like the lawyers!

Sometimes it helps to see yourself through someone else's eyes.  I remember during that conversation with my wife, listening to her describe how she is perceived at work: working for human resources, and being almost universally despised.  I have hated the HR people wherever I've worked, so it sounded reasonable.  Suddenly, I realized that the way she was talking about HR sounded exactly like how I would relate people's reactions to V&V.  HR is full of well-intentioned individuals, but acts to bind people's actions because of various legal concerns, or corporate policies (often in service of legal concerns).  HR enforces the corporate processes related to personnel.  They get in the way of emotion and desire, and for this they are roundly hated by broad swathes of the workforce.  They also complicate decisions based on management judgment by requiring hiring and firing decisions to be well-documented and sound from perspectives far beyond the local management.

V&V often does the same sorts of things.  V&V likes process; V&V likes to criticize how people do their modeling and simulation work.  They like to introduce doubt where confidence once reigned (no matter how appropriate the doubt actually is, people don't like it!).  V&V likes documentation and evidence.  What does V&V get for all of this emphasis on quality?  They are despised.  As my friend Tim has said, "V&V takes all the fun out of computing."  Gone is the wonder of being able to simulate something, replaced with process and questions.  V&V is incredibly well-intentioned, but the forceful way of going about injecting quality can be distinctly counter-productive, just like HR.

Just as HR has realized its villainous reputation, I believe V&V is perceived similarly.  Both HR and V&V could benefit from a reboot of their roles.  HR professionals would like to be sources of positive energy for employers, and quite honestly most employers need some positive energy these days.  More and more, the employer-employee relationship has become adversarial.  Benefits are worse every year and the compensation disparity from top to bottom has skyrocketed.  HR would like to be a force for positive employment experiences, and for employee-centered, career-oriented development.

V&V could be a direct parallel.  The tension with V&V is the drive to get results for a given application (product) above all else.  V&V sits there whining about quality while a job needs to get done.  The product line for an organization is what the customer cares about, and it gets the credit.  Too often V&V is viewed as simply getting in the way of progress.  Instead, V&V should craft a different path, like the one HR aspires to.

There is a natural tension between executing an organization’s mission in the most mission-appropriate fashion and staying entirely within modern personnel practices.  The policing of personnel actions by HR is usually taken as an impediment to “getting the job done.”  The same holds for doing proper and adequate V&V of mission-focused computational simulation: there is a tension between executing the mission effectively and refining the credibility of the simulation through V&V.  Both V&V and HR could stand to approach their roles in the modern world in a more positive and mission-constructive fashion.

The whole issue can be cast in the frame of coaching versus refereeing, which parallels managing and leading versus policing and punishing.  Effective management leads to good outcomes through cooperation with people, whereas policing forces people to work toward outcomes under threat.  People would rather be managed positively than threatened with the sort of punishment policing implies.  Ultimately, managed results are better (and cheaper) than those driven by threat of force or punishment.

V&V often acts the same way by defining policy for how modeling and simulation is done.  This manner of policing ends up being counter-productive in much the same way that HR’s policing works against them.  When thinking about how V&V is applied to computational science, consider how similarly high-minded outcomes are driven by policy in other areas of business, and how you perceive them.  When V&V acts like HR, the results will be taken accordingly; moreover, once the policing is gone, the good behavior will rapidly disappear.  Instead, both V&V and HR should focus on teaching or coaching the principles that lead to best practices.  This would produce real, sustained improvement far more effectively than policies with the same objective.
