
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: February 2014


Why algorithms and modeling Beat Moore’s Law

28 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ 5 Comments

This is especially true with Moore’s law on its deathbed. With Moore’s law going away, it isn’t even going to be a contest in less than 10 years.

 This comes on the heels of 20 years of ignoring the following wisdom: “The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.” – Nick Trefethen, Oxford University 

Before digging into the depths of the topic, I’d like to point out that all of you make intimate contact with algorithms every day. We collectively can’t live without them in our computerized world. An algorithm defines Google. Without the algorithm there is no Google. The computers powering the algorithm are an afterthought, a necessary detail, but not the soul of what Google does. The world with Google is different from the world without it. More perniciously, if the Google algorithm had never been invented, we’d never miss it. Today Google defines access to information, and its ideas have built one of the biggest businesses. This is the power of algorithms laid bare, the power to change whatever they touch. Many other algorithms matter too, such as those on our smartphones, tablets and computers that let commerce take place securely, give us access to music, connect us with friends, and so on. The algorithm is more important than the computer itself. The computer is the body, but the algorithm is the soul, the ghost in the machine.

Last week I tried making the case that scientific computing needs to diminish its emphasis on high performance computing. Supercomputers are going to be a thing of the past, and Moore’s law is ending. More importantly, the nature of computing is changing in ways far beyond the control of the scientific computing community. Trying to maintain the focus on supercomputing is going to be swimming against an incoming tidal wave as mobile computing and the Internet become the dominant forces in computing.

Instead, we need to start thinking more about how we compute, which implies that algorithms and modeling should be our emphasis.  Even with Moore’s law in force, the twin impacts of modeling and algorithm improvement will be more powerful.   With Moore’s law going the way of the Dodo, thinking is the only game in town.  The problem is that funding and emphasis are still trying to push supercomputing like a bowling ball up a garden hose.  In the end we will probably waste a lot of money that could have been invested more meaningfully in algorithms, modeling and software.  In other words we need to invest where we can have leverage instead of the fool’s errand of a wasteful attempt at nostalgic recreation of a bygone era. 

Let’s start by saying that the scaling Moore’s law gives is almost magical. For scientific computing, though, the impact is heavily taxed by the poor scaling of improvements in computed solutions. Take the archetype of scientific computing, the big 3-D calculations that serve as the use case for supercomputers. Add time dependence and you have a 4-D calculation. Typically, answers to these problems improve with first-order accuracy if you are lucky (this includes problems with shock waves and direct numerical simulation of turbulence). This means that if you double the mesh density, the solution’s error goes down by a factor of two. That mesh doubling actually costs 16 times as much (if your parallel efficiency is perfect, and it never is). The factor of 16 comes from having eight times as many computational points/cells/volumes and needing twice as many time steps. So you need almost a decade of Moore’s law improvement in computers to enable this.
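To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The Moore’s-law doubling time (taken here as roughly two years) is my own assumption for illustration, not a number from any particular study.

    import math

    # Halve the error of a first-order, time-dependent 3-D calculation
    # by doubling the mesh density in every direction.
    refinement = 2
    order = 1                                   # first-order accuracy: error ~ h^order

    error_reduction = refinement ** order       # error drops by 2x
    work_factor = refinement ** 3 * refinement  # 8x more cells, 2x more time steps = 16x

    moore_doubling_years = 2.0                  # assumed doubling time for Moore's law
    years_needed = math.log2(work_factor) * moore_doubling_years

    print(f"error reduced by {error_reduction}x, work increased by {work_factor}x")
    print(f"Moore's-law years needed to absorb the extra work: {years_needed:.0f}")
    # -> the error improves by 2x, the work grows 16x, roughly 8 years of hardware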

What if, instead, you developed a method that cut the error in half? Now you get the same accuracy as a decade of advances in supercomputing overnight (or in the two or three years the research project needs), for a fraction of the cost of the computers. Moreover, you can run the code on the same computers you already have. This is faster, more economical, and adds to the base of human knowledge.

So why don’t we take this path? It doesn’t seem to make sense. Part of the reason is the people funding scientific computing. Supercomputers (or Big Iron as they are affectionately known) are big capital outlays. Politicians and computer executives love this because supercomputers are big and tangible; you can touch, hear and see them. Algorithms, models and software are ephemeral, exotic and abstract. Does it have to be this way? I’m optimistic that change is coming. The new generation of leaders is beginning to understand that software and algorithms are powerful. Really, really powerful. Anyone out there heard of Google? The algorithm and software are the soul of one of the biggest and most powerful businesses on the face of the Earth. All this power comes from an algorithm that gives us access to information as never before. It is the algorithm that is reshaping our society. Google still buys a lot of computing power, but it isn’t exotic or researchy, it’s just functional, and an afterthought to the algorithmic heart of the monster. Maybe it is time for the politicians to take notice.

We should have already noticed that software was more important than computers. Anyone remember Microsoft? Microsoft’s takedown of IBM should have firmly planted the idea that software beats hardware in our minds. One of the key points is that scientific computing, like the politicians, hasn’t been reading the lessons of world events correctly. We are stuck in an old-fashioned mindset. We still act like Seymour Cray is delivering his fantastic machines to the Labs, dreaming wistfully of those bygone days. It is time to get with the times. Today software, algorithms and data rule the roost. We need to mount a concerted effort to ride this wave instead of fighting against it.

To strengthen my case let me spell out a host of areas where algorithms have provided huge advances and should have more to provide us. A great resource for this is the ScaLeS workshop held in 2003, or the more recent PITAC report. In addition to the cases spelled out there, a few more can be found in the literature. The aggregate case to be made is that advances in individual algorithmic technologies alone keep up with Moore’s law, but combinations of algorithms for more complex applications provide benefits that outpace Moore’s law by orders of magnitude! That’s right, by factors of 100, or 1000 or more!

Before spelling a few of these cases out, something about Moore’s law needs to be pointed out. Moore’s law applies to computers, but the technology powering the growth in computer capability is actually an ensemble of technologies that make up the computer. Rather than tracking the growth in capability of a single area of science, computer capability integrates advances from many disciplines. The advances in the different areas are uneven, but taken together they provide a smoother progression in computational performance. Modeling and simulation work the same way. Algorithms from multiple areas work together to produce a capability.

Taken alone, the improvements in algorithms tend to be quantum leaps when a breakthrough is made. This can be easily seen in the first of the cases I will spell out below, numerical linear algebra. This area of algorithm development is a core method that very many simulation technologies depend upon. When new algorithms come along that change the scaling of the methodology, the performance of the algorithm jumps. Every other code that needs this capability also jumps, but these codes depend on many algorithms, and advances in those arrive independently.

Case 1: Numerical linear algebra – this is the simplest case and close to the core of the argument. Several studies have shown that the gains in efficiency (essentially scaling in the number of equations solved) from linear algebra algorithms come very close to equaling the gains achieved by Moore’s law. The march over time from direct solutions, to banded solvers, to relaxation methods, to preconditioned Krylov methods and now multigrid methods has provided a steady advance in the efficiency of many simulations.
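To give a feel for the size of those gains, here is a rough sketch comparing the textbook asymptotic operation counts for solving a model 2-D Poisson problem with N unknowns. The exponents are the standard ones quoted for each family of methods; the constants are ignored and the problem size is an arbitrary choice, so this illustrates the scaling argument, not a benchmark.

    # Illustrative asymptotic operation counts (constants omitted).
    methods = {
        "dense Gaussian elimination": lambda N: N**3,
        "banded direct solve":        lambda N: N**2,
        "Gauss-Seidel relaxation":    lambda N: N**2,
        "preconditioned CG":          lambda N: N**1.5,
        "full multigrid":             lambda N: N,
    }

    N = 10**6                                    # a million unknowns, arbitrary
    base = methods["full multigrid"](N)

    for name, cost in methods.items():
        print(f"{name:28s} ~ {cost(N):10.1e} ops "
              f"({cost(N) / base:8.1e} x multigrid)")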

Case 2: Finite differences for conservation laws – This is a less well-known instance, but equally compelling. Before the late 1970s simulations had two options: use high-order methods that produced oscillations, or low-order dissipative methods like upwind (or donor cell, as the Labs call it). The oscillatory methods like Lax-Wendroff were stabilized with artificial dissipation, which was often heavy-handed. Then limiters were invented. All of a sudden one could have the security of upwind with the accuracy of Lax-Wendroff, without the negative side effects. Broadly speaking, the methods using limiters are non-oscillatory methods, and they are magic.

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

More than simply changing the accuracy, the limiters changed the solutions to be more physical. They changed the models themselves. The first-order methods have a powerful numerical dissipation that de facto laminarizes the flow. In other words, you never get anything that looks remotely like turbulence with a first-order method. With the second-order limited methods you do. The flows act turbulent; the limiters give you an effective large eddy simulation! For example, look at accretion discs: they never form (or flatten) with first-order methods, and with second-order limited methods, BINGO, they form. The second-order non-oscillatory method isn’t just more accurate, it is a different model! It opens doors to simulations that were impossible before it was invented.

Solutions were immediately more accurate.  Even more accurate limiting approaches have been developed leading to high-order methods.  These methods have been slow to be adopted in part because of the tendency to port the legacy methods, and the degree to which the original limiters were revolutionary.   New opportunities exist to further these gains in accuracy.  For now, it isn’t clear whether the newer more accurate methods can promise the revolutionary advances offered by the first non-oscillatory methods, but one can never be certain.
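To make the idea tangible, here is a minimal sketch of a minmod-limited, second-order update for 1-D linear advection on a periodic grid. It is a toy illustration of the class of non-oscillatory methods described above, not any particular production scheme: zeroing the limited slope recovers dissipative first-order upwind, while an unlimited centered slope recovers an oscillatory Lax-Wendroff-type method.

    import numpy as np

    def minmod(a, b):
        # Zero at extrema (sign change), otherwise the smaller of the two slopes.
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def step(u, c):
        # One step of u_t + a u_x = 0 (a > 0) with Courant number c = a*dt/dx.
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited cell slope
        u_face = u + 0.5 * (1.0 - c) * slope                   # value at interface i+1/2
        flux = c * u_face                                      # scaled upwind flux
        return u - (flux - np.roll(flux, 1))

    # Advect a square pulse: it stays sharp and bounded in [0, 1], no overshoot.
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
    for _ in range(100):
        u = step(u, c=0.5)
    print(f"min = {u.min():.3f}, max = {u.max():.3f}")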

Case 3: Optimization – this was spelled out in the PCAST report from 2010. Algorithms improved the performance of optimization by a factor of 43,000 over a nearly twenty-year period, while computer hardware only improved things by a factor of 1,000. Yet we seem to systematically invest more in hardware. Mind-bending!

Case 4: Plasma Physics – The ScaLeS report spelled out the case for advances in plasma physics, which more explicitly combined algorithms and modeling for huge advances. Over twenty years, algorithmic advances coupled with modeling changes provided more than a factor of 1000 improvement in performance, while computational power only provided a factor of 100.

The plot from the report clearly shows the “quantum” nature of the jumps in performance, as opposed to the “smooth” plot for Moore’s law. It is not clear what role the nature of the improvement plays in its acceptance. Moore’s law provides a “sure,” steady improvement while algorithms and modeling provide intermittent jumps in performance at uneven intervals. Perhaps this embodies the short-term thinking that pervades our society today. Someone paying for computational science would rather get a small but sure improvement than take the risky proposition of a huge but very uncertain advance. It is a sad commentary on the state of R&D funding.

Case 5: N-Body problems – A similar plot exists for N-body simulations where algorithms have provided an extra factor of 1000 improvement in capability compared with hardware.
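The kind of gap behind such factors is easy to sketch from the asymptotics alone: a direct pairwise force evaluation costs on the order of N^2 operations per step, while a Barnes-Hut tree code and the fast multipole method scale roughly as N log N and N. The counts below drop all constants, and the particle number is an arbitrary choice, so this only illustrates the scaling argument.

    import math

    N = 10**7                            # ten million particles (arbitrary)

    direct    = N**2                     # all pairs
    tree_code = N * math.log2(N)         # Barnes-Hut, ~O(N log N)
    fmm       = N                        # fast multipole method, ~O(N)

    print(f"tree code saves ~{direct / tree_code:.1e}x in operations over direct")
    print(f"FMM saves       ~{direct / fmm:.1e}x in operations over direct")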

Case 6: Data Science – There have been instances in this fledgling science where a change in an analysis algorithm can speed up the achievement of results by a factor of 1000.

We’ve been here before. The ScaLeS workshop back in 2003 tried to make the case for algorithms, and the government response has been to double down on HPC. All the signs point to a continuation of this foolhardy approach. The deeper problem is that those that fund Science don’t seem to have much faith in the degree to which innovative thinking can solve problems. It is much easier to just ride the glide path defined by Moore’s law.

That glide path is now headed for a crash landing.  Will we just limp along without advances, or invest in the algorithms that can continue to provide an ever growing level of computational simulation capability for the scientific community?  This investment will ultimately pay off in spurring the economy of the future allowing job growth, as well as the traditional benefits in National Security.

A couple of additional aspects of this change are notable. Determining whether a new algorithm and modeling approach is better than the legacy one is complex. This is a challenge to the verification and validation methodology. A legacy method on a finer mesh is a simple verification and error estimation problem (which is too infrequently actually carried out!). A new method and/or model often produces a systematically different answer that requires much more subtle examination. This then carries over to the delicate matter of providing confidence to those who might use the new methodology. In many cases users have already accepted the results computed with the legacy method, and the nearby answers obtainable with a refined mesh are close enough to inspire confidence (or simply be familiar). Innovative approaches providing different-looking answers will induce the opposite effect and inspire suspicion. This is a deep socio-cultural problem rather than a purely technical issue, but in its solution lie the roots of success or failure.

Should Computational Science Focus So Much on High Performance Computing?

21 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ 4 Comments

No.

Scientific computing and high performance computing are virtually synonymous.  Should they be? Is this even a discussion worth having? 

It should be.  It shouldn’t be an article of faith.

I’m going to argue that perhaps they shouldn’t be so completely intertwined. The energy in the computing industry is nearly completely divorced from HPC. HPC is trying to influence the computing industry to little avail. In doing so, scientific computing is probably missing opportunities to ride the wave of technology that is transforming society. The societal transformation brings with it economic forces that HPC never had. It is unleashing forces that will have a profound impact on how our society and economy look for decades to come.

Computing is increasingly mobile, and increasingly networked. Access to information and computational power is omnipresent in today’s world. It is no exaggeration to say that computers and the Internet are reshaping our social, political and scientific worlds. Why shouldn’t scientific computing be similarly reshaped?

HPC is trying to maintain the connection of scientific computing and supercomputing.  Increasingly, supercomputing seems passé and a relic of the past, just as mainframes are relics.  Once upon a time scientific computing and mainframes dominated the computer industry.  Government Labs had the ear of the computing industry, and to a large extent drove the technology.  No more.  Computing has become a massive element in the World’s economy with science only being a speck on the windshield.  The extent to which scientific research is attempting to drive computing is becoming ever more ridiculous, and shortsighted.

At a superficial level all the emphasis on HPC is reasonable, but it leads to a groupthink that is quite damaging in other respects. We expect all of our simulations of the real world to get better if we have a bigger, faster computer. In fact, for many simulations we have ended up relying upon Moore’s law to do all the heavy lifting. Our simulations just get better because the computer is faster and has more memory. All we have to do is make sure we have a convergent approximation as the basis of the simulation. This entire approach is reasonable, but it suffers from intense intellectual laziness.

There I said it.  The reliance on Moore’s law is just plain lazy. 

Rather than focus on smarter, better, faster solution methods, we just let the computer do all the work. It is lazy. As a result the most common approach is to simply take the old-fashioned computer code and port it to the new computer. Occasionally this requires us to change the programming model, but the intellectual guts of the program remain fixed. Because consumers of simulations are picky, the sales pitch is simple: “You get the same results, only faster,” and “no thinking required!” It is lazy, and it serves science, particularly computational science, poorly.

Not only is it lazy, it is inefficient. We are failing to properly invest in advances in algorithms. Study after study has shown that the gains from algorithms exceed those of the computers themselves. This is in spite of the relatively high investment in computing compared to algorithms. Think what a systematic investment in better algorithms could do.

It is time for this to end.  Moreover there is a very dirty little secret under the hood of our simulation codes.  For the greater part, our simulation codes are utilizing an ever-decreasing portion of the potential performance offered by modern computing.  This inability to utilize computing is just getting worse and worse.  Recently, I was treated to a benchmark of the newest chips, and for the first time the actual runtimes for the codes started to get longer.  The new chips won’t even run the code faster, efficiency be damned.  A large part of the reason for such poor performance is that we have been immensely lazy in moving simulation forward for the last quarter of a century.

For example, I ran the Linpack benchmark on the laptop I’m writing this on. The laptop is about a generation behind the top of the line, but rates as a 50 GFLOP machine! It is equivalent to the fastest computer in the World 20 years ago, one that cost millions of dollars. My iPad 4 is equivalent to a Cray-2 (1 GFLOP), and I just use it for email, web-browsing, and note taking. Twenty years ago I would have traded my firstborn simply to have access to this. Today it sits idle most of the day. We are surrounded by computational power, and most of it goes to waste.

The ubiquity of computational power is actually an opportunity to overcome our laziness and start doing something. Most of our codes are using about 1% of the available power. Worse yet, that 1% utilization may look fantastic very soon. Back in the days of the Crays we could expect to squeeze out 25-50% of the power with sufficiently vectorized code. Let’s just say I could run a code that got 20% of the potential of my laptop; compared to the 1% we typically achieve, my 50 GFLOP laptop is now acting like a one TeraFLOP computer. No money spent, just working smarter.
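The arithmetic behind that claim is worth spelling out. This is only a sketch using the rough utilization figures quoted above (about 1% of peak for a typical ported code, 20% for a well-tuned one); the fractions are ballpark numbers from the discussion, not measurements.

    peak_gflops = 50.0                  # the laptop's Linpack-class peak quoted above

    typical_fraction = 0.01             # ~1% of peak for a ported legacy code
    improved_fraction = 0.20            # ~20% of peak for a well-tuned code

    sustained = peak_gflops * improved_fraction        # GFLOP/s actually delivered
    equivalent_peak = sustained / typical_fraction     # peak a 1% code needs to match it

    print(f"sustained at 20% of peak: {sustained:.0f} GFLOP/s")
    print(f"equivalent peak for a 1% code: {equivalent_peak:.0f} GFLOP/s (~1 TFLOP)")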

Beyond the laziness of just porting old codes with old methods, we also expect the answers to simply get better by having less discretization error (i.e., a finer mesh). This should be true and normally is, but it also ignores the role that a better method can play. Again, the reliance on brute force through a better computer is an aspect of outright intellectual laziness. To get this performance we need to write new algorithms and new implementations. It is not sufficient to simply port the codes. We need to think, we need to ask the users of simulation results to think, and we need to have faith in the ability of the human mind to create new, better solutions to old and new problems. This only applies to the areas of science where computing is firmly established; there are also new areas and opportunities that our intimately connected and computationally rich world has to offer.

These points are just the tip of the proverbial iceberg.  The deluge of data and our increasingly networked world offer other opportunities most of which haven’t even been thought of.  It is time to put our thinking caps back on.  They’ve been gathering dust for too long.

How V&V is like HR

14 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

My wife has a degree in human resources and has worked as an HR person. Hate her already? HR is among the most despised parts of many companies and organizations, mostly because they act as the policy police. Almost everyone I know at work hates HR since they don’t seem to help and just get in the way. My wife knows that HR isn’t very popular because of its policing through policy and would like to see HR engage in its work differently. In fact, HR departments aren’t the happiest or healthiest places anyway. It isn’t clear how much self-loathing is involved, but it’s clear that HR is a stressful job. Happily for her, she has moved on to new challenges.

As she relayed to me, HR wishes it could be more positive by working to help manage people better. They would apply their efforts to creating a better working environment for people to build and nurture their careers. Instead, HR carries water for the legal department, and a lot of what they do is directly related to protecting their organizations from litigation. They get to do this without raking in the dough like lawyers!

Sometimes it helps to see yourself through someone else’s eyes. I remember during that conversation with my wife, listening to her describe how she is perceived at work: working in human resources and being almost universally despised. I hated the HR people wherever I worked, so it sounded reasonable. Suddenly, I realized that the way she was talking about HR sounded exactly like how I would relate my reactions to how people look at V&V. HR is full of well-intentioned individuals, but it acts to bind people’s actions because of various legal concerns or corporate policies (often in service of legal concerns). HR enforces the corporate processes related to personnel. They get in the way of emotion and desire, and for this they are roundly hated by broad swathes of the workforce. They also complicate decisions based on management judgment by requiring hiring and firing decisions to be well-documented and sound from perspectives far beyond the local management.

V&V often does the same sorts of things. V&V likes process, and V&V likes to criticize how people do their modeling and simulation work. They like to introduce doubt where confidence once reigned (no matter how appropriate the doubt actually is, people don’t like it!). V&V likes documentation and evidence. What does V&V get for all of this emphasis on quality? They are despised. As my friend Tim has said, “V&V takes all the fun out of computing.” Gone is the wonder of being able to simulate something, replaced with process and questions. V&V is incredibly well-intentioned, but the forceful way of going through the process of injecting quality can be distinctly counter-productive, just like HR.

Just as HR has realized its villainous reputation, I believe V&V is perceived similarly. Both HR and V&V could benefit from a reboot of their roles. HR professionals would like to be sources of positive energy for employers, and quite honestly most employers need some positive energy these days. More and more, the employer-employee relationship has become adversarial. Benefits are worse every year and the compensation disparity from top to bottom has skyrocketed. HR would like to be a force for positive employment experiences and for employee-centered, career-oriented development.

V&V could be a direct parallel. The tension with V&V is the drive to get results for a given application (product) above all else. V&V sits there whining about quality while a job needs to get done. The product line for an organization is what the customer cares about, and it should get the credit. Too often V&V is just viewed as getting in the way of progress. Instead V&V should craft a different path, like the one desired by HR.

There is a natural tension between executing an organization’s mission in the most mission-appropriate fashion and staying entirely within modern personnel practices. The policing of personnel actions by HR is usually taken as an impediment to “getting the job done.” The same holds for doing proper and adequate V&V of mission-focused computational simulation. There is a tension between executing the mission effectively and refining the credibility of the simulation through V&V. Both V&V and HR could stand to approach the execution of their roles in the modern world in a more positive and mission-constructive fashion.

The whole issue can be cast in the frame of coaching versus refereeing, which parallels managing & leading versus policing & punishing.  Effective management leads to good outcomes through cooperation with people whereas policing forces people to work toward outcomes via threat. People would rather be managed positively rather than threatened with the sort of punishment policing implies.  Ultimately, managed results are better (and cheaper) than those driven via threat of force or punishment.

V&V often acts the same way by defining policy for how modeling and simulation is done. This manner of policing ends up being counter-productive in much the same way that HR’s policing works against it. When thinking about how V&V is applied to computational science, consider how similarly high-minded outcomes are driven by policy in other areas of business, and how you perceive them. When V&V acts like HR, the results will be taken accordingly; moreover, once the policing is gone, the good behavior will rapidly disappear. Instead, both V&V and HR should focus on teaching or coaching the principles that lead to best practices. This would produce real, sustained improvement far more effectively than policies with the same objective.

How the United States is destroying its National Labs

07 Friday Feb 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

While this might seem to be a bold statement, I’ll bet a lot of my colleagues would agree with it. We are, as a nation, destroying an incredible national resource through systematic and pervasive mismanagement. In fact, it would be difficult to conceive of a foreign power doing a better job of killing the Labs than we are doing to ourselves (think of the infamous circular firing squad).

Who is to blame?

You and I, and those who represent us in government, are doing this. Both the legislative and executive branches of government are responsible, along with the scandal-mongering media-political machine. The loss of anything that looks like the traditional fourth estate helps to fuel the destruction. Our news media has now simply turned into an arm of the corporate propaganda machine, mostly devoted to the accumulation of ever-greater amounts of wealth. All the responsible adults seem to have left the room. At the Labs, no amount of expense or additional paperwork will be spared to avoid the tinge of scandal or impropriety. Most of the rules, taken alone, seem reasonable or at least have reasonable objectives, but taken as a whole they represent a lack of trust that is strangling the good work that could be happening.

There are no big projects or plans that we are working on. Our Nation does not have any large goals or objectives that we are striving for. Aside from the ill-constructed and overly wasteful “war on terror,” we have little or nothing in the way of vision. In truth, the entire war on terror is simply a giant money pit that supplies the military-industrial complex with wealth in lieu of the Cold War. The work at the Labs is now a whole bunch of disconnected projects from a plethora of agencies with no common theme or vision to unite the efforts. As a result, we cost a fortune and do very, very little of real lasting value anymore.

More than simply being expensive and useless, something even worse has happened. We have lost the engine of innovation in the process. The vast majority of innovation is simply the ingenious application of old ideas in new contexts. The goals that created our marvelous National Labs made them engines of innovation because the projects required a multitude of disciplines to be coordinated, engaged and melded together to achieve success. As a result of this unified purpose, engineers, physicists, chemists, mathematicians, metallurgists, and more had to work together toward a common goal. Ideas that had matured in one field had to be communicated to another field. Some of the scientists took this as inspiration to apply the ideas in new ways in their own field. This was the engine of innovation. The core of our current economic power can be traced to these discoveries.

We are literally killing the goose that laid the golden egg, if it isn’t already dead.

Gone are the bold explorations of science and grand creations in the name of the National interest. These have been replaced by caution, incrementalism, and programs that are largely a waste of time and effort. Programs are typically wildly overblown, when in fact they represent a marginal advance. Washington DC has created this environment and continues to foster, fertilize and plant new seeds of destruction with each passing year.

Take for example their latest creation, the unneeded oversight of conference attendance. I’ll remind you that a couple of years ago the IRS had a conference in Las Vegas with some events and activities that were of questionable value and certainly showed poor judgment. What would be the proper response? Fire the people responsible. What is the government’s response? Punish everybody and create a brand new bureaucracy to scrutinize conference attendance. Everybody includes the National Labs and any scientific meetings that we participate in. The entire system now costs much more and produces less value.

All of this stupidity was done in the name of an irrational fear of scandal.  The political system is responsible because one party is afraid that the other party will make political hay over the mismanagement.  The party of anti-government will make an example of the pro-government party as being wasteful.   No one bats an eyelash at wasting money to administer cost controls instead of spending money to actually do our mission.  As a result we waste even more money while getting less done.  A lot is made of applying best business practices in how we run the Labs.  I will guarantee to you that no business would treat its visitors or host meetings as cheaply and disrespectfully as we do.  Instead of hosting visitors in a reasonable and respectful way, we make them pay for their own meals, and hold ridiculously Spartan receptions.  Only under the most extreme circumstances would a single drop of alcohol be served to a visitor (and only after copious paperwork has been pushed along).  All in the name of saving money while in the background we waste enormous amounts of money managing everything we do.

I can guarantee you that our meetings are not fun; they are full of long, incomprehensible talks, terrible travel, and crappy hotels (flying coach is awful domestically, and horrible overseas). Travelling these days sucks, and I don’t like being away from my family and home. I go to meetings to learn about what is happening in my field, present my own work, and engage with my colleagues. My field is international, as are most scientific disciplines. Where I work, the conference attendance “tool” assumes that the only valid reason to go is to present your own work, and sometimes even this reasoning doesn’t suffice. I’ll invite the reader to work out what happens if everyone just goes to a meeting to talk and not to listen. What happens to the audience? Who are you talking to? Welcome to the version of science being created; it is utterly absurd. Increasingly, it is ineffective.

A big part of the issue is the misapplication of a business model to science at the Laboratory. Government in the last 30 years has fallen in love with business. Business management can do no wrong, and it’s been applied to science. Now we have quarterly reports and business plans. Accounting has chopped our projects into smaller and smaller pieces, all requiring management and reporting. All my time is supposed to be accounted for and charged appropriately to a “sponsor.” Sometimes people even ask for a project to charge for their attendance at a seminar, professional training, reviewing papers, and numerous other tasks that used to fall under the broad heading of professional responsibilities. A lot of these professional responsibilities are falling by the wayside. I’ve noticed that increasingly the Labs are missing any sense of community or common purpose. In the past, the Labs formed a community with an overarching goal and reason for existence.

Now we have replaced this with a phalanx of sponsors who simply want the Labs to execute tasks and have no necessary association with each other. It has become increasingly difficult to see any difference between the Labs and a typical beltway bandit (think SAIC, Booz-Allen, …). Basically, the business model of management has been poisonous to the Labs. Aside from executive bonuses and stock options, it is arguable that the modern business model isn’t good for business either. Lots of companies are being hollowed out and having their futures destroyed in the service of the next quarterly report. For science it is a veritable cancer, and a death sentence.

On top of this, it is expensive and horribly inefficient: I cost an arm and a leg while actually producing much less output than the same scientist would have produced 30 or 40 years ago. The bottom line: the business model for scientific management is a bad deal for everyone.

As I mentioned in an earlier post, I think a large part of the problem is trust, or the lack thereof. Back in the days of the Cold War (before 1980), the Labs were trusted. The Nation said, “build us a nuclear stockpile,” and the Labs produced. The Nation said, “let’s go to the Moon,” and we went. We created an endless supply of technology and science as a collateral effect of this trust. In those days the money flowed into the Labs in very large chunks (on the order of a billion current dollars), with broad guidance of what needed to be achieved (build weapons, go to the moon, etc.). The Labs were trusted to divvy up the money among the tasks needed to execute the work and produce the desired outcome. It worked great, and the Nation received a scientific product that massively exceeded the requested outcomes. In other words, we got more than we asked for.

Now we get less, and in fact, because of the fear of failure, what the Labs promise is severely “low-balled.” The whole thing revolves around the issue of trust, and the impact of the lack of trust in the modern world. A big part of what the Labs used to create was driven by serendipity, where new ideas arose through the merging of old ideas from the differing disciplines brought together to execute those Cold War missions. Now the missions are largely gone because the nation seems to lack any sense of vision, and the work is parceled out in small chunks that short-circuit the interactions that drive the serendipity. No larger vision or goal binds the work at the Labs together and drives the engine of innovation.

What has happened, and is still happening, is a tragedy. Our National Labs were marvelous places. They are now being swallowed by a tidal wave of mediocrity. Suspicion and fear are driving behaviors that hurt our ability to be a vibrant scientific Nation. The Labs were created by faith and trust in the human capacity for success. They created a wealth of knowledge and innovation that enriches our Nation and the World today.

It is time to stem the tide.  It is time to think big again.  It is time to trust our fellow citizens to do work that is high-minded.  It is time to let the Labs do work that is in the best interest of the Nation (and humanity).
