A secret’s worth depends on the people from whom it must be kept.
Secrecy is a necessary element in the conduct of national security. Some information is simply too dangerous, to too many people, to be freely shared, and the secrecy protecting it needs to be effective. The mis-classification of information is a threat because it undermines the cases where classification is necessary and appropriate.
Secrecy also carries a massive cost that should be factored into the consideration. When something is secret it isn't subject to the sort of review and consideration that public information is. As a result, the quality of the related work suffers, or achieving quality becomes more expensive. The United States kept a large number of documents classified for decades, hidden from view, despite their containing nothing genuinely sensitive and despite their great historical value. Other documents were classified solely because they were embarrassing to the government. Some documents fell into both categories.
Man is not what he thinks he is, he is what he hides.
It isn’t a secret that the United States has engaged in a veritable orgy of classification since 9/11. What is less well known is the massive implied classification through other data categories such as Official Use Only (OUO). This designation is itself largely unregulated and, as such, is quite prone to abuse.
The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….
OUO is also used to manage things like export control. Despite its importance, the law governing export control is terrible. It is poorly written, poorly managed, and its application is driven primarily by fear rather than rational thought. It is too important to be treated this way, and I believe its implementation is actually a threat to our security. It might be useful to describe explicitly the ways that secrecy and classification are abused. Some of these sins are a nuisance, and some of them border on the unethical, immoral, or illegal.
The ones with no imagination are always the quickest to justify themselves
Legitimacy (pride). Some information seeks legitimacy through being declared classified in some manner. For example, shoddy work can be made to seem legitimate through its classification.
It is in the nature of the human being to seek a justification for his actions.
Importance (envy). Some information or the work related to the information is implied to be more important because it is classified. I see this a lot. It is a way of making the case that your work is so important that it needs to be protected.
I have grown to love secrecy. It seems to be the one thing that can make modern life mysterious or marvelous to us. The commonest thing is delightful if only one hides it.
Hiding (greed). Some information is deliberately hidden through classification; the words to remember here are “need to know”. This is used to hide things that people don’t want too many eyes on. When I encounter this it disgusts me.
It is almost always the cover-up rather than the event that causes trouble.
–Howard Baker
Cover-up (lust). Fortunately, I have not seen this form in person, but it often involves things that are embarrassing or illegal. You’ve seen this in the news; I’m sure it happens a lot more often than we think.
Withholding information is the essence of tyranny. Control of the flow of information is the tool of the dictatorship.
― Bruce Coville
Control (wrath). This is a common use of classification. It is a valid reason in many cases, but it is misused when it is applied simply to keep communication from happening. This is another place where the words “need to know” appear.
Safety (gluttony). This is the “cover your ass” version of classification. Basically, you’re not sure, so you declare something classified because it’s the “safe” thing to do. At some level there isn’t anything wrong with this if you rectify the situation promptly. In many areas there is clear guidance that allows a better final determination to be made. In other areas, like OUO, there is no clear guidance, and the safety sin reigns through the fear associated with awfully written laws.
The awareness of the ambiguity of one’s highest achievements – as well as one’s deepest failures – is a definite symptom of maturity.
― Paul Tillich
Ambiguity (sloth). This goes to the heart of the management of classified documents. In the example of export control we have allowed ambiguity, and the resulting fear, to rule for years. There is no standard and no clear guidance. As a result, the application of classification is uneven and ultimately self-conflicting.
Again, something at work has inspired me to write. There is a persistent theme among authors, artists, and scientists: the concept of the fresh start (the blank page, the empty canvas, the original idea). I think it’s worth considering how truly “fresh” these really are. The idea came up during a technical planning meeting where one of the participants viewed a new project as being offered a blank page.
Once we stepped over that threshold, conflict erupted over the available choices, with little resolution. A large part of the issue was the axioms each person was working from. Across the board, we each took a different set of decisions to be axiomatic. At some time in the past these “axioms” were choices, and they became axiomatic through success. Someone’s past success becomes the model for future success, and the choices that led to that success become unstated decisions we are generally completely unaware of. These form the foundation of future work and often become culturally iconic in nature.
Take the basic framework for discretization as an operative example: at Sandia this is the finite element method; at Los Alamos it is finite volumes. At Sandia we talk about “elements”; at Los Alamos it is “cells”. From there we continued further down the proverbial rabbit hole to discuss what sort of elements (tets or hexes). Sandia is a hex shop, which causes all sorts of headaches but enables other things, or simply reflects the way a difficult problem was once tackled. Tets would improve some things but produce other problems. For some, the decisions are flexible; for others there isn’t a choice, and the use of a certain type of element is virtually axiomatic. None of these things allows a blank slate; all of them are deeply informed by, and biased toward, specific decisions made, in some cases, decades ago.
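To make the depth of that bias concrete, here is a minimal, hypothetical sketch (plain Python, invented for this post, not anyone's production code) of the same trivial problem, steady diffusion on a line whose exact solution peaks at 0.125, organized first around cell averages in the finite-volume style and then around nodal values assembled element by element in the finite-element style. Even at this toy scale the two framings dictate different unknowns, different loops, and different boundary treatments.

```python
# A minimal sketch (illustrative only): steady diffusion -u'' = 1 on [0, 1]
# with u(0) = u(1) = 0, whose exact solution x(1-x)/2 peaks at 0.125.
import numpy as np

def solve_finite_volume(n_cells=50):
    """Cell-centered finite volumes: the unknowns are cell averages."""
    h = 1.0 / n_cells
    A = np.zeros((n_cells, n_cells))
    b = np.full(n_cells, 1.0)                 # unit source, one value per cell
    for i in range(n_cells):
        A[i, i] = 2.0 / h**2
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        if i < n_cells - 1:
            A[i, i + 1] = -1.0 / h**2
    # Dirichlet boundaries via ghost cells (ghost value = -adjacent value,
    # so the face value is zero); eliminating the ghost adds 1/h^2 on the diagonal.
    A[0, 0] += 1.0 / h**2
    A[-1, -1] += 1.0 / h**2
    return np.linalg.solve(A, b)              # cell-average values

def solve_finite_element(n_elements=50):
    """Linear finite elements: the unknowns are nodal values, assembled element by element."""
    h = 1.0 / n_elements
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    f = np.zeros(n_nodes)
    k_local = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness
    f_local = np.array([0.5, 0.5]) * h                   # element load for a unit source
    for e in range(n_elements):
        dofs = [e, e + 1]
        K[np.ix_(dofs, dofs)] += k_local
        f[dofs] += f_local
    # Impose u = 0 at both ends by restricting the solve to interior nodes.
    interior = slice(1, -1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[interior, interior], f[interior])
    return u                                             # nodal values

if __name__ == "__main__":
    print("FV  peak:", solve_finite_volume().max())      # ~0.125
    print("FEM peak:", solve_finite_element().max())     # ~0.125
```

Neither version is wrong, and both recover the same answer; in a production code, though, everything from the mesh data structures to the boundary conditions grows up around one framing or the other, and that is exactly how a choice hardens into an axiom.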
The other day I risked a lot by describing the choices we’ve collectively made in the past as “original sin”. In other words, what is computing’s original sin? Of course this is a dangerous path to tread, but the concept is important. We don’t have a blank slate; our choices are shaped, if not made, by decisions of the past. We are living with, if not suffering from, decisions made years or decades ago. This is true in computing as much as in any other area.
In case you’re wondering about my writing habit and blog, I can explain a bit more. If you aren’t, stop reading. In the sense of authorship, I force myself to face the blank page every day as an exercise in self-improvement. I read Charles Duhigg’s book “The Power of Habit” and realized that I needed better habits. I thought about what would make me better and set about building those habits up. I have a list of things to do every day: “write”, “exercise”, “walk”, “meditate”, “read”, and so on.
The blog is a concrete way of putting the writing to work. Usually, I have an idea the night before and draft most of the thoughts during my morning dog walk (dogs make good motivators for walks). I still need to craft (hopefully) coherent words and sentences around the theme. The blog allows me to publish the writing with minimal effort, and it forces me to take editing a bit more seriously. The whole thing is an effort to improve my writing, both in style and in ease of production.
For some reason I’m having more “WTF” moments at work lately. Perhaps something is up, or perhaps I’m just paying more attention. Yesterday we had a discussion about reviews and people’s intense desire to avoid them. The topic came up because there have been numerous efforts to encourage and amplify technical review recently. There are a couple of reasons for this, mostly positive, but a tinge of negativity lies just below the surface. It might be useful to peel back the layers a bit and look at the dark underbelly.
I touched on this topic a couple of weeks ago.
The biggest problem with peer reviews is the “bullshit review”. These are reviews mandated by organizations, for the organization. They always get graded, and the grades have consequences. The review teams know this, so the reviews are always graded on a curve, a very generous curve. Any and all criticism is muted and softened because of the repercussions. Any harsh critique, even if warranted, puts the reviewers (and their compensation for the review) at risk. As a result of this dynamic, these reviews are quite close to a complete waste of time.
Because of the risk associated with the entire process, the organizations approach the review in an overly risk-averse manner and control the whole thing. It ends up being all spin and little content. Together with the dynamic created among the reviewers, the whole thing spirals into a wasteful mess that does no one any good. Even worse, the whole process has a corrosive impact on the perception of reviews. They end up having no upside; it is all downside, and nothing useful comes out of them. All of this even though the risk from the reviews has been removed through a thoroughly incestuous process.
An element in the overall dynamic is the societal image of external review as a sideshow meant to embarrass. The congressional hearing is emblematic of the worst sort of review. The whole point is grandstanding and/or destroying those being reviewed. Given this societal model, it is no wonder that reviews have a bad name. No one likes to be invited to their own execution.
There are several things we could do to improve the review environment we find ourselves in. First of all, people should be trained or educated in conducting, accepting, and responding to reviews. Despite its importance to the scientific process, we are never trained how to conduct, accept, or respond to a review (responding gets a bit of coverage in a typical graduate education). Today, it is a purely experiential process. Next, we should stop including the scoring of reviews in any organizational “score”. Instead, the quality of the review, including the production of hard-hitting critique, should be expected as a normal part of organizational functioning.
I’m a progressive. In almost every way that I can imagine, I favor progress over the status quo. This is true for science, music, art, and literature, among other things. The one place where I tend toward the status quo is the work and personal relationships that form the foundation for my progressive attitudes. These foundations are formed by several “social contracts” that serve to define roles and expectations. Without this foundation, the progress I so cherish is threatened, because people naturally retreat to conservatism for stability.
What I’ve come to realize is that the shortsighted, short-term thinking dominating our governance is demolishing many of these social contracts. Our social contracts are the basis of trust and faith in our institutions, whether they are the rule of government or the place we work. In each case we are left with a severe corrosion of the intrinsic faith once granted to these cornerstones of public life. The cost is enormous, and it may have created a self-perpetuating cycle in which the loss of trust precipitates more acts that undermine trust.
What gets lost? Almost everything. Progress, quality, security, you name it. Our short-term balance sheet looks better, but our long-term prospects look dismal. The scary thing is that these developments help drive conservative thinking, which in turn drives these developments. As much as anything, this could explain our nation’s 50-year march to the right. We have taken the virtuous cycle we were granted and developed a vicious cycle. It is a cycle we need to get out of before it crushes our future.
We got here through overconfidence and loss of trust; can we get out of it by combining realism with trust in each other? Right now, the signs are particularly bad, with neither realism nor trust looking like part of the current public discourse on anything.

It seems to be a lot easier to metaphorically put our heads in the sand. A lot of the time we go to great lengths to convince ourselves of the opposite of the truth, to convince ourselves that we are the masters of the universe. Instead, we can only achieve the mastery we crave through the opposite. We should never consider our knowledge and capability to be flawless, but rather flawed and incomplete.
The people applying calibrated models are often lauded as the models of success. The problems with this are deep and pernicious. We want to do much more than calibrate results; we want to understand and explore the unknown. The only way to do that is to systematically uncover our failings and shortcomings, with a keen focus on exposing the limits we have. The practical success of calibrated modeling stands squarely in the way of pushing the bounds of knowledge.
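A toy sketch may make the distinction concrete (everything here, the sine “truth”, the cubic surrogate, and the chosen ranges, is an invented assumption for illustration): a model calibrated against observations from one regime can look excellent by its calibration error while telling us almost nothing about the regime we have not explored.

```python
# A toy sketch (illustrative assumptions throughout): calibrate a simple surrogate
# to data over a narrow regime, then ask it about conditions outside that regime.
# Good calibration error says little about predictive power where it matters.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    """Stand-in for the real system we would like to understand."""
    return np.sin(2.0 * np.pi * x)

# "Calibration" data: observations confined to a narrow operating regime.
x_cal = np.linspace(0.0, 0.3, 20)
y_cal = truth(x_cal) + rng.normal(scale=0.02, size=x_cal.size)

# Calibrate a cubic surrogate by least squares; inside the regime it fits well.
coeffs = np.polyfit(x_cal, y_cal, deg=3)
surrogate = np.poly1d(coeffs)
cal_error = np.max(np.abs(surrogate(x_cal) - truth(x_cal)))

# Now ask the calibrated model about the regime we never explored.
x_new = np.linspace(0.5, 1.0, 20)
pred_error = np.max(np.abs(surrogate(x_new) - truth(x_new)))

print(f"error inside the calibrated regime : {cal_error:.3f}")   # small
print(f"error outside the calibrated regime: {pred_error:.3f}")  # large
```

The small first number is the one that gets lauded; the large second number is where the understanding we actually want would have to live.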

We are encouraged by everything around us to work on things that are important. Given the intrinsic differences between the messaging we are given explicitly and implicitly, it’s hard to really decide what’s important. Of course, if you work on what’s important you will personally make a bit more money. You really make a lot of money if you work specifically in the money-making industry…
These words are spoken whenever we go into planning “reportable” milestones in virtually every project I know about. If we are getting a certain chunk of money, we are expected to provide milestones that report our progress. It is a reasoned and reasonable thing, but the execution is horribly botched by the expectations grafted onto the milestone. Along with the guidance in the title of this post, we are told, “these milestones must always be successful, so choose your completion criteria carefully.” On top of this, we make sure that these milestones don’t contain too much risk.
The real danger in the philosophy we have adopted is the creeping intrusion of mediocrity into everything we do. Nothing is important enough to take risks with. The thoughts expressed through these words are driving a mindless march toward mediocrity; once-great research institutions are being thrust headfirst into the realm of milquetoast also-rans. The scientific and engineering superiority of the United States is leaving in lockstep with every successfully completed milestone built this way.
Science depends on venturing bravely into the unknown, a task of inherent risk, and massive potential reward. The reward and risk are linked intimately; with nothing risked, nothing is gained. By making milestones both important and free of risk, we sap vitality from our work. Instead of wisely and competently stewarding the resources we are trusted with, they are squandered on work that is shallow and uninspired. Rather than being the best we can do, it becomes the thing we can surely do.
When push comes to shove, these milestones are always done, and always first in line for resource allocation. At the same time we have neutered them from the outset. The strategy (if you can call it that!) is self-defeating, and only yields the short-term benefit of the appearance of success. This appearance of success is believed to be necessary for continuing the supply of resources.
If you haven’t heard of “wicked problems” before, it’s a concept you should familiarize yourself with. Simply put, a wicked problem is a problem that can’t be stated or defined without attempting to solve it. Even then, your definition will be woefully incomplete. Wicked problems are recursive: every attempt to solve the problem yields a better definition of the problem. They are the metaphorical onion where peeling back every layer produces another layer.
In code development this often takes the form of refactoring, where the original design of part of the software is redone based on the experience gained through its earlier implementation. You only understand the use and the form the software should take once you’ve tried to write it (once or twice or thrice or…). The point is that the implementation is better the second or third time, based on the experience of the earlier work. In essence this is embracing failure in its proper role as a learning experience. A working, but ultimately failed, form of the software is the best experience for producing a better piece of software.
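A small, hypothetical before-and-after may illustrate the point (the scenario and function names are invented for this post, not taken from a real code base): a first working draft that tangles parsing, filtering, and arithmetic together, and the refactored form that writing the first draft teaches you to want.

```python
# A hypothetical before-and-after refactoring (names and scenario invented for
# illustration). Both versions work; the second is the one the first taught us to write.

# First draft: it runs, but parsing, filtering, and math are tangled together.
def report_v1(lines):
    total, count = 0.0, 0
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) == 2 and parts[1] not in ("", "NaN"):
            value = float(parts[1])
            if value >= 0.0:          # a business rule buried silently in the loop
                total += value
                count += 1
    return total / count if count else 0.0

# Refactored: each concern is separate, testable, and reusable on its own.
def parse_records(lines):
    """Turn raw 'name,value' lines into (name, float) pairs, skipping malformed ones."""
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) == 2 and parts[1] not in ("", "NaN"):
            yield parts[0], float(parts[1])

def keep_valid(records):
    """The business rule from v1, now explicit: negative readings are invalid."""
    return ((name, value) for name, value in records if value >= 0.0)

def mean_value(records):
    values = [value for _, value in records]
    return sum(values) / len(values) if values else 0.0

def report_v2(lines):
    return mean_value(keep_valid(parse_records(lines)))

if __name__ == "__main__":
    sample = ["a,1.0", "b,-2.0", "c,3.0", "broken line", "d,NaN"]
    assert report_v1(sample) == report_v2(sample) == 2.0
    print(report_v2(sample))
```

The second version computes exactly the same answer; what improved is that each decision buried in the first draft now has a name and a place to change, which is the kind of understanding only the first attempt could have produced.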
This principle applies far more broadly to scientific endeavors. An archetypal scientific wicked problem is climate change, not simply because of the complexity of the scientific aspects of the problem, but also because of the political and cultural dynamics it stirs up. In this way climate change connects back to the traditional wicked problems of the social sciences. A more purely scientific wicked problem is turbulence, because of its enormous depth in terms of physics, engineering, and math, with an almost metaphysical level of intractability arising naturally. Turbulence is also connected to a wealth of engineering endeavors with immense economic consequences.
Maintaining the perspective of wickedness as being fundamental is useful as it drives home the belief that your deep knowledge is intrinsically limited. The way that experts look at V&V (or any other wicked problem) is based on their own experience, but is not “right” or “correct” in and of itself. It is simply a workable structure that fits the way they have attacked the problem over time.
This is truly sad considering the transformative potential bound up in those hopeful, unrealistic dreams we allowed ourselves to express. We could be doing things that are magnificent; instead we withdraw to the world of the possible: the bureaucratically controlled, politically viable reality. The projects we hopefully envisioned would be transformative and would create a far greater future than the path we are currently on. We are told that the people in Washington can’t envision anything greater either. Perhaps they are just like us, simply unwilling to honestly voice anything greater than our currently pedestrian path.
This is why the future is so bleak; the dreams are there, but no one has faith that they can be realized. Support for working on the dream is missing; why start something that will never be finished? People have recently realized that the future was supposed to bring flying cars, and instead we got mini-supercomputers in our pockets (that do very little computing). Of course it doesn’t quite look like “Blade Runner” or “Minority Report” either. The problem is that the dystopian parts of those movies have a greater chance of becoming reality than the cool parts.