"Cybernetics in the Service of Communism" 1967 essay by Col. Raymond S. Sleeper on: January 10, 2011, 07:34:00 pm


Colonel Raymond S. Sleeper (USMA; M.A., Harvard University) is Commander, Foreign Technology Division, Air Force Systems Command, Wright-Patterson AFB, Ohio. During World War II he served with the 11th Bombardment Squadron, 7th Bombardment Group, in Java and Australia; in 1943 was transferred to General MacArthur’s staff as Chief of Military Personnel; and in 1944 became Deputy Chief, Enlisted Branch, Personnel, Hq USAF. Other assignments have been as Deputy Chief, Strategic Vulnerability Branch, ACS/Intelligence, Hq USAF, 1948-50; as student, then as faculty member, Air War College; as Deputy Commander, 11th Bombardment Wing, later Commander, 7th Bombardment Wing H (B-36), 1955-57; as Chief of War Plans, CINCPAC, from 1957 until he became Assistant to the DCS/Foreign Technology in 1960; and as DCS/Foreign Technology, Hq AFSC, from 1963 until he assumed his present position in August 1966.


The conclusions and opinions expressed in this document are those of the author, cultivated in the freedom-of-expression academic environment of Air University. They do not reflect the official position of the U.S. Government, the Department of Defense, the United States Air Force, or Air University.

Air University Review, March-April 1967

Cybernetics in the Service of Communism

Colonel Raymond S. Sleeper

The Spearhead for the spread of Communism was forged in the Soviet Union when Lenin seized power and began to use this philosophy as the rallying standard for achieving world Communist domination. The Soviet Union’s progress from the revolutionary chaos of the early Twenties to the space-age discipline of the Sixties has been phenomenal. In response to a series of difficulties and events in attempting to accelerate this task, the Soviets have borrowed and adapted to their use a unique and powerful philosophical and technological tool—cybernetics.

the promise of cybernetics

This tool seems to offer the means to optimize the continued development and growth of the power of Soviet Russia, the subversive capture of free nations, and the establishment of worldwide educational, technological, military, and space superiority. But more important, cybernetics is now seen by some Soviet authorities as the means of facilitating the optimum (Communist) control of the complex system of states, peoples, and resources of the world which the Communists hope will result from Communist world domination.

Simply stated, cybernetics involves purposeful control of complex dynamic systems. Dynamic systems are those which can react or adapt to a changing environment. In practice, the Soviets appear to classify almost any subject that has to do with information and control in man, machine, and society as cybernetics. Cybernetic systems, as opposed to automatic devices, are capable of responding in a predictable, orderly manner to changes in the environment. An example of a crude cybernetic system is the home furnace that responds via thermostatic control to changes in temperature, for the purpose of maintaining a reasonably constant temperature in the home. One of the first complex cybernetic systems developed was Norbert Wiener’s design of a system linking radar through a computer to a battery of automatically fire-controlled antiaircraft guns.
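In modern terms, the thermostat example can be sketched as a minimal feedback loop: sense the temperature, compare it to a set point, and switch the furnace accordingly. All numbers here are illustrative, not drawn from the essay.

```python
# Minimal sketch of the essay's thermostat example: an on/off
# controller holding a room near a set point. Numbers are invented.

def run_thermostat(setpoint=20.0, outside=5.0, hours=48):
    temp = outside          # room starts at outside temperature
    furnace_on = False
    for _ in range(hours):
        # sensor + logic: on/off control with a small deadband
        if temp < setpoint - 0.5:
            furnace_on = True
        elif temp > setpoint + 0.5:
            furnace_on = False
        # environment: heat gain from furnace, heat loss to outside
        if furnace_on:
            temp += 2.0
        temp -= 0.1 * (temp - outside)
    return temp

print(round(run_thermostat(), 1))
```

The room temperature settles into a narrow band around the set point, which is exactly the "reasonably constant temperature" behavior the essay describes.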

In facing this extremely difficult problem, Wiener realized that the complex system he was designing performed the same functions as a skilled skeet shooter who acquired the target, tracked it, allowed for an appropriate lead, and fired. The skilled marksman achieved a high degree of accuracy. Knowing that biological systems (man or animal) could adapt easily to rapidly changing environmental parameters, both external as in the case of the skeet shooter and internal as in the case of an athlete whose body adjusts to give him a second wind, he often consulted with neurologists and others to determine whether he was on the right track in his basic design philosophy. There were several instances in which he found direct analogs between the behavior of his gun-laying systems and certain characteristics of the nervous system.

Wiener’s great achievement was that he was able to synthesize existing technology and ideas into a basic conceptual framework that unified this technology to produce a high degree of control in any type of complex dynamic system. The basic elements of this concept are

(1) A well-defined goal or end state to be achieved.

(2) Sensors to detect changes in the environment, i.e., temperature, velocity, chemical reactions, learning states, etc.

(3) Communications nets connecting all elements of the system to assure information flow.

(4) Logic units to process the information flow according to criteria contained in the goal (1).

(5) Control units responsive to decisions from the logic center (4), which adjust system units to the desired states as information from (1), (2), (3), and (4) changes.
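The five elements above close into a single loop; a schematic sketch, with component names that are my own glosses on the essay's numbered list:

```python
# Schematic of the five cybernetic elements as one closed control loop.
# (1) goal; (2) sense() reads the environment; (3) the argument passing
# stands in for the communications net; (4) decide() is the logic unit;
# (5) act() is the control unit adjusting the system state.

def control_loop(goal, sense, decide, act, state, steps):
    for _ in range(steps):
        reading = sense(state)              # (2) sensor
        decision = decide(goal, reading)    # (4) logic unit applies goal (1)
        state = act(state, decision)        # (5) control unit adjusts system
    return state

# Toy use: drive a scalar state toward the goal value 10.
final = control_loop(
    goal=10.0,
    sense=lambda s: s,
    decide=lambda g, r: g - r,        # error signal
    act=lambda s, d: s + 0.5 * d,     # correct half the error each step
    state=0.0,
    steps=20,
)
print(round(final, 3))
```

Each pass through the loop shrinks the error, so the state converges on the goal — the same pattern whether the "system" is a furnace, a gun battery, or (as the essay argues the Soviets intended) a society.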

Wiener felt that this scheme was basic to the control of all complex systems—technical, biological, or social. The Soviets regard the U.S. PERT management system, or the “critical path technique,” as they call it, to be a highly sophisticated example of applying cybernetic theory to an administrative system.
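The "critical path technique" the Soviets single out reduces to a longest-path computation over the task-dependency graph: the project's minimum duration is the length of its longest chain of dependent tasks. A minimal modern sketch, with invented tasks and durations:

```python
# Toy critical-path computation of the kind PERT rests on. The task
# names and durations are invented for illustration.
from functools import lru_cache

durations = {"design": 3, "procure": 2, "build": 5, "test": 2}
depends_on = {
    "design": [],
    "procure": ["design"],
    "build": ["design", "procure"],
    "test": ["build"],
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # A task can start only after all its prerequisites finish.
    start = max((earliest_finish(d) for d in depends_on[task]), default=0)
    return start + durations[task]

# The project length is the latest finish over all tasks:
# design(3) -> procure(2) -> build(5) -> test(2) = 12.
project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 12
```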

Cybernetics, as it developed under Wiener and in the U.S.S.R., imposes a rigid discipline for clear thinking upon both the theorist and the practitioner. If a true cybernetic approach to problem solving is adopted, the planner must first define his goals and criteria for their achievement as clearly and with as little ambiguity as possible.

the thrust of cybernetics in the Soviet system

The thrust of cybernetics in Russia extends from the microbiological to the macrocosmic dimensions of man’s relationship to the elements of the universe. The volume of Soviet literature on cybernetics is monumental. Academician A. I. Berg, chairman of the Governmental Council on Cybernetics, refers to over 5000 articles in 1961 alone on “the problems of the application of mathematics, electronics, and cybernetics to biology and medicine.” Since 1961, the volume of literature and research on this subject has continued to increase.

On the biological side of cybernetics one sees interesting developments, such as the “iron hand” which attaches pneumatically to the stump of the arm and, through electrodes connected to the stump muscles of the forearm, picks up myocurrents generated from the contraction of these muscles, which then control the opening and closing of the hand. There are many other devices which link the nervous system to machines, and vice versa. One example is the biostimulator, which uses the recorded muscle movements of a sharpshooter to provide programmed electronic sleeves for automated rifle training instruction. This device is slipped over the arms and torso and electronically “stimulates” the proper muscles of the student soldier to emulate the sharp-shooting techniques of an expert rifleman recorded in the simulator. Another device, the Soviet sleep machine, is claimed to produce a relaxed state, or sleep, which provides more rest than an equivalent amount of normal sleep. This device is used in medical treatment for a variety of symptoms. Soviet cybernetics includes, in addition to biologic and physiologic control techniques, a broad program of research in neurology, psychology, and related fields, especially those areas which have the potential for technological application and behavior control.

The Soviet concept and program of the “new man” involves the “creation” of a wholly superior type of individual. It begins with the separation of numbers of young children from their families at the ages 1 to 6 years. These children are trained in some 800 special boarding homes and schools, separated from their families. Estimates vary, but it appears that 1,500,000 to 2,500,000 children have been entered into this program. The training and education of these selected children has been called the “technocratization of youth” in Russia. In other references the Soviets have called this program the preparation for “the rationalization of world economics and cybernation.” The U.S.S.R. is thus planning for rapid development of automation and encourages, promotes, and fosters cybernetics at the highest level of government and party. Social adjustment to automation is planned through the preparation of students to accommodate to the “cybernated society.” And, according to the Soviets, the change will therefore be more orderly in Russia than in any other country.

At the machine level, the applications vary from guidance systems for missiles to automated power distribution centers for controlling the flow of electric power between widely dispersed nets so as to eliminate costly, redundant power generation.

But it is at the socioeconomic level that one sees the major innovations being attempted in the Soviet Union. A cybernetics center is planned for each state. Several are already being built, and the first one at Kiev is nearly finished. These, together with the Cybernetics Council in Moscow, the Moscow information storage and retrieval center (VINITI), the Moscow computer center, the developing nationwide unified information network, some 350 computer centers, and over 100 institutes that are working in cybernetic science and technology, if built as planned, will constitute the physical structure of the program. A typical center such as the one at Kiev will have mathematicians, physiologists, psychologists, sociologists, neurologists, economists, electronic scientists, engineers, and physicists assigned. Thus a very broad multidisciplinary scientific force will attack the problems involved in the automation of Soviet society. The implications of such an enormous undertaking cannot possibly be seen with clarity at this early date, but it deserves serious observation, study, and attempts at interpretation.

It helps, in taking these Soviet activities seriously, to realize that very large-scale modeling and attempts to structure society are actually beginning here in the United States. San Francisco is using an operating mathematical model of the city in terms of its land, buildings, people, jobs, amenities, etc. This model is being used for forward planning, and other U.S. cities are now developing their own models. But the Soviet scheme involves all of Russia and promises to involve the world.

One interpretation of the Soviet effort describes the purpose of cybernetics in the U.S.S.R. as “threefold: improved military and civilian technology, rationalization of the economy, and mechanization of intellectual tasks.” l But it is likely that the main thrust of Soviet cybernetics is much more encompassing. For the central argument of the Soviets is that cybernetics can work only in a “socialist” society:

As distinct from capitalist countries where the various firms create, each for itself, separate automated systems of control, under socialism it is perfectly possible to organize a single, (integrated) complex, automated system of control of the country’s national economy. Obviously, the effect of such automation will be much greater than that of automating control of individual enterprises. 2

Probably this is the key to the major difference between the Soviet purpose in cybernetics and the purpose in the West. It is not so much that the Soviets are already beginning to apply cybernetics to the optimum control of the entire Soviet society as that they are aiming to reconstruct society through the widest possible application of cybernetics and eventually to employ it as the principal system of Communist control of the world. Some observers of the Soviet scene have responded with ridicule; others have simply stated that such a grand scheme is impossible. Perhaps the most common reaction is that Soviet technology cannot possibly support such a plan in Russia, to say nothing of the world. It is normal among these latter observers to note that “the U.S. is still ahead in the design, analysis, and evaluation of complex and sophisticated systems. . . ; we are still ahead of Soviet technology in the fields of radar systems, television systems, telemetry systems; and still ahead of Soviet technology by a considerable margin in the design and manufacture of high speed computers with large memories.”3

But there are indications of steady Soviet progress: “Soviet science is ahead in the analysis of random-processes of shooting and random process representation; Soviet science is generally superior to U.S. science in the fields of detection theory, parameters, prediction and estimation, and the analysis of phase-keyed systems in the presence of fading; and Soviet science can be said to be slightly ahead of the U.S. sciences in the overall fields of cybernetics, logic algebra, automated theory, and pattern recognition.”4 And cybernetics seems to have given the Russian leaders a new vision of the utopian future of Communist social progress. For they now see in cybernetics, they think, a means to stimulate progress and to integrate advances in all fields of science. Again, the most fundamental and overriding point is that through cybernetics the integration of scientific progress now enables the construction of the ideal Communist society in Russia as well as throughout the rest of the world. 5

To restructure the Russian society, to establish a system for the optimum control of Russia, and to embark upon the study, plan, and implementation of a control system aimed at the restructuring of the societies of the world so that they will dovetail into a cybernated Communist Russia is a fantastic task. The task was not undertaken lightly. A comprehensive study was conducted from 1959 to 1961 for the purpose of determining the broad structure of the program and its consonance with Marxism-Leninism. Then in June 1962 the Soviet Council of the Academy of Sciences, the Scientific Council on the Philosophical Problems of the Natural Sciences, and the Party Committee of the Presidium of the Academy of Sciences met together in a joint conference on cybernetics. Over 1000 participants represented all the sciences connected with cybernetics. This all-union conference mapped out the implementation of the tasks set for cybernetics by the 22d World Communist Party Congress.

The general structure of the program has been analyzed and ably presented by Professor John J. Ford of American University. He believes that the 20-year plan approved by the 22d Party Congress is designed to test and implement the model. The model and its application to Russia are to be largely tested by 1981. Subsequent indications strongly support Ford’s analysis, e.g., a quote from the Technical Cybernetics All-Union Conference at Odessa in 1965: “Today, it is clear that the methods of technical cybernetics are finding growing applications in the control of the entire Soviet economy.”

Anyone with a deep interest in Soviet developments who wishes to understand Soviet activities through the next 10 to 20 years must take into consideration the Soviet cybernetics model. Scholars who continue to employ traditional concepts of Soviet behavior will surely be missing an important part of the picture.

The plan encompasses the development of a pattern for sociocultural, material-technical, and ideological subsystems. Each pattern must provide a “nervous structure” and “control center.” Similarly, each must be automatically operative but adapted to the goals of the “brain.” Harmonious transition of the parts toward a higher degree of centralized organization of social structure is thus insured. 6

This 20-year plan is based on the thesis that social (and biological) change is inevitable, but more important, the social change should be purposeful and progressive (i.e., toward Communism). To quote Professor Ford:

The strategy for social progress dictated by this general model calls for the establishment of a “nervous system” to tie together the system’s “sensors” of internal and external environments at all levels with the highest decision centers which can then determine optimal (in relation to system goals) courses of action and then transmit information to the effector organs of the social system (ministries, production complexes, schools, defense installations, people and so on). The cycle is then repeated. If the new behavior of the system brings it closer to the goals thereof as predicted, or moves away therefrom because the prediction was incorrect, the sensors once again detect the change and transmit the information upward in a continuous process analogous to that by which a helmsman steers a ship toward its destination.7

A model of world social structure seemingly visualized in this description is not attractive to most Americans, since it is deterministic and authoritarian. However, from a Communist viewpoint the whole process of “national liberation” and revolution involves the destruction of “capitalistic institutions” and the development and **** of Communist institutions in a purposeful mode.

transition of “capitalist societies” to “socialist societies”

The transition of “capitalist societies” to “socialist societies” is the central aim of world Communism. It is the object, the content, and the substance of Communist activities across the world.

There are Communist parties in some 105 nations of the world. In certain countries there is more than one Communist party, but for our purpose we will assume these parties are factions and that ultimately the factions coordinate with, cooperate with, or are controlled by the dominant party in their struggle for take-over of the specific country.

Some 16 of these 105 nations are now controlled by the Communists. Each of the 16 is in fact ruled by the Communist Party therein. It is generally accepted that the world Communist movement is no longer monolithic but that polycentrism and a system of “World Commonwealth of Communist Nations” is evolving and expanding through subversive aggression.8 In spite of these and other doctrinal changes, a Marxist-Leninist model exists for the stages of Communist penetration and takeover in a target country. This doctrine elaborates five steps (called “stages” in Marxist-Leninist doctrine) in the “transition to a Marxist-Leninist Society”:

Step One is infiltration into the target country and the formation of a Communist Party.

Step Two is the infiltration of Communist Party members into the target country’s key institutions, parliament, political parties, unions, industry, communications services, police, military forces, and other important elements of the national life. The members who infiltrate the key institutions form units that are called fractions.9 When fractions are formed in most of the key institutions, a united national front is then organized to coordinate policy and action among all the fractions.

Step Three is the decision to seize power. According to the doctrine there exist both the objective and subjective situations in a target country. The objective situation is the current real-life situation in the target country. The subjective situation is the “power” of the Communist Party. Evaluation of this power involves assessment of the number of hard-core members and their deployment throughout the target country’s key institutions, together with the power that the members exert over the nation by virtue of the National Front. The doctrine states that when the subjective situation of the Communist Party is in favorable balance with the objective situation in the country as a whole, the decision is then made to seize power.10 This does not mean that an attempt to seize power is made at this time, but the decision is made. Then the action committees are organized and prepared for the eventual take-over. The process of determining the favorable revolutionary balance situation is obviously an extremely difficult and complex process. It is clear, for example, that the Communists misjudged the revolutionary balance in Indonesia at least twice in recent times.11

Step Four is to seize power. This step is initiated with the announcement of the time when power will be seized—and the timing is critical. The action committees are then armed, and direct operations are initiated against the anti-Communist, non-Communist, or national power in being. Insofar as possible, the Communist Party attempts to present this “seizure of power” in the light of a national revolution, a national uprising, or some similar camouflage for the Communist take-over. 12

Step Five is to consolidate the Communist control of the nation. This involves the progressive elimination of all anti-Communist, uncooperative control and influence in the nation and leads to the purges. This is the sort of operation we saw in China when Mao Tse-tung instituted his program to “let a hundred flowers of internal criticism grow,” and then when internal criticism appeared the critics were eliminated.13 It is the type of purge we have seen in Cuba since Castro seized power.

It may be claimed that our model for Communist subversive aggression against free nations is too simple. Communist manuals, doctrine, pamphlets, and publications have devoted hundreds of thousands of pages to the elaboration of the tactics and techniques of take-over, or the “transition of power from the capitalistic monopolies to the working class,” as they call it. The basic Communist bible, Fundamentals of Marxism-Leninism, devotes over 500 pages to the subject. There have been many variations in this model, and there will be many more. But how can cybernetics serve Communist subversion and take-over?

The key step in the process is the decision and timing of the take-over. Note the relationship that must be satisfied for the Communist take-over: One could write this very simply as

P = S/O

where P represents potential for take-over, S the subjective power of the Communist Party in the target country, and O the objective situation in the country itself. Now it can readily be seen that experience will be necessary to determine the proper values of P for evaluating take-over potential. It can also be seen that the quotient of S divided by O is essentially a summation of the Communist potential for take-over in each of the key institutional structures as related to the stabilizing anti-Communist elements in the country. It is in measuring this Communist potential for take-over, in a national power-structure sense, through “scientific programs” using statistics, content analysis, sociological and anthropological social-structure analysis, and experience factors, that we see the task for cybernetics. The process can be shown as the objective situation, deriving from real life in the target country, feeding into the reference model (the Communist model), with effectors and sensors from the Communist Party in its central role of subversion, take-over, command, and control, as shown in Figure 1.

Figure 1. Model for Communist take-over
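The relationship P = S/O can be made concrete with invented figures. This sketch treats the quantity as the summation the essay describes: over each key institution, Communist strength s against stabilizing anti-Communist strength o. Every number and institution name here is hypothetical.

```python
# Illustrative arithmetic for P = S/O, read as a sum over key
# institutions of subjective strength s against objective,
# stabilizing strength o. All figures are invented.

institutions = {          # (s, o) per institution
    "unions":     (40, 60),
    "parliament": (10, 90),
    "press":      (30, 70),
    "military":   ( 5, 95),
}

P = sum(s / o for s, o in institutions.values())
print(round(P, 3))  # 1.259 for these invented figures
```

The doctrine's real difficulty, as the essay notes, lies not in this trivial arithmetic but in measuring the inputs, which is precisely where the "scientific programs" of statistics and content analysis are supposed to come in.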

The tremendous upheaval and social reorientation of Cuba which have been produced by the Castro regime may be seen as an example of Communist transition of society toward a “higher stage of social evolution” and as a transition toward the Soviet model.

Through a series of trade and finance agreements the Castro Regime has moved toward the adaptation of Cuba’s economy and industrial plan to that of the Sino-Soviet Bloc. . . . The degree to which Cuba has become economically dependent on the Bloc is evidenced by the fact that 80% of its trade is now tied up in arrangements with Iron Curtain countries. At the beginning of 1960 only 2% of Cuba’s total foreign trade was with the Bloc.

Cuba, under the Castro Regime, is rapidly becoming oriented toward the Sino-Soviet Bloc. This orientation is not taking the form of a merely cultural interchange with communist countries such as several Western countries are conducting. On the contrary, the emerging pattern is one of extensive cultural identification with the Bloc in which Cuban cultural patterns are being rapidly altered and the traditional cultural ties with countries of this hemisphere and Western Europe are deliberately severed. This is to be seen in the comprehensive cultural agreements, the exchange of students, performing artists, and exhibitions with the Soviet Union, Communist China and their satellites, the impediments placed before students wishing to study anywhere except in Iron Curtain countries, the virtual halting of the flow of movies, books and magazines from free countries with a commensurate rise in the influx of these materials from the Sino-Soviet Bloc, and the attacks on Western culture in general and that of the United States in particular.14

Thus one sees the total social, economic, and cultural restructuring of Cuba to fit the Communist model. Meanwhile, the Communist model appears to be moving toward a cybernetics model. This may lead to increased rationalization of Communist subversive aggression against free nations.

Under a cybernetic scheme the Communists need not export traditional ideology. Instead they need to export “scientific social changes” which fit the cybernetic model of the economy and sociological structure of scientific Marxism-Leninism now being built in Russia.

the drive for military superiority

The Soviets have consistently pushed for worldwide military superiority. Stalin supported this goal, and so did Khrushchev, on balance.

Some top American nuclear scientists believe that Soviet nuclear weapons technology is at least equivalent to, if not ahead of, that of the U.S. in some areas. In the area of high-yield weapons it is conceded that the Soviets have the edge. They have demonstrated a device of 60 megatons which we believe could be weaponized at about a hundred megatons.

We were somewhat surprised in 1948 that the Soviets copied our B-29 (which they called TU-4). More surprising was that they built a significant number and built them at the expense of more rapidly rejuvenating the war-torn civilian economy.

Through the 1950’s the Soviets built modern fighters in large numbers, built bombers, and then moved into building and deploying ballistic missiles.

There is no question that the U.S. Minuteman and Polaris missiles remain superior to those of the Soviets, but the Russian weaponeers are not resting on their laurels. According to Hanson Baldwin, they are continuing to develop and deploy large numbers of new weapons of widely varying types.15

The Soviet development of new missiles appears to be most dramatic, and the evidence is that they are also developing new aircraft (e.g., the AN22, a huge transport) and modernizing their army and navy. The 1965 spring military parade in Moscow and again the 7 November 1965 parade showed new generations of ICBM’s, IRBM’s, “global rockets,” and anti-ICBM missiles, as well as many new army vehicles.

The Soviets apparently are building and deploying all these weapons. It is important that we recognize that they can, that they have the economic power to do so. In 1962 Secretary of Defense McNamara elaborated before Congress the new missiles, aircraft, antimissile missiles, agricultural improvements, and civilian consumer improvements that could be made by the Russians and then concluded that they could not do all these things—that they must make a choice. It would seem that they have made the choice at the expense of the civilian economy and that they have moved rapidly forward in strategic weapons.

One of the primary strengths of the Soviet R&D and production program is the use of scientific planning (cybernetics) throughout their weapons programs. Scientific planning, gaming theory, optimum solution of complex problems, development of block-aggregate computing systems, creation of the scientific basis for the synthesis of automatic control, and hundreds of similar subjects, all pertinent to the most modern techniques of scientific planning and development of aerospace weapon systems, appear in Soviet cybernetics literature.16 The hypothesis is suggested that analysis of overall Soviet power must now take into account the increased efficiency of the early applications of integrated cybernetic systems optimized for the creation of Soviet military and national security.

Similarly, cybernetics can be seen to impact on the Soviet space effort.

the thrust in space

Soviet work in space probably started in the early Forties with the work of Tsiolkovskii, the Soviet Goddard. In the late Forties and early Fifties it appears that the basic technologies and vertical firings of components were accomplished. In the late Fifties we saw the first Sputnik and the beginning of the Soviet space spectaculars. Figure 2 shows the Soviet concentration on spectaculars—manned flight, near-earth orbital work, and some military and military support types of programs. There has been little direct evidence that any of these spectaculars will lead to direct Soviet military space capabilities, but there have been repeated Soviet references to the military uses of space. One of the first we saw was in Major General Pokrovsky’s book, Science and Technology in Contemporary War, published in 1956, in which he refers to the coming importance of the war in space. Since 1957 there have been innumerable Soviet references to orbital bombardment, orbital rockets, rockets from spaceships, attack or delivery of weapons from space, and the like.

Figure 2. Soviet space firsts

It would seem prudent to assume that the Soviets plan to use space for military purposes as rapidly as possible. The Soviet space effort is huge—surely as large as if not larger than that of the U.S. There is no record of the Soviets’ having made anything like this type of effort in aerospace research and development without a resultant direct enhancement of their military power.

In the U.S. we argue variously that space offensive nuclear-delivery forces are less efficient than ICBM’s, less accurate, and less credible. But when the Soviets are dedicated to offensive world objectives, the special effects of space military offensive forces may appear very useful—namely, prestige, terror, persuasion, coercion, pressure, psychological warfare, and demoralization. The sight and sound of Soviet military orbital forces in the free skies of the world day and night, plus Communist satellite television propaganda tuned into sets around the world, would not be attractive to contemplate in the service of Soviet goals of worldwide Communist domination.

Such major steps in space could not be taken except for the progress that the Soviets are seeking through cybernetics. This has been recognized by Soviet scientists and has been openly stated by several. A description of the impact of Soviet cybernetics on their space program is included in V. Denisov’s “Cybernetics and the Cosmos” (1962). Denisov describes the active flight of “The Cosmic Ship,” its automatic control features, and its manual control features. But, “No matter what the degree of automation of the engineering process of controlling the cosmic ship, the managing and organizing role always remains with man.  Hence, we must deal with complex cybernetic ‘man-machine’ systems in space ships. . . . Man is the controlling element or operator in the ‘man-machine’ system and the machine is the controlled object.” Denisov goes on to describe the working of the cosmic ship in detail and then projects developments into the future: “It can be that the foot of man will not take the first step on other planets, . . . but the foot of a cybernetic automaton may.” He then goes on to extend man’s influence into the cosmos through travel and communications, basing his predictions on progress in cybernetics as well as in astronautics and related sciences.

In cybernetics there is unquestionably a promise for improvement of the welfare of all humans. Robert Theobald, author and economist, proposes a minimum basic income for all adults in America, based on the use of cybernetics by U.S. industry and the economy, an income ensuring a standard of living by which one can live with dignity. He also makes the astounding point that a modern nation can produce anything it decides to produce.17 But Theobald decries the U.S. government’s inattention to these “facts,” stating that these facts demand new value systems in America.

There is not much question that cybernetics is seen by the Soviet elite not only as the path to Communist utopia but also as the road to development of a worldwide system of socialist states under Communist control. This view is reflected even by the American Communist Party.

Is there an inner compulsion in technological development which will transform the private appropriation of profit in America and the immense, unprecedented political power it brings, into an innocent surplus managed for the whole of society by the same small top group wearing different hats? . . . No . . . Once the profit motive is no longer a sacred absolute, the machines can be controlled, and, especially in the centralized society of today, cybernation can be developed and applied at a rate and in a manner that is in the interest of society as a whole. . . and this will come. . . only when the American people make a daily struggle in a progressive direction [toward Communism].18

If we wish to follow events in Soviet Russia and developments in worldwide Communism reasonably intelligently, we should begin to view them in terms of the changes wrought by the massive cybernetic program in Russia and in the worldwide Communist movement. Moreover, if cybernetics promises such a “paradise” for socialist countries and enables, in effect, a technological penetration of free nations, it behooves us to define the parameters of possible impact and the promise and direction of national and international automation in free societies as a counter. There is no doubt at all that American computer technology, program theory and application, and automation lead the world. But the proliferation of computers, computer languages, and computer centers has become truly an electronic Tower of Babel. In contrast, in Russia the computer centers, languages, and networks are planned and programmed to optimize control of the entire country. Does this lead to an efficiency of resource utilization that enables the Soviets, with a gross national product in 1965 of $303 billion—compared to $664 billion for the U.S.—to challenge the U.S. for world leadership and military superiority? Surely the American system with its redundancy, flexibility, and free choice is much more attractive to us, but is it too wasteful of resources? And is this American redundancy and flexibility optimized to meet aggressive, purposeful international competition? Will truly wide redundancy, flexibility, and choice invite penetration and restriction by a centrally controlled, integrated, and optimized system—a system optimized for the announced goal and program of world domination?

These are interesting questions that only time and intensive analysis will answer. Most Americans, if given the choice, would vote for the redundancy, individualism, flexibility, and optimization of private opportunity as opposed to the centralized authoritarian-imposed optimized control. However, the parameters of redundancy, individualism, flexibility, control, optimization, purposefulness, and private opportunity may have to be subjected to the burning crucible of public discussion and definition in the light of national interests before we have a national understanding of both the benefits and penalties of the promise of cybernetics to America and their portent in the world arena.19 We cannot begin to discuss and understand the national and international potential of cybernetics unless we devote adequate effort to the job. And this we are not doing—at least, not at a level of effort that is competitive with the Soviets.

The Soviet effort and progress are a definite technological threat to the U.S. because their multidiscipline attack on major problems has no counterpart in the U.S., and their broad intensive effort simply must produce, in due course, significant breakthroughs in sociological, economic, governmental, and military areas that we in the U.S. must be prepared to meet. This threat is, therefore, a challenge to military superiority, to social control, to economic/industrial advance, and to world power.

Unless we Americans as a people, and we in the Air Force in particular, understand these momentous trends, we may not have much choice. The system could be imposed upon us from an authoritarian, centralized, cybernated, world-powerful command and control center in Moscow.

Foreign Technology Division, AFSC


1. Roger Levien and M. E. Maron, “Cybernetics and Its Development in the Soviet Union,” RAND Memo 4156-PR, p.25.

2. C. Olgin, “Soviet Ideology and Cybernetics,” Bulletin of the Institute for the Study of the U.S.S.R., February 1962, from Kommunist, Vol. 37, No.9 (June 1960), p. 23.

3. Roshan Lal Sharma, “Information Theory in the Soviet Bloc,” June 1965, pp. 1-2, a study done for the Foreign Technology Division by McGraw-Hill, Inc.

4. Ibid.

5. A. I. Berg, “The Science of Optimum Control,” U.S. Department of Commerce, Translation JPRS-26,581, 28 September 1964, p. 55.

6. John J. Ford, “Soviet Cybernetics,” a paper presented at Georgetown University Symposium on Cybernetics and Society, 19-20 November 1965.

7. Ibid.

8. Jan F. Triska, David O. Beim, and Noralou Roos, “The World Communist System,” Stanford Studies of the Communist System, Stanford University, 1964.

9. “Party Fractions in Non-party Organizations (Fronts),” International Press Correspondence (INPRECOR), 27 February 1924, and V, 25 (April 1925), 340-43.

10. Fundamentals of Marxism-Leninism (second impression; Moscow: Foreign Languages Publishing House, 1961); see parts four and five, especially pp. 609-20.

11. Ebed Van der Vlugt in Asia Aflame discusses earlier unsuccessful attempts of the Communists to seize power in Indonesia, pp. 160-202.

12. Fundamentals of Marxism-Leninism, pp. 585-620. Note that the manual describes many forms of the “transition to a socialist revolution.”

13. Roderick MacFarquhar, The Hundred Flowers Campaign and the Chinese Intellectuals (New York: Frederick A. Praeger, 1960). Some may criticize the author’s conclusion that this Chinese Communist criticism campaign became a general Communist purge technique. Of course, self-criticism has become an accepted feedback system of communication throughout the Communist countries and in certain instances clearly has led to severe purges for the fundamental purpose of optimizing Communist control.

14. “The Castro Regime in Cuba,” U.S. Department of State pamphlet, 1965.

15. Hanson W. Baldwin, “U.S. Lead in ICBM’s Is Said To Be Reduced by Buildup in Soviet Union,” New York Times, 14 July 1966.

16. Text of a Resolution Passed at the Third All-Union Conference on Automatic Control, Odessa, 1965, page 1, translated by L. A. Zadeh.

17. Robert Theobald, Free Men and Free Markets, Chapter 3.

18. Richard Loring, Communist Commentary on the Triple Revolution (Los Angeles. California: Progressive Book Shop, May 1964). (Italics are the author’s.)

19. Dr. Richard Bellman, “Russian Progressive Cybernetics and Its Relevance to Military Power,” a study done for the Air Force by McGraw-Hill, Inc.
2  General / General Discussion / Re: Anti_Illuminati for dummies. The ultimate study guide for the layman. on: January 10, 2011, 07:31:14 pm
Power, Seduction and War - February 24, 2007

OODA and You

Posted by Robert Greene at 1:14 PM

A few weeks ago I gave a talk at a company convention in southern California. This company has offices worldwide, is very successful in its line of work, but on the horizon are some dangers. They brought me in to address those dangers. The specifics here do not matter much, only to say that, like a lot of companies that were successful in the 80s and on up to the present, they have come to rely upon a particular business model that is part circumstance and part design.

Loosely put, their upper-tier employees operate more like entrepreneurs, each one out for him or herself. Each office tends to think of itself as an island, competing with the other branches across the globe. This works to some extent, as these entrepreneurs are very motivated to expand the business. On the other hand, it makes it very difficult to create an overall esprit de corps.

As I was preparing the speech, for some reason an image kept coming to mind--the jet-fighter pilot, and the theories of Colonel John Boyd as they pertain to this form of warfare. Many of you might be familiar with Boyd's most famous theory: the OODA loop. I will paraphrase it for those who are not familiar with it, with the understanding that it is much richer than the few words I am devoting to it here.

OODA stands for Observation, Orientation, Decision, Action. A pilot is constantly going through these loops or cycles in a dogfight: he tries to observe the enemy as best he can, this observation being somewhat fluid, since nothing is standing still and all of this is happening at great speed. With a lightning-quick observation, he then must orient this movement of the enemy, what it means, what are his intentions, how does it fit into the overall battle. This is the critical part of the cycle. Based on this orientation, he makes a decision as to how to respond, and then takes the appropriate action.

In the course of a typical dogfight, a pilot will go through maybe a dozen or so of these loops, depending on how complicated the fight, and how fluid the field. If one pilot can make faster decisions and actions, based on the proper observations and orientations, he slowly gains a distinct advantage. He can make a maneuver to confuse the enemy. After a few such maneuvers in which he is slightly ahead in the cycles, the enemy makes a mistake, and he is able to go in for the kill. Boyd calls these fast transients, and if you are ahead in these transients, the opponent slowly loses touch with reality. He cannot decipher what you are doing, and as he becomes increasingly cut off from the reality of the battlefield, he reacts to things that are not there, and his misreactions spell his death.

Boyd saw this theory as having application to all forms of warfare. He went backwards in military history and showed how this was relevant to the success of Belisarius, the Mongols, Napoleon Bonaparte, T.E. Lawrence. He saw it as also deeply relevant to any kind of competitive environment: business, politics, sports, even the struggle of organisms to survive. In reading about the OODA loop for the first time, I was struck by its brilliance, but I was not quite sure what to make of it. How exactly does this apply to my own battles, my own life, or to those whom I advise in their affairs?

Then, working on the speech, the image and the idea began to coalesce. A fighter pilot is in a unique spot. He is a rugged individualist who can ultimately only depend on his own creative maneuvers for survival and success. On the other hand, he is part of a team, and if he operates completely on his own strategy, his personal success will translate into confusion on the battlefield.

At the same time, the battlefield itself is so incredibly fluid that the pilot cannot think in traditional linear terms. It is more like complex geometry, or three-dimensional chess. If the pilot is too slow and conventional in his thinking, he will find himself falling further and further behind in the loops. His ideas will not keep pace with reality. The proper mindset is to let go a little, to allow some of the chaos to become part of his mental system, and to use it to his advantage by simply creating more chaos and confusion for the opponent. He funnels the inevitable chaos of the battlefield in the direction of the enemy.

This seemed to me the perfect metaphor for what we are all going through right now in the 21st century. Changes are occurring too fast for any of us to really process them in the traditional manner. Our strategies tend to be rooted in the past. Our businesses operate on models from the 60s and 70s. The changes going on can easily give us the feeling that we are not really in control of events. The standard response in such situations is to try to control too much, in which case everything will tend to fall apart as we fall behind. (Those who try to control too much lose contact with reality, react emotionally to surprises.) Or to let go, an equally disastrous mindset. What we are going through requires a different way of thinking and responding to the world, something I will be addressing in my next two books in great detail. (I am happy to report that these two books have now been sold, and that is why I have been away for a while.)

In essence, speed is the critical element in our strategies. (See the chapter on formlessness in POWER and the blitzkrieg in WAR.) Speed, however, is something that is rarely understood. Napoleon created speed in his attacks because of the way his army was organized and structured. If you read Martin van Creveld's book on command, he explains that the speed of Napoleon's army is comparable to that of any contemporary army, but with the technology of two hundred years ago. This speed comes from the mission-oriented structure in which his field marshals had great liberty to react in real time and make quick decisions, based on Napoleon's overall strategic goals, and with the incredibly swift communications up and down the chain of command.

Napoleon increased the speed of his army by loosening up the structure, allowing for more chaos in the decision-making process, and unleashing the creativity in his marshals. Speed is not necessarily a function of technology. Technology, as Creveld showed, can actually slow an army down. Look at the North Vietnamese versus the US in the Vietnam War.

We are all in the position of those fighter pilots. Those among us who succeed in this environment know how to play the team game in a different way, not being an automaton, yet not completely a freelancer. We are comfortable working on our own initiative, but also find pleasure in making our individuality fit into the group. We are able to embrace change, to let go of old patterns of operating, and to stay rooted in the moment, observing the battlefield for what it is, not cluttered by preconceptions. We can think fast, let go of the need to control everything, stay close to the environment in which we operate (the streets, our clients), and experiment.

It is a new kind of beast that thrives in this new order.

Your mind is the key that will turn this to advantage, not your wealth, the technology at your command, the number of allies you possess. Whatever success you are now experiencing will actually work to your detriment because you will not be made aware of how slowly you are falling behind in the fast transient cycle. You think you are doing just fine. You are not compelled to adapt until it is too late. These are ruthless times.

3  General / General Discussion / Re: Anti_Illuminati for dummies. The ultimate study guide for the layman. on: January 10, 2011, 07:30:16 pm

Colonel John (Richard) Boyd (January 23, 1927 – March 9, 1997) was a United States Air Force fighter pilot and Pentagon consultant of the late 20th century, whose theories have been highly influential in the military, sports, and business.

Military theories
During the early 1960s, Boyd, together with Thomas Christie, a civilian mathematician, created the Energy-Maneuverability, or E-M, theory of aerial combat. A legendary maverick by reputation, Boyd was said to have "stolen" the computer time to do the millions of calculations necessary to prove the theory, but it became the world standard for the design of fighter planes. At a time when the Air Force's FX project (subsequently the F-15) was foundering, Boyd's deployment orders to Vietnam were canceled and he was brought to the Pentagon to re-do the trade-off studies according to E-M. His work helped save the project from being a costly dud, even though its final product was larger and heavier than he desired. However, cancellation of that tour in Vietnam meant that Boyd would be one of the most important air-to-air combat strategists with no combat kills. He had only flown a few missions in the last months of the Korean War, and all of them as a wingman.

With Colonel Everest Riccioni and Pierre Sprey, Boyd formed a small advocacy group within Headquarters USAF which dubbed itself the "Fighter Mafia". Riccioni was an Air Force fighter pilot assigned to a staff position in Research and Development, while Sprey was a civilian statistician working in Systems Analysis. Together, they were the visionaries who conceived the LFX Lightweight Fighter program, which ultimately produced both the F-16 and the F/A-18 Hornet, the latter a development of the YF-17 Light Weight Fighter. Boyd's acolytes were also largely responsible for developing the Fairchild Republic A-10 Thunderbolt II or "Warthog" ground-support aircraft, though Boyd himself had little sympathy for the "air-to-mud" assignment.[4]

After his retirement from the Air Force in 1975, Boyd continued to work at the Pentagon as a consultant in the Tactical Air office of the Office of the Assistant Secretary of Defense for Program Analysis and Evaluation.

Boyd is credited with largely developing the strategy for the invasion of Iraq in the first Gulf War. In 1981 Boyd had presented his briefing, Patterns of Conflict, to Richard Cheney, then a member of the United States House of Representatives.[1] By 1990 Boyd had moved to Florida because of declining health, but Cheney (then the Secretary of Defense in the George H. W. Bush administration) called him back to work on the plans for Operation Desert Storm.[1][5][6] Boyd had substantial influence on the ultimate "left hook" design of the plan.[7]

In a letter to the editor of Inside the Pentagon, former Commandant of the Marine Corps General Charles C. Krulak is quoted as saying "The Iraqi army collapsed morally and intellectually under the onslaught of American and Coalition forces. John Boyd was an architect of that victory as surely as if he'd commanded a fighter wing or a maneuver division in the desert."[8]

The OODA Loop
Boyd's key concept was that of the decision cycle or OODA Loop, the process by which an entity (either an individual or an organization) reacts to an event. According to this idea, the key to victory is to be able to create situations wherein one can make appropriate decisions more quickly than one's opponent. The construct was originally a theory of achieving success in air-to-air combat, developed out of Boyd's Energy-Maneuverability theory and his observations on air combat between MiGs and F-86s in Korea. Harry Hillaker (chief designer of the F-16) said of the OODA theory, "Time is the dominant parameter. The pilot who goes through the OODA cycle in the shortest time prevails because his opponent is caught responding to situations that have already changed."

Boyd hypothesized that all intelligent organisms and organizations undergo a continuous cycle of interaction with their environment. Boyd breaks this cycle down to four interrelated and overlapping processes through which one cycles continuously:

    * Observation: the collection of data by means of the senses
    * Orientation: the analysis and synthesis of data to form one's current mental perspective
    * Decision: the determination of a course of action based on one's current mental perspective
    * Action: the physical playing-out of decisions
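The four processes above can be sketched as a simple control loop. This is only an illustrative sketch: Boyd never specified an algorithm, and the `Agent` class, its method names, and the toy "threat"/"evade" logic are all invented here for demonstration.

```python
# Minimal sketch of Boyd's OODA cycle as a control loop.
# All names and the toy decision rule are hypothetical illustrations.

class Agent:
    def __init__(self):
        self.perspective = {}  # the agent's current mental model

    def observe(self, environment):
        """Observation: collect raw data from the environment."""
        return dict(environment)

    def orient(self, data):
        """Orientation: analyze and synthesize data into a perspective."""
        self.perspective.update(data)
        return self.perspective

    def decide(self, perspective):
        """Decision: pick a course of action from the current perspective."""
        return "evade" if perspective.get("threat") else "advance"

    def act(self, decision, environment):
        """Action: play out the decision, which alters the environment."""
        environment["last_action"] = decision
        return environment

    def ooda_cycle(self, environment):
        data = self.observe(environment)
        perspective = self.orient(data)
        decision = self.decide(perspective)
        return self.act(decision, environment)

env = {"threat": True}
agent = Agent()
for _ in range(3):          # the cycle is repeated continuously
    env = agent.ooda_cycle(env)
print(env["last_action"])   # → evade
```

Note that the environment changed by `act` feeds back into the next `observe`, which is what makes this a loop rather than a one-shot pipeline.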

Of course, while this is taking place, the situation may be changing. It is sometimes necessary to cancel a planned action in order to meet the changes.

This decision cycle is thus known as the OODA loop. Boyd emphasized that this decision cycle is the central mechanism enabling adaptation (apart from natural selection) and is therefore critical to survival.

Boyd theorized that large organizations such as corporations, governments, or militaries possessed a hierarchy of OODA loops at tactical, grand-tactical (operational art), and strategic levels. In addition, he stated that most effective organizations have a highly decentralized chain of command that utilizes objective-driven orders, or directive control, rather than method-driven orders in order to harness the mental capacity and creative abilities of individual commanders at each level. In 2003, this power-to-the-edge concept took the form of a DOD publication, Power to the Edge: Command and Control in the Information Age, by Dr. David S. Alberts and Richard E. Hayes. Boyd argued that such a structure creates a flexible "organic whole" that is quicker to adapt to rapidly changing situations. He noted, however, that any such highly decentralized organization would necessitate a high degree of mutual trust and a common outlook that came from prior shared experiences. Headquarters needs to know that the troops are perfectly capable of forming a good plan for taking a specific objective, and the troops need to know that Headquarters does not direct them to achieve certain objectives without good reason.

In 2007, strategy writer Robert Greene discussed the loop in a post called OODA and You. He insisted that it was "deeply relevant to any kind of competitive environment: business, politics, sports, even the struggle of organisms to survive", and claimed to have been initially "struck by its brilliance".

Foundation of theories

Boyd never wrote a book on military strategy. The central works encompassing his theories on warfare consist of a several hundred slide presentation entitled Discourse on Winning & Losing and a short essay entitled Destruction & Creation (1976).

In Destruction & Creation, Boyd attempts to provide a philosophical foundation for his theories on warfare. In it he integrates Gödel's Incompleteness Theorem, Heisenberg's Uncertainty Principle, and the Second Law of Thermodynamics to provide a context and rationale for the development of the OODA Loop.

Boyd inferred the following from each of these theories:

    * Gödel's Incompleteness Theorem: any logical model of reality is incomplete (and possibly inconsistent) and must be continuously refined/adapted in the face of new observations.
    * Heisenberg's Uncertainty Principle: there is a limit on our ability to observe reality with precision.
    * Second Law of Thermodynamics: The entropy of any closed system always tends to increase, and thus the nature of any given system is continuously changing even as efforts are directed toward maintaining it in its original form.

From this set of considerations, Boyd concluded that to maintain an accurate or effective grasp of reality one must undergo a continuous cycle of interaction with the environment geared to assessing its constant changes. Boyd, though he was hardly the first to do so, then expanded Darwin's theory of evolution, suggesting that natural selection applies not only in biological but also in social contexts (such as the survival of nations during war or businesses in free market competition). Integrating these two concepts, he stated that the decision cycle was the central mechanism of adaptation (in a social context) and that increasing one's own rate and accuracy of assessment vis-a-vis one's counterpart's rate and accuracy of assessment provides a substantial advantage in war or other forms of competition.

Elements of warfare
Boyd divided warfare into three distinct elements:

    * Moral Warfare: the destruction of the enemy's will to win, via alienation from allies (or potential allies) and internal fragmentation. Ideally resulting in the "dissolution of the moral bonds that permit an organic whole [organization] to exist." (i.e., breaking down the mutual trust and common outlook mentioned in the paragraph above.)
    * Mental Warfare: the distortion of the enemy's perception of reality through disinformation, ambiguous posturing, and/or severing of the communication/information infrastructure.
    * Physical Warfare: the destruction of the enemy's physical resources such as weapons, people, and logistical assets.

Military Reform
John Boyd's briefing Patterns of Conflict provided the theoretical foundation for the "defense reform movement" (DRM) in the 1970s and 1980s. Other prominent members of this movement included Pierre Sprey, Franklin 'Chuck' Spinney, William Lind, Assistant Secretary of Defense for Operational Testing and Evaluation Thomas Christie, Congressman Newt Gingrich, and Senator Gary Hart. The Military Reform movement fought against what they believed were unnecessarily complex and expensive weapons systems, an officer corps focused on the careerist standard, and overreliance on attrition warfare. Another reformer, James G. Burton, disputed the Army test of the safety of the Bradley fighting vehicle. James Fallows contributed to the debate with an article in The Atlantic Monthly titled "Muscle-Bound Superpower", and a book, National Defense. Today, younger reformers continue to use Boyd's work as a foundation for evolving theories on strategy, management and leadership.

   1. ^ a b c d e f Coram, Robert (2002). Boyd: The Fighter Pilot Who Changed the Art of War. Little, Brown and Company. p. 355. ISBN 0316881465.
   2. ^ (Hammond, 1997)
   3. ^ a b Hillaker, Harry (July 1997). "Tribute To John R. Boyd". Code One Magazine. Retrieved 2007-01-25.
   4. ^ Daniel Ford, A Vision So Noble (2010), p. 11
   5. ^ Ford, Daniel. A Vision So Noble: John Boyd, the Ooda Loop, and America's War on Terror. CreateSpace (May 4, 2010) p. 23-4.
   6. ^ Wheeler, Winslow T. and Lawrence J. Korb. Military Reform: A Reference Handbook. Praeger; 1 edition (September 30, 2007) p. 87.
   7. ^ Wheeler, Winslow T. and Lawrence J. Korb. Military Reform: A Reference Handbook. Praeger; 1st ed. (September 30, 2007) p. 87.
   8. ^ Hammond, Grant Tedrick. The Mind of War: John Boyd and American Security. Smithsonian Books; Illustrated ed. (May 2001) p. 3.
   9. ^ United States Marine Corps, 1997
4  General / General Discussion / Re: Anti_Illuminati for dummies. The ultimate study guide for the layman. on: January 10, 2011, 07:29:35 pm

The OODA Loop model was developed by Col. John Boyd, USAF (Ret.), who drew on his experience as a fighter pilot in the Korean War. It is a concept consisting of the following four actions:

    * Observe
    * Orient
    * Decide
    * Act

This looping concept referred to the ability possessed by fighter pilots that allowed them to succeed in combat. It is now used by the U.S. Marines and other organizations. The premise of the model is that decision-making is the result of rational behavior in which problems are viewed as a cycle of Observation, Orientation (situational awareness), Decision Making, and Action. Boyd diagrammed the OODA loop as a continuous cycle through these four steps:

Cycling Through OODA

An entity (whether an individual or an organization) that can process this cycle more quickly than an opponent can “get inside” the opponent's decision cycle and gain the advantage.
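A rough way to see what "getting inside" an opponent's cycle means is to compare how many full cycles two entities complete in the same span of time. The cycle times and the notion of a "stale" reaction below are invented for illustration, not drawn from Boyd:

```python
# Toy comparison: the entity with the shorter cycle time completes more
# OODA cycles in the same horizon, so the slower entity is increasingly
# reacting to situations that have already changed. The numbers here
# are hypothetical.

def completed_cycles(cycle_time, horizon):
    """Number of full cycles finished within the time horizon."""
    return int(horizon // cycle_time)

fast_pilot = completed_cycles(cycle_time=2.0, horizon=60.0)  # 30 cycles
slow_pilot = completed_cycles(cycle_time=3.0, horizon=60.0)  # 20 cycles

# Each surplus cycle is a chance for the faster side to change the
# situation before the slower side has finished orienting to it.
stale_reactions = fast_pilot - slow_pilot
print(fast_pilot, slow_pilot, stale_reactions)  # 30 20 10
```

The absolute speeds matter less than the ratio: any persistent gap compounds over the engagement, which is Boyd's point about fast transients.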


Observe

Scan the environment and gather information from it.

Orient

Use the information to form a mental image of the circumstances; that is, synthesize the data into information. As more information is received, you "deconstruct" old images and then "create" new ones. Note that different people require different levels of detail to perceive an event. We often imply that the reason people cannot make good decisions is that they are bad decision makers — rather like saying that the reason some people cannot drive is that they are bad drivers. In reality, most bad decisions come from failing to place the information we do have into its proper context. This is where orientation comes in: it emphasizes the context in which events occur, so that we may facilitate our decisions and actions. Orientation helps turn information into knowledge, and knowledge, not raw information, is the real predictor of good decisions.

Decide

Consider options and select a subsequent course of action.

Act

Carry out the conceived decision. Once the result of the action is observed, you start over. Note that in combat (or in competing against business rivals), you want to cycle through the four steps faster and better than the enemy; hence, it is a loop.

Interactive Web

The loop doesn't mean that individuals or organizations have to observe, orient, decide, and act strictly in that order. Rather, picture the loop as an interactive web with orientation at the core. Orientation is how we interpret a situation, based on culture, experience, new information, analysis, synthesis, and heritage.

Thus, the loop is actually a set of interacting loops that are kept in continuous operation.


“OO-OO-OO!” is the sound of a broken OODA Loop.  The goal is to get your adversary to never make decisions or to act on them.  You want your adversary to constantly observe and orient over and over again until they are overwhelmed.
5  General / General Discussion / Re: Anti_Illuminati for dummies. The ultimate study guide for the layman. on: January 10, 2011, 07:28:49 pm
Read this 52-page white paper on Information Warfare:

A Research Paper
Presented To
The Research Department
Air Command and Staff College
In Partial Fulfillment of the Graduation Requirements of ACSC
Major Mary M. Gillam
March 1997

6  General / Wikileaks operation and the concept of controlled opposition / Alex Jones on C2C talking about Wikileaks NOW ON YOUTUBE! on: December 10, 2010, 12:41:37 pm
Alex Jones on C2C talking about Wikileaks NOW ON YOUTUBE!

Coast To Coast AM Timothy Schultz, Alex Jones WikiLeaks Controversies 7 12 2010 1 - 12

Alex starts about 8min 50sec in on video above.

There are 12 parts.

Double click on above video, to get parts 2-12.
7  General / Wikileaks operation and the concept of controlled opposition / Re: Did the State Dept. release the cables to Wikileaks on purpose to usher in IPv6? on: December 10, 2010, 12:39:34 pm

May 18, 2007 • Volume 5 • Number 5


Next Generation’s Four Challenges

IPv6 faces formidable, but not insurmountable challenges.

“Culturally people know what IPv6 is today,” states Education's Peter Tseronis.

“I’m known as the IPv6 guy at Education. I get the forwarded emails or what have you and the phone calls. People at least are talking about it. A year ago it was, what? And you’d say the Internet today really runs on IPv4, and people would say, what? Now they get it: IPv6 is the next generation.” For Tseronis, Challenge 1 is culture. Change never comes easy, but he sees growing IPv6 acceptance.

Challenge 2 is Money. Take a cue from OMB: it’s the opportunity to look at your refresh dollars and ask, “Hey, look at your network. Does it need to be refreshed? Will you refresh it?”

Tseronis says to put that procedure, that process, in place. “You still may not get the money or the funding, but at least you can build a business case for getting investments to upgrade your network, and by the way, you might as well procure an IPv6-compliant product.”

Challenge 3 is Policy.

“We are in the midst of defining some acquisition policies, testing policies, accreditation policies. We are working with the vendor community on issues that have to be ironed out before we go and buy a product and say hey I want this device and I want to make sure it’s an IPv6 compliant router, or switch, or firewall,” Tseronis explains.

“If I’m a customer in the federal government, I’m going to Cisco saying, ‘Hey, I want an IPv6 product.’ Well, I want them to say, OK, these are the products on this approved product list that you can purchase. It isn’t enough just to say I want everything IPv6 compliant, whether applications or the hardware that exists today. We are still defining the regulations around creating something that can be a pick-and-choose type of scenario.”

Challenge 4 is “Thinking Out of the Box”.

“It really comes down to this: you have to think outside of what you want to be doing with this in the future and how you want to be doing it, and make the assumption that IPv6 infrastructure is going to enable that. We didn’t think about that when the Internet first popped up; we just thought it was cool to surf the net. Now we are saying there’s going to be a new infrastructure and you will be able to do more things, like autoconfiguration, recovery, etc. But people are a little bit hesitant, because people say, well, what we have today isn’t broken, so don’t try to fix it.”
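The autoconfiguration Tseronis mentions refers to IPv6 stateless address autoconfiguration, in which a host derives its own interface identifier rather than waiting for a server to assign one. A minimal sketch of the classic modified EUI-64 derivation from RFC 4291; the MAC address and prefix here are made-up illustrations, not from the article:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a 48-bit MAC:
    flip the universal/local bit, then insert 0xFFFE in the middle."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Pack into four 16-bit hex groups, as they appear in an IPv6 address.
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# A host can append this to a router-advertised /64 prefix to form a full
# address, with no DHCP server involved:
print("2001:db8:1:1:" + eui64_interface_id("00:1a:2b:3c:4d:5e"))
```

This is only the address-formation step; the full mechanism (router solicitation, duplicate address detection) is specified in RFC 4862, and modern stacks often substitute random identifiers for privacy.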

The funding issue isn’t lost on State Department’s Charlie Wisecarver.

“Clearly for the Department of State it’s a funding question.” There are competing requirements out there. Becoming IPv6 compliant by June 2008 will require a significant amount of funding, and State’s widespread organization also presents some challenges, which will be alleviated as COTS products become more available.

“Right now to try to sell IPv6 to senior executives in the State Department, it’s not sexy, there’s nothing really there for them to grab on to. They’d much rather fund legacy programs or other types of activities. I think that the good news is, as more and more COTS products become available, that’s going to make it a much easier sell. Folks will begin to realize how important it is to transition to IPv6.”

For Commerce’s John McManus, selling what’s important over the long haul is the key.

“I think the big challenge, this is a day to day challenge, is to get everyone to understand that we need to be thinking about the long term,” said McManus.

“We shouldn’t be selling IPv6. We need to be selling the capabilities that IPv6 brings to our mission. That really is a very large challenge, because when you go in to talk to senior leadership, when we did the network evolution at NASA, as we are doing our network evolution at the Department of Commerce, I don’t mention IPv6 other than to say our gear will be compliant with the mandate. I talk about how this is going to enable the Department of Commerce, NOAA, the National Weather Service, to provide better services to the citizens or to provide better capability to our internal users.”

McManus says the focus can’t be on the protocol itself; the point is that the protocol brings new capability. The messaging has to be focused on enabling new capabilities that allow us to do new things for the citizens and new things for our users.

For GSA’s Fred Schobert, the major challenges are in terms of training and service support. “There will be transition challenges for our agency customers we are going to have to address and work with them on; and then basic overall training. One of the things we need to do also is become more crystal clear I believe at the user level on what the benefits for IPv6 are. But I have to think any investments made will be based on their understanding of what the ultimate benefits will be. So I think as a group and as an industry government team we need to become clearer on exactly what are the benefits for making that investment.”

Cisco’s Dave West is encouraged that the 2008 deadline has gotten government moving.

“I think people looked at the deadline and thought that everything needed to be IPv6 capable by the deadline, that government agencies would have transitioned by the deadline, when in reality, all it did was energize the government to start planning and preparing for this transition,” said West.

West also notes there is still a lot of work to do from the product perspective, from a solution perspective, and on education within the government.

“There’s absolutely work that needs to be done in terms of preparing government agencies and entities for this transition; preparation needs to be done to make sure that any move towards any new protocol doesn’t impact day to day operations.”

West sees a lot of movement to get the job done over the next 24 to 36 months.

“I think as government and agencies look at what services they want to provide, what services they want to enable for IPv6, they may take different approaches on how they enable those services and how they take advantage of what that protocol offers.”

Education is also top of mind for Command Information’s Tom Patterson.

“The more people understand their day job, and the more they understand the new capabilities that are in the Internet they already have, the less frightened they are of the change, and some of the key reasons that were bandied about last year are really falling by the wayside,” states Patterson.

According to Patterson there’s no v6 to buy, so you don’t have to go out and get a line item for a big v6 thing. You are already buying routers, you are already buying computers, you are already buying the phones; you just need to specify, as GSA Networx did, that when you buy these services they should support the new versions of the Internet. So that has really taken that big fear away. Education then unlocks the art of the possible.

“In reality if you talk to them, if you educate about what they do in their language, and that’s what our whole series of training exercises do, is talk about supply chains, talk about telework, talk about cars and mobile and all these things that the government lives off of, that run our government. If you talk about how it affects that, then they tend to pick it right up.”
8  General / Wikileaks operation and the concept of controlled opposition / Re: Did the State Dept. release the cables to Wikileaks on purpose to usher in IPv6? on: December 10, 2010, 12:39:16 pm

May 18, 2007 • Volume 5 • Number 5


On Your Mark

One year later. It’s time to mark progress on IPv6.

A year ago, the Federal Executive Forum presented one of the first top-level discussions of IPv6 and its implications. Now, one year later, this Federal Executive Forum panel has reconvened to talk about successes and continuing challenges.


Commerce’s John McManus is a leader on the IPv6 government transition committee. He has spent a good deal of his time extolling the virtues of IPv6.


“When we got together a year ago we were really in the early stages of moving out on IPv6 and over the last year we’ve really been focusing on communications, planning, and relationship building,” says McManus.


“We’ve spent a lot of time out making sure that there’s a clear scope for the federal transition; that each of the agencies is getting the fundamental steps of their planning done so that we are working towards common success criteria and a common goal.”


The IPv6 Committee has been doing a lot of outreach across the federal government to build the relationships to allow the smaller agencies to leverage the strength and experience of the larger agencies. “We’ve also spent a lot of time focusing on the opportunity and working to get people to understand that there really is a long transition,” said McManus.


“This is a part of our normal network evolution and that we need to look past just June of 2008 when we will bring IPv6 onto our core network and start communicating those new capabilities into our customer community so that they can start developing programs and projects that leverage those.


“So in the past year we’ve really been focused on communications, planning, and then getting the message out on those new capabilities, and I think we’ve really started to see a strong uptake in the user community for ways that they now envision IPv6 adding value to them.”


For committee co-chair, Education’s Peter Tseronis, the last year has been a collaborative learning experience, and an awesome one from the standpoint of the federal government taking charge.


“A year ago, June 30, 2008 seemed a long way off. And there were a lot of early milestones and requirements that were put on agencies by way of memos and so forth. And it was a lot of action really quick and I like to think that since then, since the last time we chatted, this is kind of that lull period. OK we’ve got these early deliverables; we’ve got time to get to June 30, 2008.”


A year later, there is still much collaborating with DoD to do, and more to be done to present a unified front from the federal government perspective, that “we are one unified team,” says Tseronis.


“We are trying to synergize with the vendor community and not just treat this as another government mandate. But really look at it beyond the fact that it is some technical plumbing infrastructure perspective. This is an opportunity for the federal government to look and say how do I want my network to look in the future?


What can I do to modernize it and take advantage of this opportunity versus looking at it as this is something that we have to do and then it’s going to be over with. It’s really laying the foundation for an infrastructure to do these really neat things that we’ll probably be talking about a little later.”


Charlie Wisecarver at the State Department has also been very active in public/private partnerships, working with the vendors out there and understanding what products are going to be available and when they are going to roll. “We’ve been in a watching, studying, and planning role right now, and we’ve been working on that very hard over the last year. We are watching the marketplace drivers,” explains Wisecarver.


“We are also looking beyond IPv6 at how that’s done in some of our lab testing and what are going to be the next practices, and it’s very, very important to the Department of State. We have diplomats in over 260 missions around the world, so we see this as a very exciting opportunity, and it’s really going to enable our diplomats in their work.”


GSA is in a little bit of a different position, according to GSA’s Fred Schobert, Networx program manager. “We had to deal with IPv6; we had to figure out how we were going to specify it in the requests for proposals to industry that have now been awarded. And what we did is: IPv6 is clearly specified as a requirement in the Networx Universal and Enterprise programs, and we put specs and interfaces in there that contractors have to be able to meet and deal with from a backbone standpoint.”


For Schobert, the next step GSA is looking at is how “to better support agency customers in terms of certification and testing, training, those kinds of things, and we are looking right now at what to do to stand up a leadership role to be able to support our customers.”
9  General / Wikileaks operation and the concept of controlled opposition / Re: Did the State Dept. release the cables to Wikileaks on purpose to usher in IPv6? on: December 10, 2010, 12:38:49 pm
All of this chatter about a "Digital Pearl Harbor" going back to the 1990s, and then, on the very anniversary of the real Pearl Harbor, Assange gets set up and arrested for some feminist sex "crime".  This acts as a catalyst for the hacktivist community to DDoS financial sites such as Visa and MasterCard.  This is the very same attack that CSIS, Heritage, CATO, FrontPage, CFR, NCOIC, DHS, DoD, etc. have been talking about for at least a decade.

While this is unfolding, Richard Clarke (the guy who coined the phrase "Digital Pearl Harbor") and General Michael Hayden and Jeffery Carr are meeting at Georgetown to discuss this "Digital Pearl Harbor".

Then you have the State Department Chief Information Officer claiming that they need IPv6 to protect their cables back in 2007!

This is way too insane.

The coincidence theorists are going to have fun with this one.
10  General / Wikileaks operation and the concept of controlled opposition / Re: Did the State Dept. release the cables to Wikileaks on purpose to usher in IPv6? on: December 10, 2010, 12:37:16 pm

Wisecarver's bio

Charlie Wisecarver
Deputy Chief Information Officer and Chief Technology Officer
Department of State

Mr. Wisecarver, a career member of the Senior Foreign Service, assumed the duties of the Deputy Chief Information Officer and Chief Technology Officer of the Department of State on June 5, 2006. Previously he was the Director of the State Messaging and Archive Retrieval Toolset (SMART) Program Office. SMART is the Department's top information technology priority and will modernize disparate legacy-messaging systems and establish a single centralized system for all types of documents including telegrams, memoranda, e-mails, and Diplomatic Notes.

Approximately half of Mr. Wisecarver's career has been spent working as an Information Management Specialist in overseas missions. He has served in Ecuador and Mexico managing the Embassy's computer systems and telecommunications networks. In domestic assignments Mr. Wisecarver served as the Director of the State Department's Messaging Systems Office for four years. In this position he was responsible for enterprise telegram and e-mail delivery. Prior to Y2K he oversaw the modernization of all consular computer systems around the world.

Before joining the State Department, Mr. Wisecarver served as a computer programmer analyst for Department of Defense and a Peace Corps Volunteer in Niger. Mr. Wisecarver is married with two children.
11  General / Wikileaks operation and the concept of controlled opposition / Did the State Dept. release the cables to Wikileaks on purpose to usher in IPv6? on: December 10, 2010, 12:36:45 pm

May 18, 2007 • Volume 5 • Number 5


Digital Pearl Harbor

Will IPv6 make us more secure? Experts give their opinions.

It seems like in the past with all new technologies come new vulnerabilities, said Jim Flyzik during the Federal Executive Forum on IPv6.

“Often times new technologies hit the market and then we are catching up later trying to get the security fixes in place because the so called ‘bad guys’ out there find ways to exploit new technologies. There are some concerns today about a digital Pearl Harbor or a terrorist attack taking down networks, attacking networks.”

The question is: will IPv6 improve security? Federal Executive Forum panelists weighed in on the issue.

Command Information’s Tom Patterson put the issue in perspective this way.

“Keep in mind that the Internet we use today, and we just call it the internet. We don’t know what version number it is and no one really cares. It was designed in the ‘70s and the concept was you had to be a trusted person before you were allowed to connect at a university or a research division or something like that. The concept of the general person coming along and connecting to the internet wasn’t part of the design.”

That means all the security now in place has been “added on”. According to Patterson there actually is a really good security standard now called IPSec. The problem is not enough people use it. The banks use it for very high-volume transactions; maybe the State Department will use it for a top secret cable or something. But for the rank and file, it’s not being used to protect their credit cards or to safeguard their privacy, and it can be.

“So when IP version 6 came out and started to be thought of as the next generation, the idea was: leave everything old still working, but let’s see what we need to fix,” explained Patterson.

“One of the first things that we fixed was: let’s take whatever we know how to do really well, that is IPSec, the best security that we know how to do, and make that default to the on position instead of the off position. So you don’t have to be a rocket scientist in order to use good security now.”

However, that’s not the whole story, and it’s certainly no security silver bullet. “It’s like when WiFi came out. If you remember, a lot of CIOs said we don’t have a WiFi problem because we don’t allow it. And then there were all these chalk marks outside their buildings saying this is where you get free WiFi access, because people were just putting it in because it’s easy. That is possible now with IPv6, and you can’t just ignore it or just outlaw it in your organization, because it’s built into Apple, it’s built into Windows XP, it’s built into half the cell phones you are buying today. And some people are going to turn it on.”

So you need to be addressing the security implications. If you address security on a proactive basis, it absolutely changes, and it changes for the better.

According to Commerce’s John McManus, there’s a lot of work going on looking at security in the IPv6 world. “There are a lot of groups looking at security in the services that we provide today. And I think that Tom made a critical point. Those risks exist today. When you go and look at when IPv4 was designed, it has matured; security has been bolted on to IPv4. In IPv6 we’ve had the opportunity to actually design that in.”

When you employ a new technology there usually is a period of increased risk. And that risk comes from the simple fact that no matter what testing you do in the lab, and I think we do test very thoroughly, when you hit the wild, you hit some situations that you have not tested for.

“So one of the key things that we are doing now is working together as a community, there’s a working group that’s a part of the IPv6 working group, we are doing outreach into the DOD, outreach into all the carriers and equipment providers to start testing that equipment in a live environment on test networks so that when we go live we are sure that we are achieving at least the level of security, if not better, than we have in the networks we have today.”

“I just wanted to add that when you think of security regardless of the Internet protocol, you think of confidentiality,” says Education’s Peter Tseronis. “You think of integrity and authentication. And IPv6 isn’t going to be the panacea that says I’m going to take care of your mis-configured server, your poorly designed application, your poorly protected Internet sites. You need to have the skills to implement and maintain.”

Tseronis knows that not everything will be smooth: there will be some Internet engineers and systems engineers out there who are ready now, but others who are running for the foothills saying we don’t need to go there.

“But at the end of the day, you still have to maintain your security in such a way that, whether it’s IP stack or some other method, you are still going to have to protect it. So it’s not that it’s more secure, it just isn’t going to be less secure. You still have to maintain those policies in your network.”

Security is also on the mind of Cisco’s David West.

“A move to anything new, any new capability, produces threats and risk.  But if you do proper planning, validation, testing, a phased implementation of how you are going to introduce something new, you minimize those risks,” says West.

“One of the things that we are trying to make sure occurs is that as they make this transition, and they integrate this new service, they do it well thought out. What’s more interesting I think in terms of security, is the new application services that will be enabled as a result of the protocol.”

“We’ve got now a very large address space where many devices can have addresses. That introduces a potential security risk, but again, with proper planning, with consideration of what needs to happen from the vendor community in testing and validation, you can minimize those risks and really start to take advantage of what the protocol offers.”
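The scale of the “very large address space” West describes can be made concrete with some back-of-the-envelope arithmetic; a quick sketch in Python (the numbers follow directly from the 32-bit vs. 128-bit address widths, not from anything in the article):

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
v4_space = 2 ** 32
v6_space = 2 ** 128

print(v4_space)  # 4294967296, about 4.3 billion addresses total
# IPv6 offers 2**96 times the address space of IPv4:
print(v6_space // v4_space == 2 ** 96)  # True
# Even a single standard /64 subnet holds 2**64 interface IDs,
# over four billion times the size of the entire IPv4 Internet:
print((2 ** 64) // v4_space)  # 4294967296
```

That abundance is exactly the double-edged sword the panel is describing: every device can be individually addressable, which enables new services but also widens the set of reachable endpoints that have to be secured.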

At GSA, according to Fred Schobert, “We fully realize that with IPv6 there’s a lot of promise with security but we realize there’s a lot of work that remains to be done to be able to implement it with the agencies. When we talk with the agencies about IPv6 we are talking about things like IPSec but you are also talking about encryption and if you think about it, the security standards need to be defined, they need to be precise. The information security tools that the agencies will use need to be developed and they need to be there.”

Schobert thinks they are moving toward an overall network monitoring and management facility, but that FISMA guidance needs to be considered, because right now agencies have to go through certification and accreditation, and if there are any holes they won’t be able to do anything. And finally, he thinks they need to take a look at what needs to be done in the application area to best support IPv6 and what applications are required.

“We do take security very, very seriously,” said Charlie Wisecarver, State Department CIO.

“I think IPv6 is going to introduce some new security concerns, but ultimately we will be better off as we become smarter about this and adjust our policies and procedures. The possibility of denial of service is always a very, very serious concern for us, as so much of our work is done through the Internet. I think this can all be mitigated through some monitoring tools. As for intrusion detection systems, we haven’t heard too much about those types of tools that will help us identify those intrusion sets and how we can mitigate this quickly.”
12  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:30:45 pm

March 9, 2007 • Volume 5 • Number 2



MODERATOR/HOST Jim Flyzik, The Flyzik Group


·         Patti Titus, Chief Information Security Officer, TSA

·         Dennis Heretick, Chief Information Security Officer, Department of Justice

·         Dr. Ron Ross, Chief Computer Scientist, NIST

·         Phil Heneghan, CIO, USAID

·         John McCumber, Strategic Program Manager, Public Sector Group, Symantec Corporation

·         Tim Kelleher, Vice President, Enterprise Security Services, Federal Systems, Unisys Corporation






We are coming to you from the University of Maryland, University College Cyber Security Conference. Today we will discuss critical issues facing government and industry leaders in the field of information technology security. With me today on the show are (list of panelists).  Let’s get right into the issues and first level-set the audience by having each of our panelists talk a little bit about their role in cyber and information systems security. Just go right down the table and start with Dr. Ross. Can you give us an idea of what your roles are?

Good afternoon Jim. My role at NIST is to lead the FISMA implementation project; that’s the group that develops all of the implementing security standards and guidelines that the Federal government needs to employ to be FISMA compliant.

Dennis Heretick, over at Justice. I know that Justice has done a lot in the area of cyber security. Can you tell us your roles there Dennis and your responsibilities?

Sure Jim. I’m the Deputy CIO for Information Security at Justice, and as such I’m responsible for our agency-wide IT security program. That includes requirements for risk mitigation, as well as implementation strategies and our performance.

Having worked in law enforcement in the past, I know how critical some of those issues are. Tim Kelleher at Unisys Corporation, give us a sense of what your roles are there, Tim?

Thanks Jim. As you said, I am the Vice President of Enterprise Security at Unisys, and that’s a fairly large group of people who support the Federal government agencies; it’s a pretty full-spectrum operation, everything from consulting to systems integration to full-service support capability for government agencies.

Great. I know a lot of industry, a lot of companies are putting more emphasis into that cyber security area as a field that you need to grow. Patti Titus over at TSA where I’m sure there are a lot of unique challenges being a relatively new agency in town. Patti, can you give us an idea of your role at TSA?

Sure. At Transportation Security Administration I was charged in the early days with standing up and developing an IT security office. We had the absolute pleasure of designing that based on the NIST standards so we are probably one of the few organizations that are solely based on NIST because we are such a new organization. Part of the role of the CISO is also looking at the transportation sector so we are starting to branch off into that area, taking what we have learned within TSA and moving that into the sector itself, so we are looking forward to that challenge as we grow and mature further.

Quite a challenge, to not only deal with the subject matter but to do it in a start-up environment where, as you mentioned, you started from scratch, so we appreciate everything you are doing over there.  Phil Heneghan is a CISO but also an acting CIO; a little later in the show we’ll come back and talk about CIO roles versus security officer roles, but Phil, perhaps you could give us some idea of how you are working now at USAID.

My role there as the Chief Information Security Officer also includes the role of Chief Privacy Officer; obviously the two are greatly connected. And we are a small enough agency that it’s all in one place. On the other hand, we are a worldwide organization with offices in 80 countries around the world, so the security challenge is pretty unique.

Sure, I bet in terms of looking at world wide standards and differences in what is going on in this country versus other parts of the world. John McCumber at Symantec, I guess when we all think about security companies; Symantec is one that comes to mind. I know Symantec has expanded quite a bit over the years also, but could you give us an overview of your role there at Symantec?

Certainly Jim and I hope you do think of Symantec when you think of security. One of the challenges and one of my key responsibilities is ensuring that Symantec’s solutions and services are able to address the needs of our Federal government.  We want to make sure that Symantec’s technology and their services as well as our ability to bring in information across the internet are targeted to help our government agencies be able to protect their infrastructure, their information and their interactions.

Terrific. Let’s get into some of the key issues that you are dealing with, some of the priorities. We’ll first talk priorities and then talk some challenges. Let’s start with Tim Kelleher at Unisys. Tim, what do you think are some of the major priorities right now that you are addressing in your day to day work?

Well, like most companies we are always looking at what our customers’ needs are and where we need to align our capabilities and our services to meet those needs. Right now, I see two or three primary areas where we are seeing a demand for help.

First is the whole identity and access management arena: protecting data and making sure that only the right people have access to it. Of course, all government agencies are under the gun a bit in terms of meeting the HSPD-12 imperative, so we are gearing up to support that goal as well.

The second area where we are seeing a lot of need for support is around the FISMA requirement for certification and accreditation of systems. There are a lot of systems in any enterprise, and certainly the Federal government is not short on quantities of systems out there, and it’s a pretty robust process that everybody’s obligated to go through to certify these systems. It takes a lot of help from private industry people like Unisys, which in the last couple of years has actually supported just under three hundred engagements helping Federal agencies get those certifications completed.

And the final area is one that I mentioned earlier: the whole notion of managed services. Managing security is a difficult thing, it is getting more and more complex every year, and it costs a fair amount of money to buy the tools and get the equipment to really manage that environment, so many people are now turning to private industry to help with that. That is the whole area of managed security services.

Great. FISMA’s come up a couple of times already, it makes me jump back to Dr. Ross, and I know NIST does a lot of work with FISMA and standards for FISMA and so forth. What are the priorities that you face Dr. Ross today? What are some of your key priorities?

Read the bulk of the transcript here:

Skipping ahead...

We’ve got roughly about 10 minutes left in the show and I want to key in; we usually try to end the show with more of a vision kind of discussion and thinking about the future. I want to give you a few opinions. My opinion is that we still remain somewhat reactive, but we are getting better, and I think I’ve heard from many of you who are a bit more proactive.  To date I feel like there’s been a lot of hype around viruses and different types of malicious software and things like phishing attacks. But I would argue, correct me if you disagree, that to date most of our problems have been expensive annoyances. They clearly have been costly, and they have been an annoyance.  However, when you begin to think to the future of things like cyber terrorism, or sophisticated IT tools in the hands of those trying to harm us, attacks that we’ve seen emanate from foreign countries, or viruses that find a way into FAA systems or nuclear reactor sites or whatever.  I’m making things up here, but there’s one school of thought, with some books actually predicting, that if we remain reactive and don’t get more proactive, we could be facing the day when the United States could be attacked by a so-called digital Pearl Harbor. I’m curious how each of you would react to that question. Is it hype? Is it something we need to be concerned about? John at Symantec, what do you think?

I’ll be happy to address that. I believe the term digital Pearl Harbor was coined by John Markoff of the New York Times, and if memory serves me correctly that was in 1994. I actually kept a copy of that article. What we’ve seen transpire, and I really mean the attacks of 9/11 and other kinds of evolution, has really put that into perspective, I think, and it’s really changed our focus as to how information attacks and threats to our information infrastructure have evolved.  One of the other things you’ll notice is that in the last two years you haven’t seen the Washington Post or the New York Times publish a report on a wide-spread malicious code attack. It used to be something you’d see every six months. Now the threat has evolved to become much more targeted, and you see that specifically in the empirical studies that we’ve done. So part of understanding this is keeping track of that threat as it evolves and moves that way, and then separating it from these terminologies, asking whether people use them to build a program or sell newspapers or sell books, or whether it fits within that constellation of the risk model of threat, vulnerability, and assets, all counterbalanced by the various countermeasures we deploy. And then take a prudent approach in dealing with that.

Well said. Phil, what do you think?

As Dr. Ross just alluded to, there is always a residual risk, so a digital Pearl Harbor can happen and we all have to accept that. How you build your infrastructure and how you manage it determines how well you can deal with that when it comes, if it comes.  Again, USAID, since we are so widely distributed in 80 countries around the world, it’s sort of easy to lose a part of it and still work. So from my perspective, and I realize that I’m looking at this selfishly and not futuristically, I think that we are OK because we can continue to operate if there is a major problem in a single place.


I think it’s a reality, I think it’s a very real threat. The residual risk acceptance that we have on a daily basis with our systems with our vulnerability acceptance where you need to get something operational and you have to accept some residual risk with that, I think it is a reality.  It’s there and it is very possible. I think that you need to have very strong contingency testing, you need to have disaster recovery planning, you need to, as you said earlier, identify your critical assets so that you know what you need to reconstitute if that happens.  So I think it’s very possible and I think that as CISOs we would be hard pressed to say otherwise. It’s getting the visibility into the problem and situation and be able to be nimble enough to react. The whole concept of telecommuting is actually helping in that we have a possibility to be able to work remotely, but it also increases the possibility of the threat of that digital Pearl Harbor.

Well said. I guess 9/11 has forced us to think the unthinkable. So you can’t just dismiss this stuff any more. Tim, what do you think?

Well, I’ll admit that in preparing for this I actually did a Google search on “digital Pearl Harbor” and got no less than 1.25 million hits. So it’s clearly a juicy topic, and as with most juicy topics, opinions vary widely. At one end of the spectrum, the question isn’t even up for discussion: it has already happened. Some would point to SQL Slammer, which knocked out 13,000 Bank of America ATMs, as an example. The MSBlast worm, which is near and dear to Marylanders here, actually shut down the Maryland Department of Motor Vehicles. And there is unsubstantiated speculation that MSBlast had a lot to do with the root cause of the 2003 blackout that hit the northeast US and Canada. I think something of that scale fits into the category of a digital Pearl Harbor. So one end of the spectrum says it has already happened, and clearly, if that’s true, it can happen again. We do need to be diligent. The other side of the equation is the fact that long before cyber security, when security was just security, the worst security threats were always from insiders. So while we speak of cyber attackers across the pond, I still think it’s very true today that you’ve got to be watching inside, where people have access, know what they are looking for, and can gain access.

Good point. Make a visit to the Spy Museum downtown and hear about all those insider threats. Dennis?

Well, Jim, we are totally dependent on our IT infrastructure and on the information, so there’s no doubt it would have that impact. I guess one thing about living a long time is that you learn a lot, and I don’t get up in the morning without looking in the mirror and dreading picking up my cell phone, because I don’t want to have to deal with anything until I get to work if I don’t know about it already. It’s like a bumper sticker I saw a few weeks ago that said: inside every old person is a young person wondering what the hell happened. I think each of us in this business worries about coming in to work and wondering what the heck happened. We have put a huge emphasis on incident response and contingency planning. Drawing on my DOD experience, we run an annual department-wide exercise in the Department of Justice; the CIOs participate, and we go through the steps of escalating an event and working it. I think that’s just critical. No matter what proactive work we do that we are so proud of, it takes just one small event to escalate into a very disastrous situation.

And with the interconnectivity among so many computers and systems these days, that domino effect can take things down faster than one can get in front of the process to stop it. Dr. Ross, what are your thoughts on a digital Pearl Harbor?

I agree with Tim very strongly. If you look at Pearl Harbor, it was an isolated attack that did serious damage, but it certainly didn’t bring down the entire country, and the digital Pearl Harbor analogy has been made to sound as if everything would stop working in a few seconds. I think we’ve already experienced these kinds of attacks. Clearly our Federal agencies are under attack every day from very serious adversaries using very sophisticated tools to try to get into these very critical systems. I think it’s already here. The question is whether, with our current technology and our best policies, procedures, and practices, we can do enough in a defense-in-depth strategy to withstand these kinds of attacks. I think we are doing better, but we still have a long way to go.

Great. Thanks very much. Let me take a couple of summary notes on what I think we heard from today’s panelists. First, we need to reframe the conversation and talk about risk and risk management, and agencies need to look both within their own agency or corporation and at the supply chains and partners they depend on: can you trust those other entities? Identity management techniques come into play, as does RFID tagging, which is a whole other set of subjects we can talk about some day. I also heard a lot of very positive comments about proactivity: we’ve got to be more proactive in addressing cyber security issues and vulnerabilities, identifying them and getting out in front. From the last question, we heard that it’s probably not feasible to identify every known vulnerability and threat, because as the technology changes, so do the vulnerabilities and threats. So in order to adjust or react to a major threat, we need to have resilience in place, along with backup and contingency plans.
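The risk framing the panelists keep returning to (threat, vulnerability, and asset value, counterbalanced by countermeasures) can be sketched as a toy scoring model. The formula, the 1-5 rating scale, and the example values below are illustrative assumptions, not any agency's actual methodology.

```python
# Toy qualitative risk score: residual risk grows with threat, vulnerability,
# and asset value, and is reduced by countermeasure effectiveness.
# All inputs are rated on a 1-5 scale; the formula is illustrative only.

def risk_score(threat: int, vulnerability: int, asset_value: int,
               countermeasure: int) -> float:
    """Return a residual-risk score; higher means more exposure."""
    exposure = threat * vulnerability * asset_value   # raw exposure, 1..125
    return exposure / countermeasure                  # reduced by defenses

# Example: high threat (5), moderate vulnerability (3), critical asset (5),
# decent countermeasures (4)
print(risk_score(5, 3, 5, 4))  # 18.75
```

A model this crude is only useful for ranking: it makes explicit that strengthening countermeasures or reducing vulnerability lowers residual risk, which is the trade-off the panelists describe accepting daily.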


With that I want to thank my guests.

13  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:30:09 pm

Telcordia Warns of ‘Digital Pearl Harbor’

Be afraid. Be very afraid.

According to three cyber security experts at Telcordia Technologies Inc., the networking industry is headed for a “digital Pearl Harbor”: a security breach so serious that it creates major outages and serious economic damage.

US government officials, including the Obama White House, are well aware of the danger, which is one reason the President appointed cybersecurity czar Howard Schmidt. But the telecom and computing industries also need to be engaged in the process, which will require some changes in the way business is done today, says James Payne, senior VP/general manager of Telcordia’s National Security and Cyber Infrastructure unit.

Payne says imposing security standards on today’s converged-yet-diverse Internet service delivery community can be so complex that it has some of those meeting to discuss the security challenge pining for the good old days, when monopoly AT&T Inc. (NYSE: T) ruled the roost and could offer gold-plated security, albeit at its ratepayers’ expense.

Convergence, the move to an all-IP infrastructure, and the best-effort nature of IP all play a role in the growing security challenge, as does the fact that organized crime, working on behalf of its own greed or rogue nations, now runs much of the cybercrime activities.

Payne quoted former national security adviser Richard Clarke as saying recently that the cyber cartels are generating more money than the drug cartels because of known exploitable vulnerabilities.

Multiple industry standards groups are attempting to address the issues, says John Kimmins, Telcordia fellow in security services and solutions. These include the Alliance for Telecommunications Industry Solutions (ATIS), the Internet Engineering Task Force (IETF), the International Telecommunication Union (ITU), the 3rd Generation Partnership Project (3GPP), and various government agencies such as the Departments of Defense and Homeland Security, but no one agency is in charge of the effort.

“There is not one place to go and plug it [security] in — each standards group has what it embraces, which is one of the problems,” Kimmins says.

One thing that would help immediately, Payne says, is legislation similar to that passed in preparation for perceived Y2K dangers — the kind that protects companies that admit vulnerabilities from being subsequently sued by their investors. Without such protection, it is hard for service providers, hardware companies, and software vendors to engage in “honest dialogue” about what the real dangers are, for fear of legal entanglements.

“At a policy level, let’s get serious about having a dialogue about moving away from the best-effort model,” Payne says. “It’s not about putting everything back together [like the old AT&T], but it’s the beginning of a dialogue that will enable us to avoid an event so serious” that the repercussions might be unimaginable.

In advance of any standards or legal changes, however, there are things that the telecom and computing industries can be doing to mitigate some of the danger, Payne, Kimmins, and Petros Mouchtaris, executive director of information assurance and security, told press and analysts at Telcordia’s New Jersey headquarters last Friday.

Those things include:

    * Greater testing and hardening of hardware and software products before they are released on the market. The industry needs to move away from the attitude of release now, patch later, Kimmins says.

    * Greater discipline in developing and deploying patches when they are needed. There is a lag between when vulnerabilities are discovered and when patches are released, and again between when patches come out and when they are deployed. The bad guys take advantage of those lag times, sometimes even using the information released about vulnerabilities and patches to select their targets.

    * Eliminate marketing hype around standard terms such as “five-nines” and “no single point of failure.” Too many vendors are playing games with those terms, defining them in a limited way to make their gear sound more secure than it is.

    * Give consumers the information they need to protect themselves. Consumer broadband, with its “always on” feature, created an army of botnet computers because consumers weren’t made aware of the dangers and what they needed to do to protect themselves, Kimmins says.

    * Take a more disciplined approach to testing configuration management and correcting configuration mistakes, Mouchtaris says. More than 50 percent of downtime is caused by configuration errors, which occur for many reasons, and cyber criminals exploit those mistakes, he says. Configuration testing needs to be part of a regular disciplined approach to preventing such attacks.
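Mouchtaris's point about disciplined configuration testing can be illustrated with a minimal sketch: diff a device's running configuration against an approved baseline and flag drift before an attacker finds it. The setting names and values below are hypothetical examples, not drawn from any real product.

```python
# Minimal configuration-drift check: compare a running config against an
# approved baseline and report missing, unexpected, or changed settings.
# Keys and values here are hypothetical.

def config_drift(baseline: dict, running: dict) -> dict:
    drift = {"missing": [], "unexpected": [], "changed": []}
    for key, expected in baseline.items():
        if key not in running:
            drift["missing"].append(key)              # baseline setting absent
        elif running[key] != expected:
            drift["changed"].append((key, expected, running[key]))
    drift["unexpected"] = [k for k in running if k not in baseline]
    return drift

baseline = {"telnet": "disabled", "snmp_community": "private-rw", "ntp": "10.0.0.1"}
running  = {"telnet": "enabled",  "ntp": "10.0.0.1", "http_admin": "enabled"}

report = config_drift(baseline, running)
print(report)  # telnet changed, snmp_community missing, http_admin unexpected
```

Run regularly as part of the disciplined approach Mouchtaris describes, a check like this turns configuration mistakes, the cause of more than half of downtime by his estimate, into findings rather than incidents.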

Telcordia has a horse in this race, providing consulting and expertise as well as tools to test configuration management, among other things. But Kimmins says he believes the company is well positioned to be a trusted partner because it isn’t using security as a way to sell more routers, software upgrades, or firewalls.

— Carol Wilson, Chief Editor, Events, Light Reading

Article source:
14  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:29:48 pm

May 18, 2007 • Volume 5 • Number 5


Digital Pearl Harbor

Will IPv6 make us more secure? Experts give their opinions.

“It seems like in the past, with all new technologies come new vulnerabilities,” said Jim Flyzik during the Federal Executive Forum on IPv6.

“Often times new technologies hit the market and then we are catching up later trying to get the security fixes in place because the so called ‘bad guys’ out there find ways to exploit new technologies. There are some concerns today about a digital Pearl Harbor or a terrorist attack taking down networks, attacking networks.”

The question is: will IPv6 improve security? Federal Executive Forum panelists weighed in on the issue.

Command Information’s Tom Patterson put the issue in perspective this way.

“Keep in mind that the Internet we use today, and we just call it the Internet; we don’t know what version number it is and no one really cares. It was designed in the ’70s, and the concept was that you had to be a trusted person before you were allowed to connect, at a university or a research division or something like that. The concept of the general person coming along and connecting to the Internet wasn’t part of the design.”

That means all the security now in place has been “added on.” According to Patterson, there actually is a really good security standard now, called IPsec. The problem is that not enough people use it. The banks use it for very high volume transactions; maybe the State Department will use it for a top secret cable. But for rank and file people, it’s not being used to protect their credit cards or safeguard their privacy, and it can be.

“So when IP version 6 came out and started to be thought of as the next generation, the idea was to leave everything old still working, but see what we needed to fix,” explained Patterson.

“One of the first things we fixed was to take what we know how to do really well, that is IPsec, the best security that we know how to do, and make it default to the on position instead of the off position. So you don’t have to be a rocket scientist to use good security now.”

However, that’s not the whole story, and it’s certainly no security silver bullet. “It’s like when Wi-Fi came out. A lot of CIOs said, we don’t have a Wi-Fi problem because we don’t allow it. And then there were all these chalk marks outside their buildings saying, this is where you get free Wi-Fi access, because people were just putting it in, because it’s easy. The same is possible now with IPv6, but you can’t just ignore it and outlaw it in your organization, because it’s built into Apple, it’s built into Windows XP, it’s built into half the cell phones you are buying today. And some people are going to turn it on.”

So you need to address the security implications. If you address security on a proactive basis, it absolutely changes, and it changes for the better.

According to Commerce’s John McManus, there’s a lot of work going on looking at security in the IPv6 world. “There are a lot of groups looking at security in the services that we provide today. And I think Tom made a critical point: those risks exist today. When you go back and look at when IPv4 was designed, you see it has matured; security has been bolted onto IPv4. In IPv6 we’ve had the opportunity to actually design it in.”

When you employ a new technology there usually is a period of increased risk. And that risk comes from the simple fact that no matter what testing you do in the lab, and I think we do test very thoroughly, when you hit the wild, you hit some situations that you have not tested for.

“So one of the key things that we are doing now is working together as a community, there’s a working group that’s a part of the IPv6 working group, we are doing outreach into the DOD, outreach into all the carriers and equipment providers to start testing that equipment in a live environment on test networks so that when we go live we are sure that we are achieving at least the level of security, if not better, than we have in the networks we have today.”

“I just wanted to add that when you think of security, regardless of the Internet protocol, you think of confidentiality,” says Education’s Peter Tseronis. “You think of integrity and authentication. And IPv6 isn’t going to be the panacea that takes care of your misconfigured server, your poorly designed application, your poorly protected Internet sites. You need to have the skills to implement and maintain it.”

Tseronis knows that not everything will be smooth: some Internet and systems engineers out there are ready now, but others are running for the hills saying we don’t need to go there.

“But at the end of the day, you still have to maintain your security in such a way that, whether it’s IP stack or some other method, you are still going to have to protect it. So it’s not that it’s more secure, it just isn’t going to be less secure. You still have to maintain those policies in your network.”

Security is also on the mind of Cisco’s David West.

“A move to anything new, any new capability, produces threats and risk.  But if you do proper planning, validation, testing, a phased implementation of how you are going to introduce something new, you minimize those risks,” says West.

“One of the things we are trying to make sure of is that as they make this transition and integrate this new service, they do it in a well-thought-out way. What’s more interesting, I think, in terms of security, is the new application services that will be enabled as a result of the protocol.”

“We’ve now got a very large address space where many devices can have addresses. That introduces a potential security risk, but again, with proper planning and with consideration of what needs to happen from the vendor community in testing and validation, you can minimize those risks and really start to take advantage of what the protocol offers.”
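The scale West refers to is easy to quantify: IPv6 addresses are 128 bits wide versus IPv4's 32, so the address space grows by a factor of 2^96. A quick arithmetic check:

```python
# IPv6 vs IPv4 address-space sizes (pure arithmetic, no assumptions).
ipv4_addresses = 2 ** 32     # 4,294,967,296 (about 4.3 billion)
ipv6_addresses = 2 ** 128    # about 3.4 x 10^38

growth_factor = ipv6_addresses // ipv4_addresses
print(growth_factor == 2 ** 96)  # True
print(ipv4_addresses)            # 4294967296
```

This is why the panelists treat IPv6 scanning and inventory as a different problem from IPv4: exhaustively sweeping even one standard /64 subnet (2^64 addresses) is infeasible, which helps defenders and attackers alike hide endpoints.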

At GSA, according to Fred Schobert, “We fully realize that with IPv6 there’s a lot of promise with security, but there’s a lot of work that remains to be done to implement it with the agencies. When we talk with the agencies about IPv6 we talk about things like IPsec, but you are also talking about encryption, and if you think about it, the security standards need to be defined; they need to be precise. The information security tools that the agencies will use need to be developed, and they need to be there.”

Schobert thinks they need an overall network monitoring and management facility to monitor the network, but that FISMA guidance needs to be considered, because right now everything has to go through certification and accreditation, and if there are any holes they won’t be able to do anything. Finally, he thinks they need to look at what is needed in the application area to best support IPv6 and what applications are required.

“We do take security very, very seriously,” said Charlie Wisecarver, State Department CIO.

“I think IPv6 is going to introduce some new security concerns, but ultimately we will be better off as we become smarter about this and adjust our policies and procedures. The possibility of denial of service is always a very, very serious concern for us, as so much of our work is done through the Internet. I think this can all be mitigated through monitoring tools. As for intrusion detection systems, we haven’t heard too much about the types of tools that will help us identify intrusion sets and mitigate them quickly.”
15  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:29:24 pm

Countering the art of information warfare
Published on October 15, 2007 by Peter Brookes

Now is the time to take heed of Chinese intrusions into government computer systems, urges Peter Brookes

While France, Germany, the UK and the US do not see eye to eye on everything, there is one thing they probably can agree on: the growing problem of Beijing's intrusions into their government computer systems.

Indeed, in the last few weeks, all four capitals have pointed an accusatory finger at Beijing for attempting to infiltrate - or having succeeded in penetrating - their diplomatic or defence establishment computer networks.

While snooping by the People's Liberation Army's (PLA) cyber-soldiers on unclassified government websites and e-mail might be expected, the recent rash of incidents shines a spotlight on a burgeoning game of Internet cat and mouse.

In the case of China, Beijing's increasing aggressiveness (indeed, ham-handedness) and capability to infiltrate the computer networks of key countries is setting off alarms across the security establishment - and rightfully so. Take the US: while modern warfare is increasingly dependent on advanced computers, no country's armed forces are more reliant in the Digital Age than those of the US. This is both a great strength and a damning weakness.

Today, the US Department of Defense uses more than 5 million computers on 100,000 networks at 1,500 sites in 65 countries worldwide. Not surprisingly, potential adversaries have taken note of the US's slavish dependence on bits and bytes.

In an average year, the Pentagon suffers upwards of 80,000 attempted computer network attacks, including some that have reduced the US military's operational capabilities.

Also, in the last few years, the US Army's elite 101st and 82nd Airborne Divisions and 4th Infantry Division have been "hacked".

While it is difficult to determine the source, according to the Pentagon, most attacks on the US digital Achilles' heel originate in China, making Beijing's information warfare (IW) operations an issue we had better pay close attention to.

IW, including network attacks, exploitation and defence, is not a new national security challenge. Cyberwarfare was the rage in the late 1990s, but has faded since 9/11 in comparison to the mammoth matters of Islamic terrorism, Iraq and Afghanistan.

IW appeals to both state and non-state actors, including terrorists, because it is low-cost, can be highly effective and can provide plausible deniability of responsibility due to the ability to route strikes through any number of surrogate servers along the way.

An IW attack can launch degrading viruses, crash networks, corrupt data, collect intelligence and spread misinformation, effectively interfering with command, control, communications, intelligence, navigation, logistics and operations.

Not surprisingly, rising power China is serious about cyberwarfare, making the development of a robust IW capability a top national-security priority. China's military planners recognise US - and others' - dependence on computers as a significant vulnerability.

The PLA has invested heavily in developing its cyberwarfare capabilities, including openly expressing a desire to develop information warfare expertise - and boasting of its growing sophistication in the field.

The PLA has incorporated cyberwarfare tactics into military exercises and created schools that specialise in IW. It is also hiring top computer-science graduates to develop its cyberwarfare capabilities and, literally, creating an 'army of hackers'.

Despite its unprecedented military buildup, the Chinese realise, for the moment, they still cannot win a conventional war against the US and are, naturally, seeking unorthodox - or asymmetric - ways to defeat the US in a conflict over Taiwan or elsewhere.

China is developing weapons, including the so-called 'assassin's mace' that will allow China to balance the US's military superiority by attacking 'soft spots' such as its high-value computer networks.

The idea that a less-capable foe can take on a militarily superior opponent also aligns with the thoughts of the ancient Chinese general, Sun Tzu. In his Art of War, he advocates stealth, deception and indirect attack to overcome a stronger opponent. Overlaying the still-influential Sun Tzu onto modern Chinese military thought could lead one to conclude the PLA believes a Chinese 'David' could, in fact, slay a US 'Goliath' using an asymmetrical military option such as cyberwarfare.

The PLA's US target list is expansive, including command, control, communications, computers and intelligence nodes, airbases and even aircraft carrier strike groups - China's bête noir in a Taiwan contingency.

Industrial espionage against government and private defence research, development and production concerns is also a priority for Chinese cyber-spies, cutting costs and time in support of Beijing's massive effort to develop a world-class defence industry.

Even more troubling, however, is the assertion among analysts that potential Chinese cyber-strikes are not limiting themselves to just diplomatic and security-related targets. Private-sector financial and economic institutions may also be on the PLA's hit list.

Nor is China limiting itself to the US, France, Germany and the UK. Beijing is looking for cyber-dominance over other key potential regional rivals such as Delhi, Moscow, Seoul, Tokyo and Taipei. Wellington also recently reported an incident.

China's IW efforts and activities provide a cautionary tale to US and other policymakers. Fortunately, many governments have devoted significant resources to cyber-security, including measures against terrorists and amateur hackers.

The recent Chinese intrusions, however, clearly demonstrate remaining vulnerabilities and IW is here and now, making it increasingly important - and complementary - to the broad spectrum of modern warfare.

A 'digital Pearl Harbor' for any country is by no means a certainty, but then again, no one believed that terrorists would fly aircraft into buildings. The time to take heed of the cyber threat - Chinese or otherwise - is now.

Peter Brookes is a Heritage Foundation senior fellow and former US deputy assistant secretary of defense.

First appeared in Jane's Defence Weekly
16  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:29:03 pm

A Cyber Pearl Harbor Day

Today we remember the tragic events that occurred back in 1941 at Pearl Harbor.  This was the threshold level event that drew the United States into World War II.

As we remember the tragedy of that day and the toll it took on the United States, we must remind ourselves to be vigilant and never let a repeat of that event take place. Last week the question of an electronic Pearl Harbor was asked over and over again. Is it possible? The answer is yes. Is it probable? That is where the debate comes in.

There are a number of groups that would like nothing more than to bring the United States to its knees.  There are certainly vulnerabilities that could be exploited in the nation’s critical infrastructure that could cause substantial disruption of critical services. 

For nearly two decades now, cyber warfare capabilities have been recognized as a strategic power, and many believe this power is on par with weapons of mass destruction. Many governments around the world have awoken to the strategic value of cyber weapons and have integrated cyber capabilities into their military doctrine and plans. Equally concerning is the pursuit of these weapons by terrorists. Last week Northrop Grumman announced the formation of a Cyber Security Research Consortium to help secure the nation’s critical infrastructure and to counter the growing threats from cyber attacks.

As former Director of National Intelligence Mike McConnell put it: “We will not get focused on this problem until we have some catastrophic event.” While there is movement, the bottom line is that an electronic Pearl Harbor might be what happens before an appropriate level of action is taken.

– Kevin Coleman

Read more:

17  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:28:37 pm
ZDNet / Video
Will there be a digital Pearl Harbor?
On April 23, 2009
4min 51 sec


Will there be one major catastrophe, or just smaller disasters? Panelists discuss what security issues we should be watching out for, where the threat might come from, and the difficulties in predicting the unpredictable. Panelists include: Whitfield Diffie, vice president and chief security officer for Sun Microsystems; Ronald Rivest, Viterbi Professor of Electrical Engineering and Computer Science at MIT; Adi Shamir, professor of computer science at the Weizmann Institute of Science in Israel; and Bruce Schneier, chief security technology officer for BT Counterpane. Moderating the panel is Ari Juels, chief scientist and director of RSA Laboratories.
18  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:28:16 pm
4-20-2010 WASHINGTON D.C.—Central Intelligence Agency director Leon Panetta told 300 Sacramento Metro Chamber Cap-to-Cap delegates that the next “Pearl Harbor” is likely to be an attack on the United States’ power, financial, military and other Internet systems.

Panetta addressed the Sacramento delegation that includes 43 elected officials and hundreds of business and civic leaders who are in Washington D.C. for the annual program that advocates for the region’s most pressing policy issues. He spoke on Monday, April 19, during the Cap-to-Cap opening breakfast.

“Cyber terrorism” is a new area of concern for the CIA, Panetta said. The United States faces thousands of cyber attacks daily on its Internet networks, originating in Russia, China, Iran, and even from individual hackers.

“The next Pearl Harbor is likely to be a cyber attack going after our grid…and that can literally cripple this country,” Panetta said. “This is a whole new area of threat.”

Read rest of article here
19  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:27:51 pm

I Was Wrong: There Probably Will Be an Electronic Pearl Harbor
Ira Winkler says the emerging smart grid makes doomsayers' unlikely predictions more likely
By Ira Winkler
November 29, 2009 — CSO —

For 15 years now, I have been publicly lambasting all of those people who have made their careers, or at least made fleeting news headlines, by declaring an imminent Electronic Pearl Harbor. My disdain is based on several factors, but predominantly the lack of accountability for such statements. One industry analyst, for example, stated that there would be such an event by the end of 2003. Six years later, I didn't see anyone revisit the utter lack of such an event.

However, I now see things developing to the point where there can be a strategic attack on computer infrastructures. The key word is Strategic.

Another major issue I have with the people who stake their fame on information warfare is their apparent lack of understanding of military and geopolitical issues. Specifically, strategy implies long-term impacts, generally of at least 3-6 months. Tactical attacks have short-term impacts. Yes, we have had many tactical attacks against different infrastructures. However, comparing these attacks to Pearl Harbor is insulting.

Pearl Harbor was a preemptive strike against the US Pacific Fleet. It significantly degraded US naval capability for several years. If the aircraft carriers had been in Pearl Harbor as the Japanese expected, it could have been a complete knockout blow. So the question becomes: what can make a computer attack strategic?

Over the last 15 years, it has become apparent that not only is the electrical grid extremely vulnerable, but its vulnerability is being exponentially increased. At this point, the vulnerabilities in the power grid are well documented. I have highlighted the many points where control networks overlap business networks. The GAO published a report a month later highlighting this problem at the Tennessee Valley Authority. The Wall Street Journal reported that Russian and Chinese intelligence agencies have already planted malware in the power grid. Then there was the Idaho National Lab Aurora video, which demonstrated that a generator SCADA system can be remotely hacked to blow up the generator. Then there was the recent 60 Minutes piece.

I have to admit that even with all of the above, I wasn't convinced that there could be a true strategic attack. You can probably blow up a few generators, but the power grid itself is resilient enough to withstand the effects. Another issue is that while Russia and China could potentially coordinate a much more devastating attack, they do not have the motivation to cause such damage. And while terrorists and some other parties might want to try, it is unlikely that they have the coordination and resources to accomplish a truly strategic attack.

However, the smart grid changes all of that. Researchers from IOActive demonstrated that smart grid boxes can be hacked and can spread worms. Not only that, the boxes themselves will be connected to every home and available to anyone; anyone therefore has access to the smart grid. With tens of millions of the boxes planned for distribution throughout the United States, potential attackers can easily get their hands on the systems to tear apart and find new vulnerabilities and attacks. More important, when a vulnerability is found, how will it be mitigated?

There is a perfect storm brewing in which the skills and resources required to launch a significant attack are being drastically lowered. Depending upon the effects of a possible worm on the smart grid boxes, and the vulnerability of the generators, a combined attack could have strategic impact.

Again, I am not legitimizing the doomsday criers who have been doing this for decades. However, I have come to realize that there is gross negligence in how the power grid has been maintained, and how it is evolving. While I will not cry wolf and say it is imminent, I sadly realize that an Electronic Pearl Harbor is now very possible.
20  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:27:27 pm

Is a 'digital Pearl Harbor' in our future?

    * By William Jackson
    * Dec 04, 2009

Dec. 7 is the anniversary of the Japanese attack against Pearl Harbor that crippled the U.S. Pacific fleet and brought this country into World War II. What have we learned in the 68 years since that world-changing day?

The threat in our age is less to ships and aircraft than to the technology that controls so many aspects of our lives. Many observers have warned that our defenses are not adequate to protect our nation’s critical infrastructure, and the phrase Electronic or Digital Pearl Harbor has been commonly used to describe a surprise cyber attack that could cripple our military and commercial capabilities. Dire as these warnings are, we should take them with a grain of salt.

Although cyber threats are real, the chances of a Digital Pearl Harbor remain small. This is due not so much to the success of our cyber defenses, which in many places remain inadequate, but to the realities of warfare and networking. Blowing a fleet out of the water is not easy, but taking down a network, really taking it down to the point where it is gone for good, is even harder.

There are those who disagree. Ira Winkler, a former employee of the National Security Agency and now a consultant and writer, for years scoffed at the idea and called comparisons of digital attacks to Pearl Harbor “insulting.” But in a recent blog posting, tellingly titled “I Was Wrong: There Probably Will Be an Electronic Pearl Harbor,” he has changed his opinion somewhat.

What changed, he writes, is the smart grid. By creating a vulnerable, ubiquitous infrastructure that is tied in with our national power grid, we have greatly increased the potential for a strategic attack doing long-term damage, he said. “While I will not cry wolf and say it is imminent, I sadly realize that an Electronic Pearl Harbor is now very possible.”

But doing systematic, long-term damage to a network is much harder than compromising a vulnerability. And even if such damage were possible, what would be the point?

The Japanese were able to severely damage the U.S. Pacific Fleet at Pearl Harbor because so many resources were vulnerable at one time and place, and could be put out of action with one blow. But even then, our aircraft carriers escaped and, as it turned out, came to be the dominant military factor in the Pacific war.

Networks are even more complex than a fleet. Being able to exploit a vulnerability does not mean being able to exploit all vulnerabilities, or every instance of the same vulnerability. And even if networks are interconnected, they are not a homogeneous whole. If network administrators have difficulty managing their own networks because they are too large, flexible and changeable to accurately inventory and map, imagine the difficulty for a malicious outsider in bringing one down.

Of course, elements of it can be interfered with, damaged or even destroyed. But networks are typically too fragmented and redundant to stand or fall as one. Our networks have never been reliable enough to depend upon completely, so they are full of backups, workarounds and overrides that ensure that much of the work gets done even when the parts fail.

And it is important to remember that Pearl Harbor was not an end in itself. Japan gained little or nothing from destroying the fleet in Hawaii. The value of the attack was in the Imperial Navy’s ability to follow it up with attacks in Guam, the Philippines and other locations that enabled them to take and hold strategic military positions.

What good would it do for an attacker to take down vital U.S. networks? While the damage to this country could be great, the benefit to an attacker would be nil if it could not be followed up. The real threat of cyber warfare is not in stand-alone attacks, but in attacks coordinated with military action. At this point, there are very few parties out there with both the ability and inclination to take on the United States militarily, whether our networks are up or down. Terrorists could score points with a devastating cyber attack, of course, but without the ability to follow it up militarily, it would not rise to the level of a Pearl Harbor.

This is not to say that cyber attacks are not a serious concern, that our systems are not vulnerable, or that we do not need to pay attention to the growing threats posed by cyber intrusion. But we should address the issues realistically and understand the scope of the problem.
21  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:27:05 pm

To Forestall a 'Digital Pearl Harbor,' U.S. Looks to System Separate From Internet
11-17-2001 Yahoo! News

WASHINGTON, Nov. 16 -- The Bush administration is considering the creation of a secure new government communications network separate from the Internet that would be less vulnerable to attack and efforts to disrupt critical federal activities.

The idea for such a system, called GovNet, is the brainchild of Richard A. Clarke, a counterterrorism expert whom President Bush recently named his special adviser for cyberspace security.

Mr. Clarke, who has been warning for some time of the possibility of a "digital Pearl Harbor" if the nation does not invest more in cybersecurity, began working on the idea of a government network before the terrorist attacks of Sept. 11. But he says the attacks showed that it is imperative to imagine the ways terrorists could disrupt the nation's information infrastructure and the computer networks that control telecommunications, the electric grid, water supplies and air traffic.

"Prior to 9/11," he said in an interview, "there were a lot of people who thought that the only thing the terrorists could do is what they have already done. Now we know they can do something really catastrophic."

"The worst case here," he said of a cyberspace attack against the government, "is that we might not be able to communicate for essential government services. And it might happen at a time when we're at war. It might happen at a time when we're responding to terrorism."

Mr. Clarke said a critical question for the administration would be how much a government computer network would cost. No one is quite sure of that sum, although he speculated that it could be in the hundreds of millions of dollars.

Read rest of article by clicking here
22  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:26:30 pm

Digital Pearl Harbor
Schedule information
Event    Digital Pearl Harbor
When    Wednesday, December 8, 2010 from 10:00am to 12:45pm
Where    Copley Hall Copley Formal Lounge
Ticket/RSVP    This event requires a ticket or RSVP
Event details
Details    A day after commemorating Pearl Harbor Day, several of the world’s leading cyber and national security experts will discuss the threat of a “Digital Pearl Harbor.” These pre-eminent thought leaders will address the likelihood that a “Digital Pearl Harbor,” long warned about, could actually happen, as well as where our critical infrastructure and federal networks are most vulnerable. Topics including cyber espionage and the ongoing theft of financial, intellectual, and national security data will also be discussed. Is cyber war a real threat or all hype? Confirmed panelists are:

Richard Clarke, author of Cyber War and chairman of Good Harbor Consulting
General Michael Hayden, former director of NSA and CIA
Jeffrey Carr, author of Inside Cyber Warfare

Please contact the Institute for Law, Science, and Global Security at for more information.
Access    » This event is limited to Georgetown University students, faculty and staff.
Sponsors    Institute for Law, Science, and Global Security
Calendar    Institute for Law, Science and Global Security
23  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:26:09 pm
Hyping the future false flag cyber attack.  This particular piece calls for an EMP to be launched, and guess which nation will be blamed for it???
24  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:25:32 pm
The following is an excerpt from this UNIDIR report:

Conclusion: where do we go from here?
While the potential of a "digital 9/11" is not great in the near future, the Internet has come of age
since 2001. Both terrorism and the Internet are significant global phenomena, reflecting and shaping
various aspects of world politics. Due to its global reach and rich multilingual context, the Internet
has the potential to influence in manifold ways many different types of political and social relations.
Unlike the traditional mass media, the Internet's open architecture means that efforts by governments
to regulate Internet activities are restricted, and this has provided users with immense freedom and
space to shape the Internet in their own likeness. Included within this cohort are terrorists who
increasingly employ new media to pursue their goals. The terrorists of today, like those of yesteryear,
are keen to exploit the traditional mass media while also recognizing the value of more direct
communication channels.
As far back as 1982, Alex Schmid and Janny De Graaf conceded that:
If terrorists want to send a message, they should be offered the opportunity to do so without
them having to bomb and kill. Words are cheaper than lives. The public will not be instilled
with terror if they see a terrorist speak; they are afraid if they see his victims and not himself
[…] If the terrorists believe that they have a case, they will be eager to present it to the
public. Democratic societies should not be afraid of this.19
Not everybody is in agreement with this position, however. Over time, both state and non-state
actors have endeavoured to curb the availability of terrorism-related materials online with varying
degrees of success. Authoritarian governments have met with some success by deploying technologies
that constrain their citizens' ability to access certain sites. There are fewer options for restriction
available to democratic governments, however, and although recently more restrictive legislation has
been promulgated in a number of jurisdictions, it is not yet clear that it will be any more successful
than previous attempts at controlling, for example, cyber-hate. In terms of terrorist web sites and
their removal, private initiatives instituted by a range of substate actors in conjunction with ISPs have
been much more successful. But the activities of individual hacktivists raise a number of important
issues relating to limits on speech and who can and should institute these limits. The capacity of
private political and economic actors to bypass the democratic process and to have materials they find
politically objectionable erased from the Internet is a matter for concern. Such endeavours may, in
fact, cause us to think again about legislation, not just in terms of putting controls in place—perhaps,
for example, outlawing the posting and dissemination of beheading videos—but also writing into law
more robust protections for radical political speech.
25  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:24:42 pm

Strategic Information Warfare
By Robert K. Hiltbrand
Originally published Spring 1999

Before I begin this discussion, I must add this disclaimer. The research information I have gathered for this
paper comes from open sources. It is my personal belief that the United States, and in particular the Department of
Defense and its related civilian agencies, have some tremendous capabilities that the American public won’t ever find
out about until a national or international crisis arises. I have attempted to present, through the available
unclassified sources, what our national strategic information warfare capabilities are and what some perceived
weaknesses in our national information infrastructure might be. This is just a general discussion of the facts.
The Internet, as we know it today, started out as a program for the Department of Defense in 1969. Back
then it was called ARPANET and one of its goals was to link up the computer systems of several universities and
colleges that were doing research for the United States Military.
"Almost 30 years after the US Defense Department created the Internet as a communications system
invulnerable even to a nuclear attack, the global web of computer networks is itself now viewed as a national security
risk by the Pentagon and other military security chiefs." (1) Cyberspace soldiers have a finger on the mouse,
Business Times, Technology Section, November 2, 1997.
The concept of guarding the national infrastructure -- especially its critical components -- against attack is
also referred to as cyberwar and in a broader context, as strategic information warfare. (2) Strategic Information
Warfare: A New Face of War, Roger C. Molander, Andrew S. Riddile, Peter A. Wilson, 1996 RAND Corporation.
As a result of the rapid growth in information technology, the Department of Defense, like the rest of
government and the private sector, has become extremely dependent on automated information systems. To
communicate and exchange unclassified information, the Department of Defense relies extensively on a host of
commercial carriers and common user networks. This network environment offers the Department of Defense
tremendous opportunities for streamlining operations and improving efficiency, but also greatly increases the risks
of unauthorized access to information. (3) Report to Congressional Requesters, May 1996, Information Security -
Computer Attacks At Department Of Defense Pose Increasing Risks, Government Accounting Office/AIMD-96-84,
Defense Information Security (511336).
Several federal civilian and national defense agencies estimate that more than 120 countries around the
world have established computer attack capabilities. In addition to this fact, most countries are believed to be
planning some degree of information warfare as part of their nation's overall strategy. (4) Cyberspace soldiers have
a finger on the mouse, Business Times, Technology Section, November 2, 1997.
This means that the United States, in order to maintain its present position as a World leader, must maintain
its own Information Warfare strategy.
Strategic Information Warfare is the deliberate sabotage (electronically) of a nation-state's national
information infrastructure. This could take the form of crashing the financial markets of a nation. Or it could also be
the deliberate shutting down of the power grid in the capital city of an adversary. The worst case scenario could be
the infiltration of an enemy's military computer networks with the intent to destroy those very systems and thus
prevent those military forces from deploying to the field of battle.

Why is Information Warfare important to the United States? Because we live in a society where computer
networks are all around us. Power grids are controlled by complex computer networks. Financial transactions are
increasingly conducted electronically. The Internet and email are everywhere. Air traffic controllers use computers
to sort out the traffic in the skies. Anywhere there is a computer, there is the potential for someone to
electronically tamper with the information on it, as well as the hardware and equipment it controls.
During the American Revolution and the American Civil War, there was armed conflict in the Continental United
States. During World War Two, with the attack against Pearl Harbor, more armed conflict was brought to American
soil, but not to the Continental United States. But with the advent of Information Warfare, damage can be done
directly to the Continental United States without an adversary ever having to physically be near the North American
continent. There are no clearly drawn front lines. Anyone and everyone can be affected.
The following quote comes from an American defense analyst, "Another characteristic of information attacks
stems from the loss of sanctuary. Attacks of this sort, particularly when they consist of more than an isolated
incident, create a perception of vulnerability, loss of control, and loss of confidence in the ability of the state to
provide protection. Thus, the impact can far exceed the actual damage that has occurred. This non-linear
relationship between actual damage and societal damage makes the problem of digital war a particularly challenging
one because it creates a mismatch between rational defensive responses and their effectiveness." (5) Defensive
Information Warfare by Dr. David S. Alberts.
As the United States enters the Twenty-first Century with the intention of being a World Leader, we, as a
society, must defend our national information infrastructure against attack. Successful attacks against it will have
severe, and as yet unknown, economic, political, and societal consequences because of America's heavy reliance upon
computer networks. We will discuss America’s vulnerability later.
Now that we know what Information Warfare is, the next logical question to ask is, "Who can wage it?" Well,
the answer is -- anyone. Individual "hackers," terrorist organizations with political, economic, or military objectives,
or nation-states that would not be able to go head-to-head with a traditional military power such as the United States
might be more successful on the cyber battlefield. However, the organizations and nation-states still need the
services of the individual hackers to accomplish their goals. We will focus on the hackers because they are the key
personnel in offensive Information Warfare. The word "hacker" has many definitions. Webster's New World College
Dictionary defines a hacker as a talented amateur of computers, specifically one who attempts to gain unauthorized
access to files in various systems. The New Hacker's Dictionary defines a hacker as a person who enjoys exploring
the details of programmable systems and how to stretch their capabilities.
A 1996 federal government report about Pentagon computer security states, "Today the term (hackers)
generally refers to unauthorized individuals who attempt to penetrate information systems; browse, steal, or modify
data; deny access or service to others; or cause damage or harm in some other way." (6) Report to Congressional
Requesters, May 1996, Information Security - Computer Attacks At Department Of Defense Pose Increasing Risks,
Government Accounting Office/AIMD-96-84, Defense Information Security (511336).
Is a hacker some 14-year-old kid from a Chicago suburb who electronically breaks into his high school's
network to change his Home Economics grade from a "D" to an "A"? How about a group of Russian hackers in St.
Petersburg who steal $12 million (US) electronically from a Citibank computer located in New York
City? (7) Cable News Network news story, March 25, 1999.
George Tenet, Director of the Central Intelligence Agency, had two statements about hackers and their
effectiveness in today’s globally linked society, "A group calling themselves the Internet Black Tigers took
responsibility for attacks last August (1997) on the e-mail systems of Sri Lankan diplomatic posts around the world,
including those in the United States." (8) Unclassified Testimony of George J. Tenet, Director of Central Intelligence,
delivered to the Senate Committee on Governmental Affairs, June 24, 1998.
"Italian sympathizers of the Mexican Zapatista rebels crashed web pages belonging to Mexican financial
institutions." (9) id.
Some of the tools used by individual hackers include -
! Logic bombs - this is unauthorized code that creates havoc when a particular event takes place;
! Virus - code fragment that reproduces by attaching to another program. It can damage hardware and/or
software directly, or it can degrade those systems by co-opting resources;
! Trojan horse - independent program that when activated performs unauthorized function (under the guise of
doing normal work). Think of it as a nasty little program within a larger normal program.
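The common thread in these tools is code that hides inside a system and runs when it is not expected to. As a harmless Python sketch of just the logic-bomb idea (all names here are invented for illustration, and the "payload" merely returns a string rather than doing anything destructive):

```python
from datetime import date

def logic_bomb(payload, trigger_date, today):
    """Illustrative only: a logic bomb is ordinary-looking code that
    withholds its payload until a trigger condition is met."""
    if today >= trigger_date:
        return payload()   # trigger condition met: the hidden code runs
    return None            # otherwise it lies dormant, indistinguishable from normal code

# Harmless demonstration: the "payload" just returns a string.
fired = logic_bomb(lambda: "payload fired", date(1999, 1, 1), date(1999, 3, 1))
dormant = logic_bomb(lambda: "payload fired", date(1999, 1, 1), date(1998, 6, 1))
print(fired, dormant)  # prints: payload fired None
```

The point of the sketch is why such code is hard to detect: until the trigger fires, it behaves exactly like the legitimate program it is hiding in.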
There are two types of Information Warfare attack modes - structured and unstructured.
An example of an unstructured threat: in March 1997, a 15-year-old Croatian youth hacked his
way into the networks at a United States Air Force base in Guam. When questioned about it, the boy said he just
wanted to prove he could do it. (10) Bracing for guerrilla warfare in cyberspace, Cable News Network Interactive, by John
Christensen, web posted April 6, 1999 @ 1829 Greenwich Mean Time (GMT).
A structured threat would be undertaken by parties that possess intelligence support, proper funding, and
are part of their organization or nation-state's long-term strategic goals. (11) Statement of Lieutenant General
Kenneth Minihan, United States Air Force and Director of the National Security Agency, to the Senate Governmental
Affairs Committee hearing on Vulnerabilities of the national Information Infrastructure, June 24, 1998. An example of
a structured attack would be shutting down the power grid of a city just before it is bombed.
Another statement from General Kenneth Minihan of the NSA, "The Chinese present a good example of the
structured threat. In 1995 the Chinese military openly acknowledged that attacks against financial systems could be a
useful asymmetrical weapon." (12) Statement of Lieutenant General Kenneth Minihan, United States Air Force and
Director of the National Security Agency, to the Senate Governmental Affairs Committee hearing on Vulnerabilities of
the national Information Infrastructure, June 24, 1998.
What America must do is find a way to distinguish the difference between structured and unstructured
attacks against the national information infrastructure (both the civilian and military portions of it). Our society must
establish what the normal "noise" level is for it. (13) The Cyber-Posture of the National Information Infrastructure by
Willis H. Ware, March 9, 1997, RAND Corporation (MR-976-OSTP).
What parts of the national information infrastructure are vulnerable to attack? Another statement from
General Minihan of the NSA gives a brief overview,
“The resources at risk include not only information stored in or traversing cyberspace, but all of the
components of our national infrastructure that depend on information technology ....... these include the
telecommunications infrastructure itself; our banking and financial systems; the North American power grid; other
energy systems, such as oil and gas pipelines; our transportation networks; water distribution systems; medical and
health care systems; emergency services, such as police, fire, and rescue; government operations, and military
operations.” (14) Statement of Lieutenant General Kenneth Minihan, United States Air Force and Director of the
National Security Agency, to the Senate Governmental Affairs Committee hearing on Vulnerabilities of the national
Information Infrastructure, June 24, 1998.
Let us look at two of the more important components of our national infrastructure - the power grid and
communication systems.
The North American power grid, which spans Mexico, Canada, and the United States of
America, is a very large and complex system. All of its administrative functions are handled via a vast computer
network. What happens if an adversary, with the proper personnel, tools, and know-how, attacks the power grid
network and shuts it down? Literally millions of people will be left without something that most of us take for granted
- electricity!
The telecommunications infrastructure is the other component. The public switched network (i.e., the
national telephone system) is a singular point of concern because it provides the bulk of connectivity among
computer systems, people, organizations, and functional entities. It is the backbone of interpersonal and organization
behavior. (15) The Cyber-Posture of the National Information Infrastructure by Willis H. Ware, March 9, 1997, RAND
Corporation (MR-976-OSTP).
The communications infrastructure is particularly vulnerable because it is used for both military and civilian
voice, video, and data communications. These systems are controlled by the companies (such as AT&T, MCIWorldCom,
Sprint) who own the fiber optics and trunk cable systems that transmit the information. Potential adversaries could
hack their way into the AT&T mainframes and gain control of its systems. Once in control, these adversaries could
redirect, stop, or disable the systems from operating effectively. This would cause a great strain on American
society. Imagine not being able to call your family in another state. Or how about not being able to withdraw money
from an ATM because its communication lines with your bank have been disrupted. Or what about communication
satellites that are directed to stop transmitting signals. How are military leaders in the field supposed to
communicate with their headquarters without secure satellite communications?
“The Defense Information Infrastructure consists of communications networks, computers, software,
databases, applications, and other capabilities that meet the information processing, storage, and communications
needs of Defense users in peace and wartime.” (16) Report to Congressional Requesters, May 1996, Information
Security - Computer Attacks At Department Of Defense Pose Increasing Risks, Government Accounting Office/AIMD-
96-84, Defense Information Security (511336).
There are effective ways that America can protect its national information infrastructure. Some of these
measures include --
! All components of the Defense Department's infrastructure must be brought up to the same level. This means
hardware, software, and personnel. In-fighting between the different service branches needs to give way to cooperation
and resource sharing;
! Policies need to be established setting minimum standards and requirements for key security activities; and
! There must also be clearly assigned responsibility and accountability for ensuring that these minimum standards
are achieved.
Some of the tools that can be used to safeguard the national information infrastructure include –
! Firewalls are hardware equipment and software applications that protect system resources from hackers. A
firewall monitors all incoming traffic and attempts to block all unauthorized intrusions;
! Encryption is the transformation of original data into ciphered (altered) data. Only those who have a key to the
encryption program can un-encrypt the data; and
! Authentication can be used for network security to prove that a system user is who he/she is supposed to be and
that he/she has a right to use the system. Some examples could be for each system user to identify
himself/herself with a finger print or retinal scan identification.
There are several civilian agencies and military commands that are responsible for protecting the national
information infrastructure.
The following are some of the known agencies –
! In 1988, the Department of Defense established the Computer Emergency Response Team (C.E.R.T.). It is based
at Carnegie-Mellon University.
! In December 1992, the United States military initiated its formal Defensive Information Warfare program.
Specifics for the program are, as yet, unavailable.
! In its December 1995 Defensive Information Warfare Management Plan, the Pentagon defined a three-pronged
approach to protect against, detect, and react to threats to the Defense Information Infrastructure. Again,
specifics are unavailable.
! In 1996, the Air Force established the Air Force Information Warfare Center (I.W.C.). That same year, the Navy
established its Fleet Information Warfare Center (F.I.W.C.) and the Army established its Land Information
Warfare Activity (L.I.W.A.). The main focus of each of these Commands is to conduct Offensive Information
Warfare and to protect the Defense Information Infrastructure against attacks.
! In December 1998, the Pentagon established the Joint Task Force for Computer Network Defense. This task force
is supposed to be an effort among all of the different service branches to share resources.
! The Defense Information Systems Agency (allegedly chartered in the mid-90’s) has established a Global Control
Center. The Center is staffed by the Automated Systems Security Incident Support Team (A.S.S.I.S.T.) to provide
a centrally coordinated around-the-clock Department of Defense emergency response team to attacks on United
States military computer systems. Because of the nature of the global information network, A.S.S.I.S.T. can
support United States military installations located around the world.
! The National Security Agency is a government agency which is heavily involved in all aspects of Information
Warfare. They employ lots of “code-breakers” and have their own stable of hackers.
Offensive Information Warfare is now being integrated into battle plans along with conventional strategies
such as bombing an adversary. The Air Force’s I.W.C., the Navy’s F.I.W.C., and the Army’s L.I.W.A. are all alleged to be
the military commands that will be conducting the Offensive Information Warfare campaigns of the future. The
following statement from United States Senator John Glenn sheds some light on America’s Offensive Information
Warfare capabilities -
"We are rapidly getting to the point where we could conduct warfare by dumping the economic affairs of a
nation via computer networks." (17) Cyberspace soldiers have a finger on the mouse, Business Times, Technology
Section, November 2, 1997.
Again, please remember that much of U.S. military’s capabilities are unknown to the general public, so I can
only outline, in general terms, what they do.

American society is moving more towards full integration of the national information infrastructure. You will
pay your bills, perform bank transfers, and make dinner, hotel, or airline reservations from your home PC or a public
information terminal. America’s national information infrastructure will become a major component of the larger
global information network. This System will become more open as more and more people conduct their affairs online.
But also measures will be taken for system users to identify themselves (retinal scan or fingerprint, if not a DNA
sample). Hackers, as always, will find simple and effective ways around these enhanced security measures. They
might do things like piggy-back their own programs on the connections that legitimate system users are generating.
Another important aspect of the future of cyber space will be the evolution of the hackers. They will evolve into cyber
mercenaries. They will advertise their services on the global information network. These hackers will be hired by
governments, organizations, and individuals. As for future warfare, instead of threatening another nation-state with
nuclear war (physical destruction) governments will threaten to destroy the national information infrastructure of a
potential adversary (of course this won’t work on organizations or individuals). However, this will be a double-edged
sword because as the global information network becomes truly global, a disruption in one node of the System could
have unknown consequences throughout the rest of the Network. This would be called "cyber collateral damage."
There will be a "digital" Pearl Harbor. And there are multiple countries, organizations, and individuals that have the
technical know how to devise and conduct such an attack. The United States Department of Defense will continue to
develop its own Defense Information Infrastructure. This will be made as secure as possible as the military becomes
more and more dependent on the free (and secure) flow of data to enable it to meet its commitments around the world.

Rob Hiltbrand

About the author
26  The War Room / Cyber False Flags / Re: The "Digital Pearl Harbor" on: December 10, 2010, 12:23:59 pm

In July 2002, Gartner and the U.S. Naval War College hosted a three-day, seminar-style war game called "Digital Pearl Harbor" (DPH). Gartner analysts and national security strategists gathered in Newport, Rhode Island, with business and IT leaders from enterprises that control parts of the national critical infrastructure. Our objective was to develop a scenario for a coordinated, cross-industry cyberterrorism event.

Results of a post-game survey indicate that the DPH game experience had a profound impact on the participants: 79 percent of the gamers said that a strategic cyberattack is likely within the next two years.

DPH participants played the roles of terrorists, devising coordinated attacks against four national critical infrastructure areas: the electrical power grid, financial service systems, telecommunications and the Internet. Their goal was to determine if a cyberattack could create a crisis of confidence that would shift the strategic balance of power, at least temporarily. Since the game did not test defenses against cyberterrorism, the questions of whether a real attack would achieve the goals set in the game and how much economic damage it would cause are still open.

The question as to whether cyberterrorism is a realistic threat is resolved. DPH skeptics abound, of course, and level many criticisms, but two criticisms stand out.

The first criticism is that by engaging in this type of exercise, we are opening Pandora's box, showing those with malicious intent what could be done. Good point, but before we started, we ran this issue by national security officials, and as one of those officials succinctly put it: "The bad guys already have the knowledge of these systems, and they know what they are going to do." The purpose of the DPH game was to get inside the opponents' heads. All of the data and information created in the DPH game underwent a national security review before we published our analyses.

The second criticism is that there are no new lessons to be learned from the DPH game. Good point, and really a very daunting criticism. Yet, how often do we hear from these same critics: "If only enterprises (or users) would follow good IT security practices ..." But good practices are very difficult to follow. How many readers have ever installed a new operating system or application on their home PC, only to spend the next several days trying to get the PC to work again? Multiply that experience by thousands when you are talking about enterprises installing new applications, security patches and system connections on hundreds or thousands of servers, mainframes and PCs. Preventing such downtime requires deliberate, linear steps that take time, people and money. DPH-type exercises help identify the threats, improve risk management processes and, in turn, prioritize resources for IT security activities. As one military commander put it: "We must shoot the closest wolf first."

Nevertheless, the skeptics have history on their side (as do all Luddites at the dawn of a new era) — there has never been a cyberterrorism event. Or has there? Electrical power grid failures in some parts of the world, such as Western India, are so common that tampering with the grid to test cyberattacks could go unnoticed. This path leads to conspiracy theory oblivion, which is one of the reasons we ran the DPH game: to determine what is really possible with a cyberattack.

Even skeptics of a DPH-type attack must acknowledge that our enterprises are under small-scale cyberattacks every day; hence, we are confident most readers will find our analyses of the DPH war game at least somewhat useful and very interesting.

Featured Research

'Digital Pearl Harbor' War Game Explores 'Cyberterrorism' By French Caldwell, Richard Hunter and John Bace

Security Best Practices Will Do Most to Foil Cyberterrorists By Paul Schmitz, John Mazur and Rich Mogull

Cyberterror Poses Growing Threat to Financial Services By John Bace, Annemarie Earley, Vincent Oliva and David Furlonger

Utilities Should Upgrade the Security of Their Operations By John Dubiel, Kristian Steenstrup and Paul Pechersky

Prepare for Cyberattacks on the Power Grid By John Dubiel, Kristian Steenstrup and Paul Pechersky

Telecom Is Secure but Not a Cause for Complacence By David Fraley and Ron Cowles

Could Terrorists Bring Down the Public Switched Telephone Network? By David Fraley and Ron Cowles

Terrorists Could Hijack the Internet By Ron Cowles and John Mazur

Recommended Reading and Related Research

Force Vendors to Make Software More Secure By Arabella Hallawell and Rich Mogull

Cyberattacks and Cyberterrorism: What Private Business Must Know By Rich Mogull and Richard Hunter

Dealing With Cyberterrorism: A Primer for Financial Services By David Furlonger
27  The War Room / Cyber False Flags / The "Digital Pearl Harbor" on: December 10, 2010, 12:22:53 pm
The following thread is a compilation of white papers, presentations, etc. where the globalist think tanks are warning us about the possibility of a "Digital Pearl Harbor".

Please read this document in its entirety:

Cybersecurity and Critical Infrastructure Protection
James A. Lewis
Center for Strategic and International Studies, January 2006

Cybersecurity entails the safeguarding of computer networks and the information they
contain from penetration and from malicious damage or disruption. Since the use of computer
networks has become a major element in governmental and business activities, tampering with
these networks can have serious consequences for agencies, firms and individuals. The question
is to what degree these individual-level consequences translate into risk for critical infrastructure.
Analyses of asymmetric, unconventional attacks at first assumed that potential opponents
would be drawn to the use of cyber weapons. These opponents could include conventional
nation-state opponents and “non-state actors.” Cyber weapons were considered attractive for
asymmetric attacks because they could offer low-cost means of exploiting the potentially
damaging vulnerabilities that are found in most computer networks. Some analysts go further
and argue that a cyber weapon could create destruction equal to a kinetic or blast weapon, or
could amplify the effects of an attack with these kinds of weapons.

The term “Digital Pearl Harbor” appeared in the mid 1990s, coinciding with the
commercialization of the internet. Digital Pearl Harbor scenarios predicted a world where
hackers would plunge cities into blackness, open floodgates, poison water supplies, and cause
airplanes to crash into each other. But no cyber attack—and there have been tens of thousands of
cyber attacks in the last ten years—has produced these results. The dire predictions arose from a
lack of insight into the operations of complex systems, from an overestimation of both the
interconnectedness of critical infrastructures and the power and utility of software as a weapon to
be used against them.

Determining the actual degree of risk posed by computer network vulnerabilities requires
an estimate of the probability that a computer malfunction will damage a critical infrastructure in
ways that will affect the national interest. For this to occur, a number of simultaneous or
sequential events must take place to let a digital attack in cyberspace have a physical effect. This
is not a simple transformation. Computer networks are indeed vulnerable, but this does not mean
that the critical infrastructures these networks support are equally vulnerable. Terrorists are
attracted to different kinds of weapons, particularly explosives, which are more reliable and
which better meet their political and psychological need for violence. Infrastructures are robust
and resilient, capable of absorbing damage without interrupting operations and accustomed to
doing so after natural disasters, floods, or other extreme weather conditions. In short, the cyber
threat to critical infrastructure has been overstated, particularly in the context of terrorism.1
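Lewis's chain-of-events argument can be made concrete with simple arithmetic: if a physical effect requires several independent steps to all succeed, the overall probability is the product of the per-step probabilities, which shrinks quickly. This is an illustrative sketch only; the step names and probabilities below are invented, not drawn from the paper.

```python
# Illustrative sketch: probability that a cyber attack produces a physical
# effect when several independent steps must all succeed. The per-step
# probabilities are invented for illustration, not real estimates.
steps = {
    "penetrate the network": 0.5,
    "reach the control system": 0.3,
    "defeat safety interlocks": 0.2,
    "cause physical damage": 0.4,
}

overall = 1.0
for step, p in steps.items():
    overall *= p

# Even generous per-step odds compound to a small overall probability:
# 0.5 * 0.3 * 0.2 * 0.4 = 0.012
print(f"overall probability: {overall:.3f}")
```

The point of the sketch is only that "computer networks are vulnerable" does not translate directly into "critical infrastructure is at risk": each additional required step multiplies the odds down.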
This initial overstatement does not mean, however, that we should ignore cybersecurity in
planning for critical infrastructure protection. First, as the use of computer networks grows,
vulnerabilities will increase. Second, a more sophisticated opponent will not use network attacks
in an attempt to cause physical damage or terror, but instead target the information stored within
computer networks. Nation-states are likely to be attracted to this approach: penetrate networks,
collect information and observe activities without arousing suspicion and, should a conflict
begin, use that access to disrupt databases and networks that support key activities. This is a
different kind of threat from what much of the planning and organization for critical
infrastructure protection at first had in mind, and addressing it may require a reorientation of our
thinking and our actions on cybersecurity. This chapter discusses reasons and goals for

Political Context for Cybersecurity and Critical Infrastructure Protection
There is now a general recognition that cybersecurity was overemphasized in the initial Federal
efforts at critical infrastructure protection. Cybersecurity was, at the end of the 1990s, the
dominant theme in policy documents and public discussions of critical infrastructure protection.2
The overemphasis was the result of several factors. Critical infrastructure came of age in
the era when the Internet seemed to have upended all rules. The mentality of the dotcom era
underlay many of the assumptions on the scope and linkages of critical infrastructure and
cybersecurity. The newness of critical infrastructure protection as an area for security analysis—
the U.S. had not contemplated attacks on infrastructure (other than by strategic nuclear weapons)
for decades—introduced a degree of imprecision into early analyses. The heightened concern
over Y2K, when IT experts warned that ancient programming errors associated with
the millennial change would make computers around the world go haywire at the stroke of
midnight on New Year's and plunge the globe into chaos, helped focus attention on cyber
networks as a new and dangerous vulnerability.

Analyses of critical infrastructure protection were also shaped (and continue to be
shaped) by a change in American political culture. Evidence for this change is as yet diffuse and
anecdotal, but American government has become progressively more risk-averse since the
1970s. The reasons for this include a loss of confidence among governing elites, decreased
public trust of government (with concomitant increases in accountability and oversight
requirements) and a more partisan and punitive political environment. The consequences of a
more risk-averse political culture are far reaching and have yet to fully play out for the United
States, but an exaggerated aversion to risk affects the discussion of strategies for critical
infrastructure protection (even if the actual implementation of those strategies is at times lax
enough to appear to welcome risk with open arms).

This set of political changes is important for understanding critical infrastructure
protection and cybersecurity’s place in it. Planning for critical infrastructure protection involves
an assessment of risk (the probability that a damaging attack can be made). A risk-averse
individual will estimate the probability of a damaging attack as higher than a more neutral
approach might suggest. This overestimation of risk has been a standard element of discussions
of cybersecurity.

Assessing Risk
Determining the importance of cybersecurity for critical infrastructure protection must begin
with an estimate of risk. This has proven to be difficult to do, for some of the reasons suggested
above. A neutral approach to estimating risk would look at the record of previous attacks to gain
an understanding of their causes and consequences. It would estimate the likelihood of a
potential attacker selecting a target and which weapon or kind of weapon an attacker would be
likely to use against it (and this involves an understanding of the attackers’ motives, preferences,
strategic rationale, goals, capabilities, and experience). It would match attacker goals and
capabilities against potential infrastructure vulnerabilities, in effect duplicating the analysis and
planning process of potential attackers, as they identify targets and estimate the likelihood of
success in achieving their goals with an attack using a particular weapon and tactics.
The importance of cybersecurity revolves around how we define risk and how much risk
a government or society is willing to accept. Homeland Security Policy Directive 7 (HSPD 7),
which lays out federal priorities for critical infrastructure protection, begins by noting that it is
impossible for the U.S. to eliminate all risk and calls on the Secretary of Homeland Security to
give priority to efforts that would reduce risk in “critical infrastructure and key resources that
could be exploited to cause catastrophic health effects or mass casualties comparable to those
from the use of a weapon of mass destruction.”3 For the purposes of this article, the definition of
risk used to assess the need for cybersecurity will be the probability of an outcome that (a)
causes death and injuries, (b) affects the economic performance of the United States and (c)
reduces U.S. military capabilities.

Using these criteria, there have been no successful cyber attacks against critical
infrastructure (much less attacks that produced terror among the population). Even if we use a
minimal definition of risk, that an attack results in a disruption in the provision of critical
services that harms the national economy and rises above the level of annoyance, there still have
been no successful cyber attacks on critical infrastructure.4
An even more rigorous approach would limit risk to outcomes that affect the
macroeconomic performance of the United States and reduce U.S. military capabilities. Every
society has the ability to absorb a certain amount of death and destruction without serious
consequence. 2005 saw Hurricane Katrina lay waste to much of the Gulf cost and cost perhaps
2,000 lives (initial and hysterical claims by local officials that Katrina would close New Orleans
for years and cost 10,000 deaths were very wrong). Despite the damage and suffering, there was
only a small blip in GDP (economists suggest that U.S. economic growth would have reached
4% instead of 3.8% if not for Katrina), and there was no degradation of military capabilities.
One way to estimate the effect of a cyber attack is to ask whether a foreign power, using
cyber weapons, could stop U.S. military forces from deploying. How, for example, could China
prevent a carrier battle group in San Diego or Hawaii from heading for the Taiwan Straits using
cyber weapons? Interfering with the telecommunications systems might slow the recall of crew
members on leave (if China was able to successfully disrupt the multiple cellular networks in
addition to the fixed telecom network and email). Interfering with the traffic signals could make
it more difficult for the crews to assemble, as could interfering with the electrical grid, which
could also complicate and slow preparing the ships for departure. Hackers could take over
broadcast radio and TV stations, to play Chinese propaganda or to change broadcast parameters
in the hopes of creating radio interference.

Yet this is a poor start to securing naval victory. If China or another opponent were able
to turn off telecommunications, electricity and the traffic light system, it would have little effect
on the ability of the carriers to deploy. Further, this sort of attack creates the risk for nation-states
(as opposed to non-state actors) of exacerbating tensions or widening conflict in exchange
for very little benefit.
The counterargument to these neutral approaches is that they ignore the political effects
of a successful attack. The most important of these political effects are the damage to a
government’s credibility and influence, and the risk of an overreaction by security forces that
does more damage than the attack itself.5 Some scenarios even contemplate non-state actors
launching a cyber attack with the knowledge that while its actual effect would be feeble, the
overreaction by security forces would be damaging (the history of the Transportation Security
Agency and the air passenger business, where large costs to consumers and tax payers are traded
for a modest reduction in risk, demonstrates this effect). While the “self-inflicting strategy” may
not appeal to violence-prone attackers like Al-Qaeda or other jihadi groups, it is one scenario
where the subtle use of cyber attack by a nation-state could trigger long-term economic damage.
But the political consequences of an attack, cyber or otherwise, can be hard to predict.
We know that in many instances, the effect of an attack is to actually harden resistance and
increase support for an incumbent government. Even unpopular governments will benefit.6
Political leaders who put forth the right message of steadfast resolve in the face of attacks will
actually improve their standings. While the political investigations that followed September 11
called into question the competence of both the Bush and Clinton Administrations, the
immediate political effect was to generate a wave of support for the incumbent President. This
support can be lost if the response to an attack is seen as ineffective, but if a government puts
forward the right messages, avoids self-inflicted damage, and is seen as making progress in
reducing risks of further attacks, any political harm may very well be limited.

Computer Networks and Critical infrastructures
The United States has identified a long list of industries as critical. They include, according to
the National Infrastructure Protection Plan, food and water systems, agriculture, health systems
and emergency services, information technology and telecommunications, banking and finance,
energy (electrical, nuclear, gas and oil, dams), transportation (air, road, port waterways), the
chemical and defense industries, postal and shipping entities and national monuments and icons.
The nature and operations of most of these infrastructures suggests that cybersecurity is not a
serious problem for them.

An infrastructure is judged to be critical because it meets some standard of importance
for the national interest—in that the goods or services it provides are essential to national
security, economic vitality and way of life. To meet this standard, there is an implicit
assumption that the disruption of the infrastructures would reduce the flow of essential goods or
services and create hardship or impede important government or economic operations. In the
interest of deciding where cybersecurity makes a useful contribution to critical infrastructure
protection, we can refine this standard by introducing two additional concepts—time and location.

Time and location help explain why cybersecurity is not of primary concern for many
critical infrastructures. If there are immediate problems when a system goes off-line, not
problems that emerge after weeks or months, that system is critical. Problems that take longer to
appear allow organizations to identify solutions and organize and marshal resources to respond,
and thus do not present a crisis. The ability of industrial societies to respond to problems, to
innovate and to develop alternative solutions or technologies, suggests that in those
infrastructures where disruption does not produce immediate danger and is not prolonged for
an unreasonable period of time, there would be little effect on national security, economic
vitality or way of life.

There is also a geographic element to criticality. National infrastructures are composed
of many local pieces, not all of which are equally critical. Specific elements of the larger
infrastructure provide critical support to key economic and governmental functions, not entire
networks or industries. It is harsh to say, but Hurricane Katrina in 2005 demonstrated that large
cities or sections of the country can be taken offline and, if the political consequences are
managed, have little effect on national power—either economic or military. Certain high-value
targets—the national capital region, military facilities, a few major cities, or nuclear power
plants—require greater attention across the board, while other places, where disruption or
destruction would not impair key national capabilities, can be assigned a lower priority.
The concerns of cybersecurity can transcend this geographic focus in some instances.
There are a few, very few networks that are national in scope and interconnect thousands of
entities in ways that make them mutually dependent. However, these networks—finance,
telecommunications, electrical power—are among the most critical for national security and
economic health, and their interconnectedness, national scope and criticality may make them
more attractive targets for cyber attack.8 Fedwire, the financial settlement system operated by the
Federal Reserve Banks, provides a crucial service to banks. Interfering with Fedwire would
cripple (temporarily) the U.S. banking system. The Federal Reserve has expended considerable
effort to harden Fedwire, and the Fed's desire to prevent online bank robbery provides an
incentive to continue these efforts.

The U.S. electrical system is composed of several thousand public and private utilities
organized into ten large regional grids. There is a substantial degree of interconnection within
these grids and computer networks play an important role in managing grid operation and the
production of electrical power. The grids themselves suffer from the consequences of
underinvestment and deregulation. Newer industrial control systems use commercial computer
operating systems and IP protocols as they are cheaper and easier to use. However, the new
technologies replace older control systems that used specialized proprietary software and
dedicated networks that were difficult for hackers to access and exploit. The move to
commercial software and IP increases vulnerability.

Vulnerability is not the same as risk, however, and a number of factors limit the increase
in risk created by this transition to “off-the-shelf” control systems. There have been thousands of
hacking incidents aimed at power companies, but as of yet, none have produced a blackout.9 In
the larger national context, blackouts are common in the U.S. and often do not even attract
national attention. In 2002, an ice storm blacked out the 20th largest city in the U.S. with a
population of 600,000 for several days. The event had no effect on economic or military power
and barely merited attention in the national press. Power companies cooperate to respond
quickly to these events. Many critical facilities have installed backup power generation
equipment. A localized blackout outside of a few major cities can be of minor importance to the
nation—witness the recent Los Angeles blackout. The real risk may lie in interconnection, and
the ability of an attacker to access one vulnerable producer and cascade this attack into a
blackout of one of the big regional grids, but an attack that succeeds in blacking out a single
facility might only be seen as an annoyance.

Telecommunications services are another national-level network. The telecom backbone
that supports the internet and voice communications is comprised of a number of large networks.
An attack that disrupted the services provided by several of these large networks could disrupt
communications traffic. However, the presence of multiple overlapping connections means that
there is no single point of failure. The use of satellites in communications services also
introduces a degree of redundancy. Since the 1970s, telecommunications networks have been
hardened to allow for some continuity of service even after a strategic nuclear exchange.
Additionally, telephone companies developed and use packet switching technology
(which breaks messages into many small “packets” of data that can be sent separately) to allow
voice communications to persist without a continuous end-to-end connection. The internet relies
on packet switching and benefits from the robustness provided by this technology. The internet
itself was designed to automatically route around damage to complete transmissions.
Communications may be slowed or disrupted, but there is no single point to attack that would
easily allow the national telecommunication system to be disabled.
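The packet-switching idea described above can be sketched in a few lines: a message is broken into small packets that travel independently (possibly arriving out of order) and are reassembled at the destination by sequence number. This is a toy illustration of the concept, not a model of any real telephone or internet protocol.

```python
# Toy illustration of packet switching: split a message into small packets,
# let them arrive out of order, and reassemble them by sequence number.
def to_packets(message, size=4):
    """Break a message into (sequence_number, data) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message regardless of arrival order."""
    return "".join(data for _, data in sorted(packets))

msg = "packets can take different routes"
pkts = to_packets(msg)
pkts.reverse()  # simulate out-of-order arrival over different routes
assert reassemble(pkts) == msg
print(f"{len(pkts)} packets reassembled correctly")
```

Because each packet carries its own sequence information, no continuous end-to-end connection is needed, which is exactly the property that lets the network route around damaged links.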

Before deregulation and the breakup of the national monopoly, the U.S. telecom network
was built (with Federal guidance) to provide survivability and redundancy in the event of attack,
accident or system failure. After deregulation, when telecom companies were less able to make
investments solely to meet the requirements of national security, a highly competitive
environment and rapid technological development became the source of a high level of
redundancy. In contrast to an attack that destroyed facilities, a cyber attack would (a) require
sustained, successful re-attack to overcome network operators’ repair efforts and (b) would have
to disable multiple communications systems (wireless, fixed line, internet) to degrade service.

The complexity of successfully carrying out a cyber attack against national
infrastructures like telecommunications or the electrical grid, combined with a lower probability
of success than a physical attack, may make it unattractive to terrorists. Terrorists want
screaming people to run in terror past mangled bodies in the street—an attack that only produces
a busy signal is likely to be dissatisfying. In theory, the idea of a cyber attack against
telecommunications systems in coordination with a physical attack is attractive, as it could
compound damage and terror, but coordinating two simultaneous attacks adds a degree of
complexity that may overwhelm a terrorist cell’s planning capabilities while increasing the
chances of detection.

The same constraints do not apply to a nation-state attacker. Such an attacker would have
the resources for coordinated attacks. Surreptitious economic warfare during peacetime may be
attractive, but an opponent would want to weigh the benefits of an attack that produced a longterm
drag on the target’s economy against the risk and damage of discovery. In the event of a
conflict, however, a nation-state opponent is likely to use cyber weapons to attempt to disrupt
these large U.S. national networks.

The Internet as a Critical Infrastructure
Some point to the Internet as a single large infrastructure that could be attacked with cyber
weapons.10 The first point to bear in mind, however, is that it is a shared global network. An
attack against it will affect both target and attacker. An attacker may calculate that the U.S.
might suffer more as a result, or it could plan to use some alternative or backup system to replace
the internet while the target struggled to respond, giving it a temporary advantage.
The internet is very robust. It is a network designed to continue to function after a
strategic nuclear exchange between the U.S. and the Soviet Union. Its design and architecture
emphasize survivability. The internet (building on earlier technological improvements created
by packet switching in telecommunications) could deal with disruption by automatically
rerouting to ensure that a message would arrive despite the complete destruction of key nodes
from the network. The internet addressing system, which is critical to the operations of the
system, is multilayered, decentralized, and can continue to operate (albeit with slow degradation
of service) even if updating the routing tables that provide the addressing function is interrupted
for several days. Some of the core protocols upon which the internet depends appear vulnerable
to attack. BGP (Border Gateway Protocol) is responsible for routing traffic and a number of
tests suggest that BGP is vulnerable to attack, but an attacker faces the immense redundancy
contained in a network comprised of tens of thousands of subsidiary networks.
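The routing-around-damage property can be illustrated with a small graph: knock out a node, and a breadth-first search still finds a path between the surviving endpoints. The topology and node names below are invented for illustration.

```python
from collections import deque

# Tiny mesh network: multiple paths exist between A and D, so no single
# intermediate node is a point of failure. Topology is illustrative only.
links = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def find_path(graph, start, goal, dead=frozenset()):
    """Breadth-first search that routes around failed ('dead') nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

print(find_path(links, "A", "D"))              # a path via B or C
print(find_path(links, "A", "D", dead={"B"}))  # routes around the failure
```

With node B destroyed, the search simply returns the path through C; only destroying every intermediate node severs the connection, which is the redundancy argument in miniature.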
There has been at least one effort to attack the Internet. In October 2002, unknown parties
launched a Distributed Denial of Service attack against the 13 "root servers" that
govern Internet addresses. The attacks forced eight of the thirteen servers off-line. The attack on
the DNS system did not noticeably degrade Internet performance and went unnoticed by most of
the public, but had it been continued for a longer period (and if the perpetrators remained
undetected) there could have been a significant slowdown in traffic. An attack on the
Internet's DNS system, if successful, would slowly degrade that system's ability to route traffic,
but this would take several days to have any effect. Since the 2002 attack, the DNS system
has been strengthened by dispersing the root servers to different locations
and by using new software and routing techniques. The new redundancy makes shutting down
the DNS system a difficult task for an attacker.
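The resilience seen in the 2002 incident (eight of thirteen servers down, yet no noticeable degradation) follows from a simple property: a resolver only needs one of many servers to answer. The sketch below illustrates just that property; real DNS resolution also involves anycast and caching, which are not modeled here.

```python
import random

# Toy sketch of root-server redundancy: a resolver needs only one of the
# 13 logical servers to respond. Names and behavior are illustrative only.
SERVERS = [f"root-{c}" for c in "ABCDEFGHIJKLM"]  # 13 logical root servers

def resolve(offline):
    """Return the first server that responds, or None if all are down."""
    for server in SERVERS:
        if server not in offline:
            return server
    return None

# Force 8 of the 13 servers off-line, as in the October 2002 attack.
offline = set(random.sample(SERVERS, 8))
assert resolve(offline) is not None  # resolution still succeeds
print(f"answered by {resolve(offline)} with {len(offline)} servers down")
```

Only an attack that kept all thirteen (now widely dispersed) servers down simultaneously for days would meaningfully degrade addressing, which is why the post-2002 dispersal raised the bar so sharply.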
The difficulty of estimating the actual cost of a cyber attack adds complexity to planning
for critical infrastructure protection. Estimates of damage from cyber attacks at times reflect the
heritage of the boom in cybersecurity—they generally overestimate or exaggerate
damage. Damages are estimated by taking a sample of costs to various users and then
extrapolating them to the affected user population. In some cases, the sample of costs is itself an
estimate. These estimates of the economic damages of cyber attack show considerable variation
in the value they ascribe to cyber incidents. There is also considerable variation in their
methodologies, which are often not made public. Few if any of these efforts use the sampling
techniques derived from statistical analysis that could ensure greater reliability. Statements that
cybersecurity is crucial because of the risk of economic losses that could total in the millions,
hundreds of millions or billions of dollars should not be accepted at face value.
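The extrapolation problem described above is easy to see with numbers: if the cost sample over-represents hard-hit organizations, scaling it to the whole affected population multiplies the bias. All figures below are invented for illustration.

```python
# Invented figures showing how a biased cost sample inflates a damage
# estimate when extrapolated to the whole affected population.
population = 10_000                         # affected organizations
sample_costs = [50_000, 80_000, 120_000]    # survey of three hard-hit firms

naive_estimate = sum(sample_costs) / len(sample_costs) * population
print(f"extrapolated damage: ${naive_estimate:,.0f}")

# If the true average cost across all firms were only $2,000 (most firms
# barely affected), the real total would be $20M -- the naive estimate
# overstates damage by more than 40x.
true_total = 2_000 * population
print(f"overstatement factor: {naive_estimate / true_total:.0f}x")
```

The same sampling flaw appears whether the headline figure is millions or billions, which is why such estimates deserve scrutiny of their methodology before being used to set security priorities.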
It is important to disaggregate the effects of an attack. Analysts often cite the Slammer
worm as a damaging cyber attack, but its effects were, from a national perspective,
inconsequential. One frequently cited example about the damage of Slammer tells how it
affected automatic teller machines (ATM) across the northwest, putting 13,000 of them out of
service. What is important to note, however, is that Slammer affected only one bank and its
ATM network. Other banks were unaffected, and the other major bank in the region did not see
its ATM network go offline at all. In this instance, customers of the first bank were
inconvenienced. The first bank lost revenue and suffered reputational damage. The bank’s
competitors were, in one sense, rewarded for practicing better cybersecurity, as some
transactions that would have been made on the first bank’s ATM network were instead
conducted on their machines.

Another example involves a railroad forced by the ‘Sobig’ virus to suspend operations on
23,000 miles of track—but no other railroad was forced to suspend operations. If a cyber attack
damages one company in a critical sector but leaves its competitors operational, it limits the
overall risk to critical national functions. It is difficult to think of a case where a cyber attack
affecting one firm and not others would pose a risk to security.
We do not want to extrapolate the misfortunes of a single company to an entire sector in
estimating the risk to critical infrastructure from a cyber attack. Similarly, we also want to
disaggregate the estimates of opportunity cost to determine whether it is a single company that
suffers or the entire economy. In this case, opportunity cost refers to the income (or production)
lost when a resource cannot be used, a sale made, or a service provided because of cyber attack.
Most estimates of the cost of the damage from cyber attacks include an estimate of
opportunity cost, and this often makes up a large portion of the estimated damages from an attack.
Opportunity cost can be misleading for security analysis. If one online merchant is
forced offline by a cyber attack, but its competitors remain in operation, customers may choose
to go to the site that works to make their purchase. The vendor forced offline has lost income,
but national income remains unaffected. Other customers may choose to wait and make their
purchase later. Again, national accounts are ultimately unaffected. In other cases, a
manufacturer may see its website or corporate email network go offline but be able to maintain
production—in one case where a virus damaged an auto manufacturer's corporate email systems,
production of cars and trucks was unaffected.12
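The firm-versus-nation distinction can be made concrete with a toy calculation, using invented sales figures and assumed diversion rates rather than data from any actual incident:

```python
# Toy calculation of firm-level loss vs. national-accounts loss when one
# online merchant is knocked offline for a day. All figures are invented.
merchant_a_normal = 100_000  # merchant A's normal daily sales ($)
merchant_b_normal = 100_000  # competing merchant B's normal daily sales ($)

# Assumed customer behavior: 70% buy from B instead, 20% simply buy later.
diverted = merchant_a_normal * 70 // 100
deferred = merchant_a_normal * 20 // 100
lost_for_good = merchant_a_normal - diverted - deferred

merchant_a_attack_day = 0
merchant_b_attack_day = merchant_b_normal + diverted

firm_level_loss = merchant_a_normal - merchant_a_attack_day
national_loss = (merchant_a_normal + merchant_b_normal) - (
    merchant_a_attack_day + merchant_b_attack_day + deferred)

print(f"merchant A's loss:      ${firm_level_loss:,}")  # the full day of sales
print(f"national-accounts loss: ${national_loss:,}")    # only sales lost outright
```

Merchant A bears a painful loss, but under these assumptions national income falls only by the fraction of sales that vanish outright, which is the point of disaggregating opportunity cost.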
This is not to disparage the effects of cybercrime, which can be costly for an individual or
company. However, most cybercrime involves losses in the thousands of dollars (there are
anecdotal reports that a few major banks have experienced much larger losses, but they have not
made these losses public for fear of reputation damage). Cybercrime is prevalent and increasing,
but this does not mean that the risk to critical infrastructure is similarly increasing.
Cybercriminals want money. Their favored tactics include theft of valuable data or extortion
(e.g., the threat to launch denial of service attacks or disrupt networks unless paid). Their first
question will be how to turn a threat to a national infrastructure into financial gain without risk of
arrest. Threatening an attack against multiple firms may either be operationally too difficult or
attract too much attention from law enforcement agencies.

In view of their motives and incentives, attacks against infrastructure by cybercriminals
seem unlikely, but nation-states may adopt the hacking tools and “bot nets” developed by
cybercriminals for use in cyber war efforts. The sophisticated “shareware” available for
cybercrime from hacker or “****” sites can give even inexperienced attackers access to
advanced automated tools and techniques. These range from online hacking manuals and do-it-yourself
virus kits to sophisticated attack tools that require some computer expertise to use.
The most interesting of these tools allow a hacker to surreptitiously place malevolent
programs on a computer without the user’s knowledge. The program can then execute damaging
instructions, transmit data to an external address, or provide increased (and invisible) access and
control to the hacker. This “malware” can infect computers through the opening of malicious e-mail
attachments, downloading seemingly harmless programs, or simply through visiting a
malicious website. Cybercriminals assemble networks of these infected computers for use in
denial of service attacks, for spamming or for advertising and tracking. Using these tools, an
attacker could attempt to disrupt networks and damage or erase data.
However, not all networks are equally vulnerable to the tools of cybercrime. The botnets
mainly infect consumer systems using always-on connections. Damaging these consumer
computers would be annoying, but not threatening to national security or long-term economic
health. Second, cybercrime tools are not aimed at physical infrastructure. Infecting a computer
does not automatically create a risk to an associated infrastructure. This means that while
cybercrime can increase, and it is a growing problem for law enforcement, that does not mean
that the risk to critical infrastructure is also increasing.

One benefit that has come from the attention paid to cybercrime is that the measures that
improve cybersecurity to protect against criminals will also reduce any risk to critical
infrastructure. The use of regular software patching, defensive software (such as intrusion
detection systems and anti-virus and anti-spyware software), better authentication of users and
encryption of sensitive data will make an attacker's job more difficult. Improved law
enforcement capabilities to arrest and prosecute cyber criminals will also reduce the
attractiveness of cyber attacks against critical infrastructure. Companies are more likely to spend
money to protect themselves from criminal attacks, since this offers a direct and immediate
benefit to their bottom line. Defense is a “public good”13 and the private sector routinely
undersupplies public goods. This is a particularly important point given the U.S. dependence on
the private sector for critical-infrastructure protection. A reading of economic incentives
suggests that companies will spend to improve cybersecurity to prevent cybercrime more than
they would for a nebulous threat to homeland security.
The importance of cybersecurity in protecting critical infrastructures other than finance,
electrical power, or telecommunications rests on the assumption that critical infrastructures are
dependent on computer networks for their operations. The chief flaw in this reasoning is that
while computer networks are vulnerable to attack, the critical infrastructures they support are not
equally vulnerable. Early proponents of cyber attack assumed that many public services,
economic activities and security functions were much more dependent on computer networks
than they are in their actual operation. While the dependence on computer networks continues to
grow, many critical functions remain insulated from cyber attack or capable of continuing to
operate even when computer networks are degraded. It may be more accurate to say that critical
infrastructures are dependent on their human operators, whose actions are supported, reinforced
or carried out using computers and networks. This human element reduces the risk of
cyber attack to critical infrastructures.
A well-known example of the difference between computer vulnerability and system
vulnerability again comes from the 2003 release of the Slammer worm. The worm affected
database software. Some police departments in Washington State saw the computers used in
their 911 emergency response systems slow to the point of uselessness as the worm spread and
implemented its instructions. These departments compensated by using paper notes to record
calls, allowing 911 services to continue uninterrupted.14 The computers were vulnerable and
affected, but the critical service was not.
The debate today over how to approach this task is whether cybersecurity should be an
element of a larger critical infrastructure strategy or whether it deserves its own independent
approach. While the first phase of planning for critical infrastructure protection made
cybersecurity of primary importance, the second phase of thinking about critical infrastructure
protection assumed that cybersecurity only made sense as part of a larger strategy focused on
physical protection. Had the 911 centers in Washington State been the subject of physical attack
or damage, they might very well have had to shut down and 911 services would have been
disrupted (as was the case in New Orleans during the post-Katrina flooding). Incidents like this seem to
show that risk comes primarily from physical attack.
While this new approach to critical infrastructure protection has dominated federal
planning for the past few years, it is not universally accepted. There are reasons for this lack of
universal acceptance, some good, some less sensible. The IT industry did not like being
downgraded from the central place it occupied in critical infrastructure protection. A more
cogent argument for a separate approach to cybersecurity involves recognition of the inability of
differing security communities to implement a strategy that unifies cyber and physical security.
A Chief Security Officer in a corporation often thinks in terms of “guns, gates and guards.” The
Chief Information Security Officer thinks in terms of firewalls and software. In most companies,
neither is well placed to execute a unified strategy.
A third approach to critical infrastructure protection might be to recognize that the
importance of cybersecurity varies from infrastructure to infrastructure and with the nature of the
attacker. We should not be surprised that the distribution of vulnerability is not uniform among
or across infrastructures. Cybersecurity is more important for a few networked, interconnected
national infrastructures and less important for many disaggregated infrastructures. Cybersecurity
is less important in planning to defend against terrorist attacks, since these are less likely to use
cyber weapons, but more important in planning for conflict with a nation-state opponent.
Critical infrastructure protection could distinguish among the many places where cybersecurity is
a tertiary source of risk and the few places where it is of central importance. These key facilities
could be “hardened” with a combination of redundancy, contingency plans for responding to
computer disruption, non-networked controls for key functions, and additional monitoring of
computer and network activities.

HSPD-7 asserts “Terrorists seek to destroy, incapacitate, or exploit critical infrastructure and key
resources across the United States.” This assertion is not entirely accurate. While terrorists do
exploit Western infrastructure for transport and communications to obtain a global presence and
capability, it is not clear that they seek to destroy or incapacitate critical infrastructures. Their
strategies do not emphasize economic warfare, but favor a blend of military and psychological
actions that they believe will produce political change. Cyberspace is a valuable tool for
coordination and propaganda for terrorists, but it is not a weapon.15
Nation-states that are potential opponents may see more opportunity in cyberspace.
Intelligence gathering will prompt them to penetrate U.S. computer networks. In the event of a
conflict, nation-states will likely try to use the skills and access gained in intelligence operations
to disrupt crucial information systems. This disruption will also affect critical infrastructures
and, potentially, degrade the services they provide. It remains unclear, however, if even a skilled
opponent can translate the degradation of key infrastructure services into military advantage for a
conflict whose combat phase is likely to be of short duration and to depend more on existing forces.
The best path to better cybersecurity may lie outside of critical infrastructure protection.
It is hard to motivate people to defend when risks are obscure or appear exaggerated. However,
the risks of espionage (including economic espionage) and cybercrime are very real for
individuals, firms and agencies. A security agenda that focused on measures to respond to
cybercrime and espionage would produce tangible benefits, win greater support, and reduce
much vulnerability in computer networks used by critical infrastructure. If an emphasis on
cybercrime and counterespionage is the key to better cybersecurity, this suggests that the core of
the problem lies with law enforcement.

Critical infrastructure protection began by making cybersecurity the cornerstone of
defense. This chapter suggests that in fact, if we calculate the risk from cyber attack for most
infrastructures, it is a tertiary concern. The history of critical infrastructure protection has been
to develop expansive plans to cover a broad list of targets and then, in the effort to protect many
things with few resources, achieve little in terms of risk mitigation. Putting cybersecurity in the
context of more precise assessments of the actual threat could help overcome some of this
difficulty by allowing a federal strategy to focus on the few networks of real concern.

About the Author
James A. Lewis is a Senior Fellow and Director for Technology and Public Policy at the Center
for Strategic and International Studies (CSIS), a research institution in Washington. Before
joining CSIS, Lewis was a career diplomat at the Department of State, and a member of the
Senior Executive Service at the Department of Commerce. He received his Ph.D. from the
University of Chicago in 1984.

1 “Assessing the Risks of Cyber terror, Cyber war and Other Threats” provides the fuller discussion of the likelihood
of cyber terrorism.
2 The Joint Security Commission was the first in a series of commissions to identify cybersecurity as a primary
challenge, saying "The Commission considers the security of information systems and networks to be the major
security challenge of this decade and possibly the next century…” Joint Security Commission, “Redefining
Security: A Report to the Secretary of Defense and the Director of Central Intelligence,” February 28, 1994,
Chapter 1, “Approaching the Next Century”
3 “Homeland Security Presidential Directive/HSPD-7: Critical Infrastructure Identification, Prioritization, and
Protection,” December 17, 2003,
4 A fuller discussion of this claim, and use of the concept of ‘opportunity cost’ in assessing economic harm, follows.
5 This conclusion reflects the results of government-sponsored cyber war games.
6 The first study to confirm this counterintuitive effect was the U.S. Strategic Bombing Survey, but later studies have
found a similar reaction among target populations. U.S. Strategic Bombing Survey, Summary Report (European
War), 1945. See also Stephen T. Hosmer, “Psychological Effects of U.S. Air Operations in Four Wars,” Rand,
7 The earlier PDD-63 (May 1998) identified the task as protecting “the nation's critical infrastructures from
intentional acts that would significantly diminish the abilities of: the Federal Government to perform essential
national security missions; and to ensure the general public health and safety; state and local governments to
maintain order and to deliver minimum essential public services; and the private sector to ensure the orderly
functioning of the economy and the delivery of essential telecommunications, energy, financial and transportation
services.” The Patriot Act and HSPD 7 also provide similar but not identical lists of infrastructures deemed critical.
8 Oil and gas pipelines could be considered a national network, but there are alternative transport modes that could
mitigate an attack. Air traffic control may appear national, but is conducted in discrete segments on a local and
regional basis.
9 “Energy and power companies experienced an average of 1,280 significant attacks each in the last six months,
according to security firm Riptech Inc…. The number of cyber attacks on energy companies increased 77 percent
this year (2002).” CBS News, “Hackers Hit Power Companies,” July 8, 2002,
10 For more on this, please see the chapter by Aaron Mannes in this volume.
11 National Infrastructure Advisory Council, “Prioritizing Cyber Vulnerabilities,” October 2004, Page 5, at
12 The Ford Motor Company received 140,000 contaminated e-mail messages in three hours. It was forced to shut
down its email network. E-mail service within the company was disrupted for almost a week. Ford reported, “the
rogue program appears to have caused only limited permanent damage. None of its 114 factories stopped
production. Computerized engineering blueprints and other technical data were unaffected. Ford was still able to
post information for dealers and auto parts suppliers on Web sites that it uses for that purpose.” Keith Bradsher,
“With Its E-Mail Infected, Ford Scrambled and Caught Up,” The New York Times, May 8, 2000
13 A “public good” provides benefits to an entire society with very little incentive for any one person to pay for it.
14 Wells, R. M., “Dispatchers go low-tech as bug bites computers” Seattle Times, January 27, 2003,
15 See, for example, Office of the Director of National Intelligence, “Letter from al-Zawahiri to al-Zarqawi, October
11, 2005,”
28  Science & Technology / Smart Grid / Smart Power / Chevy Volt tyranny car made by Government Motors is SmartGrid capable on: December 07, 2010, 11:18:19 pm

UT students check out new Chevy Volt
Updated: Dec 03, 2010 6:30 PM

TOLEDO, OH (WTOL) – Folks got a chance to catch a glimpse of the new Chevrolet Volt Friday when an auto industry executive visited The University of Toledo.

OnStar Chief Information Officer Jeffrey Liedel gave a lecture on the Chevrolet Volt and its on-board telematics.

Liedel, a 1988 UT grad, also talked about the mobile application and some of the "smart grid" capabilities.

Additionally, a Volt was on display as part of the free event that was open to the public.

Liedel also spent part of the morning talking with UT College of Engineering students.

Oh boy.  Onstar Enterprise Architecture, combined with smart grid interoperability, and made by Government Motors, bailed out with your taxpayer money!

Chevy Volt is the ultimate NWO enslavement machine.

About Jeff Liedel

CIO — Jeff Liedel is as much a car guy as he is a computer guy. That much becomes clear when he's discussing his 20-year career track and the businesses he's served: Ford (F), Covisint, GM and now OnStar, the in-vehicle communications company and GM subsidiary, where he is CIO.

Liedel deftly moves the conversation among a range of topics: embedded telematics and mobile application capabilities in a Chevy Volt electric vehicle, the energy efficiency of internal-combustion engines, and how BI tools can help OnStar. He seems to be equally at home in a data center or on the floor of the 2010 Consumer Electronics Show, where OnStar announced a mobile app for the iPhone, iPod Touch, Blackberry Storm and Motorola (MOT) Droid that allows drivers to monitor and control the Volt's electrical functions.

His responsibilities cover IT systems that deliver safety, navigation, vehicle diagnostics, telephony and other services to OnStar's 6 million U.S. and Canadian subscribers. Every month, those systems process a wide range of interactions, including: 2,600 automatic crash responses, 10,400 emergency services, 600 stolen vehicle assistance and 62,700 remote door unlocks, according to OnStar data.

Read the rest of his bio here:

Read more about how dangerous and Big Brother-like OnStar is here:,728.0.html
29  Science & Technology / Smart Grid / Smart Power / Re: 2011 National Town Hall Meeting to increase "consumer acceptance" of Smart Grid on: December 07, 2010, 11:05:05 pm
Customer Education is the Key to Consumer Acceptance for Smart Meters, According to New Survey

Korea IT Times [1]
Monday, December 6th, 2010

As the momentum of utility smart grid initiatives continues to increase, questions of consumer acceptance are of paramount importance for the industry. During 2010, the industry discussion of consumer issues has intensified, particularly in the wake of loud consumer pushback related to smart meter deployments in the Pacific Gas & Electric and Oncor service territories. Utilities are seeking effective ways to communicate the benefits of smart meters to their customers, while at the same time addressing consumer concerns about billing, privacy, control, and safety issues.

A new consumer survey from Pike Research [2] finds that consumer familiarity with smart meters is a critical element in fostering positive impressions of these new devices and their benefits. The survey, based on a nationally representative sample of more than 1,000 U.S. adults, finds that among respondents who were "extremely familiar" with smart meters, 67% stated that they had an "extremely" or "very" favorable opinion on the devices. This level of favorability was dramatically higher than the total base of survey participants, in which only 29% provided a favorable rating for smart meters.

"There is a direct correlation between consumer familiarity with smart meters and their favorable views toward the technology," says research director Bob Gohn. "Most consumers in our survey still don't understand what smart meters are all about, and this lack of knowledge is a real barrier to ultimate acceptance. To ensure the success of smart meter deployments and avoid repeating the mistakes of several early rollouts, it is incumbent on utilities to increase their customer outreach and education programs."

Other key findings of the survey are as follows:

    * 56% of survey respondents described themselves as "not very" or "not at all" familiar with smart meters. Among this group, the number of respondents giving favorable ratings was extremely low. Ambivalence to smart meters was the most common response.
    * Increased consumer access to electricity usage information was identified as an important benefit by 52% of respondents, making this the most frequent benefit cited. Improved reliability of electricity service was not far behind, with 46% of consumers identifying this benefit as important to them.
    * The most popular reason for an unfavorable opinion about smart meters, chosen by 59% of respondents, focused on concerns that the devices would increase electricity bills.
    * Smart meters were the least popular of the four consumer smart grid concepts covered by Pike Research's survey, with a 29% favorability/interest rating. Other more popular concepts were home energy management (47%), smart appliances (44%), and demand response services (33%).

30  Science & Technology / Smart Grid / Smart Power / Mitsubishi research project to integrate electric automobiles into SmartGrid on: December 07, 2010, 11:01:42 pm
Japan: Mitsubishi and NEDO in smart grid project

Monday, December 06, 2010,

Mitsubishi Motors, Mitsubishi Corporation and Mitsubishi Electric Corporation (MELCO) have agreed to collaborate on a smart grid system research project by the New Energy and Industrial Technology Development Organization (NEDO). The three Mitsubishi partners have been working on smart grid research since March this year. They applied to NEDO to participate in the project and were accepted on 24 August. Under a subsidy contract, NEDO will contribute two-thirds of total project costs.

The Tokyo Institute of Technology is acting as advisor to the project, which aims to develop smart grid systems that employ electric vehicles and the lithium-ion batteries used to power them.

The project will see photovoltaic systems, electric vehicles, and lithium-ion batteries recycled from electric vehicles installed at Mitsubishi Motors’ electric vehicle factory in Okazaki (Nagoya Plant). The electric vehicles used by the factory workers to commute to and from work will also be involved in the project. The batteries will be recharged using the photovoltaic and grid systems, and a Factory Energy Management System (FEMS) will be developed to optimize energy use and conserve energy at the factory.

The project will also focus on developing an Electric Vehicle Integration System (EIS), which will control charge-discharge amounts and rates to minimise burden on the vehicles, as well as employ the FEMS to use the vehicles as batteries for supplementary energy supply.

Published on Monday, December 06, 2010
31  Science & Technology / Smart Grid / Smart Power / Re: General Electric to monitor what appliances you use at what times. on: December 07, 2010, 10:34:18 pm


Energy Efficient Products
GE Appliances Creates Home Energy Management Business

    * Innovative line of smart energy products to help consumers and utilities manage electricity consumption and costs.
    * GE uniquely positioned to provide energy generation, management and storage solutions to address America's tough energy needs.

Louisville, KY—November 30, 2010—GE Appliances & Lighting plans to be the first major appliance company to provide a whole-home solution for energy management by going beyond the kitchen to provide insight into energy usage in the family room, the basement, the home office and all other rooms of the house. From the GeoSpring™ Hybrid Hot Water Heater, Nucleus™ energy manager, and programmable thermostats, to GE Profile™ Appliances enabled with Brillion™ technology and GE smart meters, GE is developing solutions to help consumers better manage and control their energy use and costs.

Dave McCalpin was recently appointed General Manager of GE's Home Energy Management (HEM) business. The new business will develop and commercialize GE's full line of energy-management solutions that empower consumers to make smarter, more informed energy choices. As utilities roll out smart grid technologies in response to the critical energy challenges we face as a nation, there's never been a better time for GE to launch this business.

GE Energy products, like solar PV, advanced energy storage, thin-film solar, small wind and residential electric car charging stations, in addition to next-generation products being developed at GE's Global Research Center, will also play an integral part in helping both consumers and utilities realize the true benefits of the smart grid.

Increasing energy demand: Global energy consumption is predicted to triple by 2050¹, and the numbers prove just how important home energy management will be in curtailing sky-rocketing demand:

    * Residential housing consumes 37 percent of the electricity produced in the U.S.
    * Appliances, lighting and HVAC represent 82 percent of household energy consumption.

“It makes economic and environmental sense for the world to better utilize the power we already generate rather than create more capacity to meet our escalating peak-power needs,” said McCalpin. “If we can better manage when and how we use power, we can control the demand without compromising people's lifestyles. This is where global smart grid initiatives and GE's new Home Energy Management products come into play.”

Where smart grid meets the home: To help manage the growing demand, and improve the performance of the nation's electrical grid, utilities across the U.S. and the world are implementing smart grid technologies – including smart meters on homes. These new technologies can help improve grid efficiency and reduce electrical demand, particularly during “peak” periods (typically 2-7 p.m.). Reducing this peak demand will help limit the number of new power plants needed.

It's estimated that 40 million smart meters, which allow two-way communication between the utility and the home, will be installed on U.S. homes between now and 2012.² Among other benefits, these smart meters will enable “time-of-use” pricing programs that incent consumers to lower their consumption during “peak-demand” periods.

This consumer-driven demand response reduction could provide the largest reduction in U.S. peak demand, helping avoid consumption equivalent to the generation of 108 coal plants over 10 years.³ Playing a critical role in demand response, HEM devices could actually communicate with smart meters to automatically reduce power consumption of certain devices when the cost and demand for power is highest, helping consumers save without sacrifice.

A year-long study by the U.S. Department of Energy showed that providing real-time pricing information to consumers via the smart meter helped reduce electricity costs 10 percent on average and 15 percent during peak periods.⁴ “Knowing what is consuming electricity, and how much electricity that appliances are consuming, can be very empowering,” states McCalpin. “People will be able to make smarter choices if they have information. The once-a-month electrical bill provides no insight into your usage habits. We intend to change that.”

GE bridges the utility-consumer gap: Consumers today have little more than a monthly utility bill to understand how much power they're using and how much they're spending. Compare this to your credit card statements, which you can check almost hourly online. New utility pricing plans, along with inevitable increases in electricity costs, beg for solutions to provide consumers with the information necessary to better manage energy.

GE's Nucleus™ energy manager with Brillion™ technology was developed to provide near real-time information for more control over household energy costs and consumption. Along with monitoring consumers' whole-home energy usage, Nucleus will give people the ability to remotely adjust smart thermostats and alter the consumption of GE Profile™ Appliances enabled with Brillion™ technology in response to utility price signals.

GE's Brillion Suite of Home Energy Solutions, being developed by GE's HEM business, will include the Nucleus, a programmable thermostat, an energy display, smart phone applications, and GE Profile Appliances. Future hardware and software upgrades will further enable Nucleus to monitor water, natural gas, and renewable energy sources, as well as plug-in electric vehicle charging. For more information about GE's Nucleus, click here.

“Smart energy management products will offer consumers more convenience, choice and control than ever thought possible,” McCalpin said.

Smart appliances at work: Components of GE's Brillion Suite of Home Energy Solutions are already being tested by five utilities in the U.S. Working in conjunction with the utility's smart meters, GE's Brillion-enabled appliances receive signals from the smart meter and are programmed to avoid energy usage during high-cost periods or to operate on a lower wattage setting. The Nucleus also receives the smart meter data and displays current and historical home energy consumption, giving consumers the insights they need to make better energy choices. Smart appliances have the potential to help lower consumers' energy bills, while giving them the control they desire – even the capability to override these settings.

GE's suite of smart appliances includes ENERGY STAR®-qualified refrigerators, dishwashers, clothes washers and dryers, and the new GeoSpring hybrid water heater, as well as ranges and microwaves.

Follow us on Facebook and Twitter or check out our website for more information
Friend GE Appliances on Facebook to view how-to videos, learn about new GE appliances and join in the discussion with other GE appliance owners. Join today and follow @GE_Appliances on Twitter or just locate detailed information about our products at

¹U.S. Army Corp of Engineers, 2005.
²Parks Associates Study referenced on “Bringing the Smart Grid to the Smart Home: It's not all about the Meter.” January 2010.
³Federal Energy Regulatory Commission: The potential of residential demand reduction programs represents approximately a 7% reduction in total US peak demand, or 65 GW over the period 2009-2019. This avoided demand is equivalent to the generation capacity of 108 coal plants over a ten-year period (600 MW typical coal plant).
⁴DOE Pacific Northwest Laboratory, GridWise project. “Department of Energy Putting Power in the Hands of Consumers Through Technology.” January 9, 2008.

About GE Appliances & Lighting
GE Appliances & Lighting spans the globe as an industry leader in major appliances, lighting, systems and services for commercial, industrial and residential use. Technology innovation and the company's ecomagination(SM) initiative enable GE Appliances & Lighting to aggressively bring to market products and solutions that help customers meet pressing environmental challenges. General Electric (NYSE: GE), imagination at work, sells products under the Monogram®, Profile™, GE®, Hotpoint®, Reveal® and Energy Smart® consumer brands, and Tetra®, Vio™ and Immersion® commercial brands. For more information, consumers may visit
32  Science & Technology / Smart Grid / Smart Power / General Electric to monitor what appliances you use at what times. on: December 07, 2010, 10:29:36 pm

GE Forms Home Energy Management Unit

by Pete Danko, December 6th, 2010

When you think of General Electric in the home, you might think of distinct appliances such as ovens, refrigerators and dishwashers. The company is moving aggressively to change that.

In the unfolding era of alternative-energy solutions and a smart grid, GE says it is “going beyond the kitchen” and forming a Home Energy Management (HEM) unit, aiming to tie together a broad range of products and services.

“Knowing what is consuming electricity, and how much electricity appliances are consuming, can be very empowering,” Dave McCalpin, the man GE put in charge of the new unit, said in a press release. “People will be able to make smarter choices if they have information. The once-a-month electrical bill provides no insight into your usage habits. We intend to change that.”

This isn’t a totally surprising move by GE. Back in July, the company announced Nucleus, a smart-meter-connected system for collecting, storing and accessing household electricity use and cost data. But the company appears to believe that Nucleus and companion products like programmable thermostats, energy displays, smart phone applications and GE Profile Appliances are cohesive enough — and potentially lucrative enough — to demand their own business unit.
33  Science & Technology / Smart Grid / Smart Power / Plugging Schools Into the Smart Grid on: December 07, 2010, 10:24:52 pm
Plugging Schools Into the Smart Grid
December 6, 2010 by Christine Hertzog

A school district in Silicon Valley is adding a 1.26 MW photovoltaic (PV) solar installation across several campuses to deliver about 45% of its annual electricity needs. The ground-mounted facilities will be placed as canopies in school parking lots, so the shading provided by the panels can also reduce the air conditioning burden on the cars that would otherwise absorb all that solar radiation. It’s a win/win situation, and a perfect teachable moment showing how the Smart Grid can deliver benefits beyond the usual calculations focused on reductions in electricity use and CO2 emissions. The applications of Smart Grid technologies deliver community-wide benefits. The ability to reduce the energy bill helps this school district invest its limited funds in education instead of operations, benefiting taxpayers and pupils.

We can expand the integration of Smart Grid technologies even further into school districts, and into making money, not just saving money. Smart Grid technologies enable the electrification of transportation, and can leverage the energy storage capabilities of electric vehicle (EV) batteries in smart charging scenarios. Most school bus fleets operate on very predictable schedules with very predictable ranges. If school buses run on electric power, they can charge directly from the school's own solar facilities; because this is a DC (direct current) to DC charge, it avoids the electricity lost in a DC to AC (alternating current) conversion. During the school year, the buses can charge using power generated by the school's solar facilities, or charge at night when local utility rates are lowest. The bus fleets can also drive revenues for school districts in two ways. During the school year, the fleet batteries can provide frequency regulation services for region-wide grid operations. During the summer recess, the fleet batteries can serve as resources that discharge energy during peak demand periods for local utilities, in addition to supplying the electricity from solar panels. That's definitely a win for the taxpayers supporting school districts.
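The DC-to-DC advantage described above is simple arithmetic: skipping the inverter stage keeps more of the array's output in the battery. A back-of-the-envelope sketch in Python, with all efficiency figures assumed purely for illustration:

```python
# Back-of-the-envelope comparison of the two charging paths described
# above. Efficiency figures are illustrative assumptions, not measured
# values for any real PV array, inverter, or bus charger.

SOLAR_OUTPUT_KWH = 100.0      # DC energy from the school's PV array
DC_AC_INVERTER_EFF = 0.95     # DC -> AC inversion loss (assumed)
AC_DC_CHARGER_EFF = 0.92      # AC -> DC rectification loss (assumed)
DC_DC_CONVERTER_EFF = 0.97    # direct DC -> DC conversion (assumed)

# Path 1: panels -> inverter -> AC charger -> battery
via_ac = SOLAR_OUTPUT_KWH * DC_AC_INVERTER_EFF * AC_DC_CHARGER_EFF
# Path 2: panels -> DC/DC converter -> battery
direct_dc = SOLAR_OUTPUT_KWH * DC_DC_CONVERTER_EFF

print(f"DC->AC->DC path delivers {via_ac:.1f} kWh to the bus battery")
print(f"DC->DC path delivers {direct_dc:.1f} kWh to the bus battery")
```

Under these assumed figures the direct path delivers roughly 10 kWh more per 100 kWh generated; the exact gap depends entirely on the real conversion efficiencies.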

The benefits go even further. We have a dire need to develop a workforce to fill positions ranging from R&D in renewable technologies, EVs, and energy storage to installation and maintenance of solar facilities, vehicle charging hardware and software, and energy management solutions. School districts can be living laboratories for students – familiarizing them with state-of-the-art technologies and readying them to pursue careers in distributed generation, renewables, software, hardware, regulatory policy, and economics. The educational possibilities extend from building awareness of energy use to creating engaging social-media-based applications that encourage energy-efficient behaviors, as well as technologies that minimize electricity use.

What is required to make this dream scenario a reality? Enlightened regulatory, legislative, and voter actions at the local and state levels would encourage the rapid proliferation of relevant renewable technologies and EV bus fleets in school districts, stimulating local job growth and setting up the next generation of workers in skilled trades and high-tech sectors. Regulatory policies must also be changed to allow school districts to participate as electricity providers. The school district that is going solar got its start with a voter-approved bond measure. It’s one step in the right direction.

About the Author
Christine Hertzog is a consultant, author, and a professional explainer focused on Smart Grid technologies and solutions. As a consultant, she helps clients understand and navigate the intersections of emerging technologies and markets. She is the author of the Smart Grid Dictionary, the first dictionary that explains the jargon, acronyms, and terminology used by utilities, regulators, standards organizations, and manufacturers. She has two decades of experience working with hardware, software, and services companies that range from small start-ups to multi-national corporations, and has recently been involved in the National Institute of Standards and Technology (NIST) initiative on Smart Grid Cyber Security and Interoperability standards requirements with a focus on privacy. Based in Silicon Valley, she is a regular presenter at industry conferences and blogs about the challenges and opportunities that Smart Grid solutions deliver to the evolving electricity supply chain.
34  Science & Technology / Smart Grid / Smart Power / Re: Poor NWO, home appliances not yet interoperable with Smart Grid. Maybe next year on: December 07, 2010, 10:18:41 pm
PAP15: Harmonize Power Line Carrier Standards for Appliance Communications in the Home

Read all about it here:


Smart home appliances represent a major part of the Smart Grid vision aimed at increasing energy efficiency, and to achieve this goal home appliances need to communicate with entities and players in other Smart Grid domains via home networks. The implementation of such home networks must enable plug and play of appliances from the same or different vendors, requiring no manual configuration by homeowners. Power line communication (PLC) is a technology used today in home networks that could also be used for appliance communications in Smart Grid applications.

The effective use of PLC is impeded by the existence of multiple, non-interoperable technologies currently under development in various standards-setting organizations. Relevant standards include ITU G.hn (G.9960, G.9961, G.9972), IEEE P1901 (HomePlug™, HD-PLC™, and ISP), and ANSI/CEA 709.2 (LonWorks™). Since these technologies do not interoperate, their operation in close proximity may cause harmful mutual interference when operating in the same band at the same time, and this may lead to severe performance degradation or even malfunctions in both Smart Grid and home networking applications. The problem is compounded by the fact that PLC is also used in the same frequency band for other non-Smart-Grid applications, such as entertainment and A/V.

PAP-15 will address this issue and will focus on the harmonization of PLC standards for appliance communications in the home.

Since one of the potential acceptable outcomes of PAP-15 (see Objectives below) is to achieve coexistence among multiple PLC technologies, a PAP-15 "Coexistence" subgroup was formed. Coexistence is an issue for all PLC technologies operating in the same frequency band and at the same time, not only an issue for appliance communications. Furthermore, coexistence issues apply to both broadband devices (operating in the 1.8 MHz and above band) and narrowband devices (operating below 500 kHz). The purpose of this PAP-15 Coexistence subgroup is to make a recommendation to NIST on what the value of coexistence in the Smart Grid is and which coexistence mechanism(s) should be added to the list of Smart Grid standards.
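The band distinction the Coexistence subgroup draws can be sketched in a few lines of Python. The classifier and the interference check below are purely illustrative, not part of any PAP-15 mechanism or PLC standard:

```python
# Illustrative sketch of the band distinction described above: PAP-15's
# coexistence question applies separately to broadband PLC (roughly
# 1.8 MHz and above) and narrowband PLC (below 500 kHz). These helper
# functions are hypothetical, written only to make the text concrete.

def plc_band(freq_hz):
    """Classify an operating frequency into the bands named in the text."""
    if freq_hz >= 1.8e6:
        return "broadband"
    if freq_hz < 500e3:
        return "narrowband"
    return "other"

def may_interfere(band_a, band_b):
    # Non-interoperable technologies only cause mutual interference when
    # they operate in the same band (and at the same time).
    return band_a == band_b and band_a != "other"

print(plc_band(30e6))                            # broadband
print(plc_band(100e3))                           # narrowband
print(may_interfere("broadband", "broadband"))   # True
print(may_interfere("broadband", "narrowband"))  # False
```

The second function captures why a coexistence mechanism is only needed among devices sharing a band: a broadband and a narrowband device simply do not contend for the same spectrum.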
35  Science & Technology / Smart Grid / Smart Power / Poor NWO, home appliances not yet interoperable with Smart Grid. Maybe next year on: December 07, 2010, 10:14:57 pm
Smart grid standard falls short of interoperability

A group that has been working to set standards for home area networks in the United States has agreed on a co-existence mechanism for powerline networks in the country. However, the government-led group opted not to require interoperability for a number of competing approaches.

The members of the group say that the unanimous agreement on a non-interference approach is a victory. But it's not clear if manufacturers of home appliances and gateways agree that the standards will help them build networked products.

More than a year ago, Whirlpool Corp. made a public commitment to ship a million dryers ready to plug into a smart electric grid if there was a suitable networking standard the company could use. Organizers of the government-led standards effort said they didn't want the lack of a standard to make Whirlpool renege on that promise.

The powerline agreement came in a December 2 vote of the so-called Priority Action Plan 15. PAP-15 is one of a broad set of standards efforts under the Smart Grid Interoperability Panel convened by the U.S. National Institute of Standards and Technology.

PAP-15 harmonizes two existing implementations of a co-existence mechanism for powerline. Both IEEE 1901 and the ITU use the so-called Inter-System Protocol (ISP) method of co-existence originally proposed by Panasonic.

However, "the two recommendations went through separate comment resolution phases conducted by the two independent groups and this resulted in producing a slight incompatibility between the two mechanisms," said Stefano Galli, lead scientist, Panasonic, who helped define ISP and led the PAP-15 group.

In its latest report, PAP-15 recommended the smart grid group mandate that all current and future broadband powerline networks use one of the now-interoperable non-interference approaches. The move is "a refreshing success story [given] the often acrimonious state of the PLC industry," said Galli.

"Coexistence does not replace interoperability nor [does it] narrow down the choice of which PLC technology to deploy, it only solves the problem of mutual interference created by deploying non-interoperable technologies," said Galli in an email exchange.

Market selection
The market, not a standards group, should define what home network technology is best for appliances and other consumer systems, he said. "Coexistence will allow the industry to align behind the right PLC technology for the right application on the basis of field deployment data and market selection not on the basis of questionable and subjective pre-selection strategies," he said.

The outcome "accomplishes the objective" of the group, said George Arnold, national coordinator of smart grid standards at NIST. He agreed with Galli that market groups, not standards efforts, should make final decisions on which networking approaches are best for smart grid links in the home.

"We have been encouraging the Association of Home Appliance Manufacturers to study the needs of this industry and evaluate the available communications options against their requirements," said Arnold. "They have recently released a report which provides recommendations that should be helpful to the appliance industry in choosing communications interfaces for appliances," he said in an email exchange.

At least four major broadband powerline technologies are currently in use—HomePlug AV, HD-PLC, the LonWorks technology of Echelon and a powerline variant developed by the former DS2, now part of Marvell. Chips for a fifth approach, based on the ITU standard, are in development by as many as eight companies.

Powerline is just one of several media for energy monitoring networks in the home. The Wi-Fi Alliance is also studying use of that technology for such nets.

- Rick Merritt
EE Times


Well, to all of those Libertarians out there: at least they want the "market" to determine the best standards to enslave you instead of letting the goobermint "bureaucrats" at NIST decide.  Isn't the NWO so loving, and so Libertarian?  You should take them up on their offer of the false illusion of choice.
36  Science & Technology / Smart Grid / Smart Power / California Assembly Member Introduces SmartMeter Opt-Out Bill on: December 07, 2010, 10:00:56 pm

California Assembly Member Introduces SmartMeter Opt-Out Bill

Tuesday, 07 December 2010 15:59

In an announcement made yesterday, Jared Huffman, California State Assembly member, said that he is putting forward a bill to give Pacific Gas & Electric’s customers the option to opt out of the utility’s SmartMeter program. Under the bill, titled AB 37, the California Public Utilities Commission (CPUC) would have to provide an alternative for consumers who do not want the new meter installed, and utilities would be required to offer a wired alternative to the wireless device.

"I think it's going to represent a pretty significant improvement over the status quo," stated Huffman. "There will be people who think it doesn't go far enough."

According to PG&E’s recent statements, it is also considering the possibility of a wired alternative to the SmartMeters, which currently communicate through a series of radio and cellular networks.

The CPUC has not allowed energy users to opt-out of the program thus far, because, according to the commission, deploying 10 million meters across the state is part of a national smart grid that requires every consumer to be included. The commission also ruled against a temporary moratorium on deploying the advanced meters while these issues were examined.

The bill will next go to the Assembly Utilities and Commerce Committee in early spring. From there, the committee could move it out for voting. According to Huffman, the CPUC’s input has been positive. The commission is interested in working with the legislature to provide an appropriate alternative. 

At present, 7.5 million meters have already been deployed in California, and new meters are being deployed continually. Smart meters will be installed in Marin this year, and, apart from Fairfax, the installation has continued according to schedule and is expected to be complete in the county by 2012. PG&E has temporarily stopped installation in Fairfax, the town whose council has passed a one-year moratorium on the meters. According to PG&E, the council has no legal authority over the utility.

"The best I can do is move this bill forward as quickly as I can," stated Huffman.

Earlier in the summer, Huffman asked the California Council on Science and Technology to study the health issues being associated with the meters and the FCC's radio frequency and electromagnetic emissions standards. The study is expected to come out later this month.

The meters comply with current FCC standards; however, Huffman believes that having the choice to opt out is a separate issue, which is why he wanted to introduce this bill before the study is completed.

"The question is for individuals who believe their health is at risk to have a choice," stated Huffman.

Those concerned about the meters can call PG&E and ask to be placed on a list to delay smart meter installation.
37  Science & Technology / Smart Grid / Smart Power / 2011 National Town Hall Meeting to increase "consumer acceptance" of Smart Grid on: December 07, 2010, 09:56:38 pm

2011 National Town Meeting on Demand Response and Smart Grid to be Held in July

Tuesday, 07 December 2010 15:59

The Demand Response Coordinating Committee (DRCC) said that its 2011 National Town Meeting on Demand Response and Smart Grid will be held July 13–14, 2011, in the Ronald Reagan Building and International Trade Center, Washington, DC. The annual meeting is a non-profit event that brings together members of the smart grid and demand response "community" and offers roundtable and panel sessions, enabling opportunities for learning and for interaction among participants and speakers.

The event attracts key government policymakers, consumer advocates, utility and technology company executives, environmental organizations, executives of regional transmission operators, etc.

One of the main points discussed at the meeting will be how to "Overcome the Barriers."

"Too many gatherings now simply keep talking about what those barriers are. It is time to instead work on how to overcome them," stated Dan Delurey, Executive Director of the DRCC, "and that is what will happen at the 2011 National Town Meeting."
According to Delurey, the previous town meetings have played a major role in pointing out the barriers to wider adoption of demand response and smart grid.

The Town Meeting’s character can be seen in its incorporation of the National Action Plan on Demand Response, created by FERC and DOE in compliance with the Energy Independence and Security Act of 2007 (EISA). The National Action Plan Coalition, which aims to help implement the Action Plan, will present parts of its case study work identifying best practices and will involve the audience in work on the Plan during the event.

Other topics that will be discussed at the meeting include consumer acceptance of smart meter deployment and time-of-use rates; best practices in deploying smart metering devices and other smart grid technologies; smart appliances; energy storage; intertwinement of efficiency and demand response; renewable energy and the smart grid; dynamic integration of electric vehicles; the convergence of environmental and peak-demand reduction objectives; and the emergence of policy on privacy and data access.

A new location has been selected for the 2011 National Town Meeting. The Ronald Reagan Building and International Trade Center offers more opportunities for exhibitions, networking and breakout sessions, with every event taking place on site.

DRCC was formed in 2004 to increase the knowledge base in the US about demand response and to facilitate the exchange of expertise and information related to demand response among interested parties. Members of the DRCC include but are not limited to: Ameren; American Electric Power (AEP); CPower; Itron; Landis+Gyr; NYSERDA; Pacific Gas & Electric; Salt River Project; Southern California Edison; Southern Company; and Wal-Mart.
38  Science & Technology / Smart Grid / Smart Power / CA Residents suing PG&E for overcharging after installing smart meters on: December 07, 2010, 09:50:57 pm

By Jim Hight
Smart Grid 101
Those new, wireless PG&E meters may be a little scary, but the logic behind them is sound

(Nov. 18, 2010) Did you know you’re getting a new digital “smart meter” from PG&E? Instead of a meter-reader coming by to read those little dials, your smart meter will communicate wirelessly with PG&E. And instead of just a monthly bill, you’ll be able to monitor your energy usage hour-by-hour online.

Rather watch mildew grow? Understandable, but strange as it may seem, some of us energy geeks are truly amped about this new technology.

“I’m excited, and a lot of our staff is as well,” said Dana Boudreau, operations manager for the Redwood Coast Energy Authority — and such an energy geek that he is a step ahead of PG&E with a $250 device called The Energy Detective. “I have it hooked up to Google PowerMeter and can log on from anywhere and see what’s happening with my home energy usage.”

What really lights us up is not just managing our own consumption but envisioning the economic and environmental benefits of smart meters, which will enable two-way communication between energy consumers and the power grid.

Smart meters are step one in making the grid more responsive, flexible and intelligent, and this “smart grid” promises “[l]ower electricity bills, fewer new power plants and reduced emissions,” according to Public Utility Commissioner Nancy Ryan.

Most of us energy geeks aren’t as good as Ryan at communicating. We tend to say things like, “The real cost of energy should be more visible to consumers,” then realize that’s an “eat-your-spinach” approach. So we reach for something grander … then remember the last time we bored our friends trying to explain the role of dynamic pricing in integrating renewable energy.

PG&E suffers the same affliction, compounded by bad press.

Last year, hundreds of Bakersfield residents complained or sued PG&E, saying the new meters were overcharging. Tea Party members and environmental activists in Sonoma and Marin say they’ll refuse SmartMeters (the brand PG&E’s contractors are installing) because of privacy risks and the health effects of their electro-magnetic frequency (EMF). At the Journal‘s deadline, Tea Party members were organizing a presentation on the topic to the Fortuna City Council.

As for the lawsuit, a judge agreed that the PUC was the place to resolve the SmartMeter complaints. Soon after, an expert consultant reported to the commission that the Bakersfield SmartMeters worked fine. People’s bills went up for other reasons, like the fact that some of the old meters were under-reporting consumption. The report faulted PG&E for lousy communications, though. PG&E spokesman Paul Moreno told me that the utility has “taken the comments to heart,” improving its outreach materials and website, putting more staff to work in a SmartMeter call center and making more community presentations.

At a recent Arcata City Council meeting, Councilmember Shane Brinton quizzed a PG&E rep about hacking and privacy. He was told the devices were safe.

Smart grid expert Alexandra von Meier, professor of energy management at Sonoma State University, doesn’t take the privacy concern lightly. “If you get someone’s energy demand profile, you basically can tell the rhythm of their life,” she said. “A burglar could know what was the best time to break into your house.” (Or find out who has a really fat marijuana grow.)
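Von Meier's point can be made concrete in a few lines of Python. The hourly readings and the baseload threshold below are invented for illustration; real interval data from a smart meter is at least this revealing:

```python
# Illustrative sketch of how hourly smart meter data reveals a
# household's daily rhythm. The readings and the baseload threshold
# below are invented for illustration only.

hourly_kwh = [0.3, 0.3, 0.3, 0.3, 0.3, 0.4,   # midnight-6am: asleep
              1.8, 2.1, 0.4, 0.3, 0.3, 0.3,   # morning spike, then empty
              0.3, 0.3, 0.3, 0.3, 0.4, 1.5,   # empty until about 5pm
              2.4, 2.6, 2.2, 1.1, 0.6, 0.4]   # evening at home

BASELOAD_KWH = 0.5   # fridge and standby loads run even in an empty house

# Hours where consumption exceeds baseload suggest someone is home
# and active -- the "rhythm of their life" von Meier describes.
occupied_hours = [hour for hour, kwh in enumerate(hourly_kwh)
                  if kwh > BASELOAD_KWH]
print("Likely at home and active during hours:", occupied_hours)
```

Even this crude threshold rule exposes the empty midday window a burglar would care about; published load-disaggregation research goes much further, down to individual appliances.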

Moreno told me PG&E has “a team in place that continuously monitors cyber-threats and is in touch with industry experts and federal authorities [on cybercrime].” A PUC spokesman said the commission will closely monitor PG&E’s cybersecurity practices and those of other California utilities.

If those promises don’t allay your fears, there still may not be any way out of getting a SmartMeter unless you can live without utility service from PG&E. PG&E is installing them because of state policies backed by legislation and PUC decisions.

Back to why us energy geeks are jazzed about SmartMeters. Let’s start with Ryan’s first two promises: lower electricity bills and fewer new power plants.

When Enron and other energy companies manipulated California electricity markets in 2000 and 2001, what gave them leverage was the fact that electrical generation must match electricity consumption instantaneously or really bad stuff will happen. The California Independent System Operator (CAISO) had to buy electricity at exorbitant rates or see its automatic controls shut down parts of the grid. When they couldn’t get enough power, rolling blackouts occurred.

The electricity market has since been tweaked to prevent Enron-type scams, but the physical law that generation must match demand still reigns. And to meet peak demands for air conditioning on the hottest afternoons of the year, CAISO has to call on inefficient and expensive gas-fired plants known as “peakers” that can throttle up quickly.

39  Science & Technology / Smart Grid / Smart Power / NE Public Power District Taps Exalt to increase Ethernet traffic for SmartGrid on: December 07, 2010, 09:50:24 pm
Nebraska Public Power District Taps Exalt for Transition to IP

Exalt native TDM and Ethernet microwave backhaul systems smooth the shift from analog to digital microwave

CAMPBELL, Calif., December 7, 2010 – Exalt Communications today announced that Nebraska Public Power District (NPPD), the largest electric power distributor in Nebraska, is deploying Exalt microwave backhaul systems to upgrade its legacy network of analog microwave radios. This move enables NPPD to handle rapidly increasing Ethernet traffic and to prepare for future initiatives such as SmartGrid.

NPPD, with a chartered territory including all or parts of 91 of Nebraska's 93 counties, delivers power to one million Nebraskans and has operated a statewide microwave network for 30 years. The NPPD microwave network connects power plants, substations, control facilities, local utility offices, and its main office, using wireless hops up to 30 miles long. For the past several years, NPPD has experienced rising demand for Ethernet traffic and IP-based applications on its network, and sought a way to move toward a digital/IP network base.

“Our intention was to upgrade our analog microwave systems so we could handle more Ethernet traffic and prepare for future IP applications,” said Bill Hardy, telecommunications engineer at NPPD. “Most of our traffic is on Ethernet and native TDM, and our goal is to migrate to IP at our own pace without disrupting our TDM traffic.”

The Exalt systems deployed by NPPD carry either 4 or 8 T1 lines, with Ethernet as needed. In the future, NPPD will migrate to IP as it completes a system-wide upgrade of its legacy microwave systems with new systems from Exalt.

“Our unique ability to cost effectively support low-latency native TDM and native Ethernet traffic simultaneously on the same link makes Exalt systems particularly attractive solutions for the nation’s public utility companies, which still use a great amount of TDM,” said Amir Zoufonoun, president and CEO of Exalt Communications. “Exalt provides the industry’s  broadest range of microwave backhaul systems that can be added seamlessly into existing networks without disrupting native TDM traffic, and then migrated to IP unobtrusively at the operator’s own pace. This has made us the first choice among utility cooperatives, as well as other commercial and government customers.”

The Exalt Microwave Backhaul Product Portfolio
All Exalt microwave backhaul systems offer guaranteed link availability, guaranteed throughput and low, constant latency. Systems are available in world bands from 2 to 43 GHz and in capacities from 10 Mbps to more than 1000 Mbps per channel, providing a range of options to fit countless network applications. Designed to enable a smooth transition to IP, they offer native support for both TDM and Ethernet, and are fully software configurable and upgradeable. For easy and secure management using third-party network management systems, Exalt systems support SNMP v1, v2c and v3. Data security is provided by available FIPS 197-compliant AES 128-bit and 256-bit encryption that adds zero latency to the transmission. To simplify installation and maintenance, all Exalt systems feature an embedded manual and most include a built-in spectrum analyzer.

About Exalt Communications
Exalt Communications provides next-generation microwave backhaul systems to service providers, government organizations and enterprises worldwide. Exalt systems are designed to solve the network bottlenecks associated with the growing demand for IP- based voice, data and video applications and the resulting migration from TDM to IP-based networks. With a flexible architecture and universal product platform covering multiple market segments, Exalt provides a full range of microwave radio systems that meet the demand for cost-effective and flexible alternatives to fiber and leased lines.
40  Science & Technology / Big Brother / Police State Tech / Re: Darpa IXO. Most comprehensive video explaining high tech NWO tyranny on: December 07, 2010, 08:47:50 pm
Sources in Darpa IXO

This is the resources page that has all of the narrative quotes, and all the rest that I simply couldn't fit in the video. It will function as the "official" discussion / debate page (like there is any debate?) that will be permanently linked underneath it in my profile or wherever.

"We can no longer avoid the need to be prepared to fight in cities."

"The globalization of the world economy will also continue, with a widening between "haves" and "have-nots.""

"Although unlikely to be challenged by a global peer competitor, the United States will continue to be challenged regionally."

"Failed states have cultures and world views that are vastly different from those of the United States."

"Given the global population trends and the likely strategies and tactics of future threats Army forces will likely conduct operations in, around, and over urban areas – not as a matter of fate, but as a deliberate choice linked to national security objectives and strategy"

[The Government is aimed at] "conflicts in high density urban areas against enemies having social and cultural traditions that may be counter-intuitive to us, and whose actions often appear to be irrational because we don't understand their context."

"The objective of the Urban Reasoning and Geospatial Exploitation Technology (URGENT) program is to develop a 3D urban object recognition and exploitation system"

"Capabilities that, for example, allow us to establish surveillance that provides robust, dynamic situational awareness on all the scales of the city."

"Compared to our current airborne capabilities, the new sensor and surveillance systems required must provide far more detailed and fundamentally different information and coverage."

"In addition, we need a network of nonintrusive microsensors, creating the ability to map an entire city, and the activities within it, in all three dimensions and (all the) time."

"The goal is to extend our awareness to the level of a city block so our forces have unprecedented awareness as the fighting begins, a level of awareness that enables them to shape and control the conflict as it unfolds."

"Because of the shrunken time scale of urban operations, these dynamic capabilities must operate in near-real-time."

"Combat Zones That See (CTS) is a project of DARPA to track everything that moves."

"CTS will produce video understanding A.I. embedded in surveillance systems for automatically monitoring video feeds"

"CTS (develops A.I.) for utilizing large numbers (1000s) of cameras to provide the close-in sensing demanded for military operations in urban terrain."

"Global Engagement combines global surveillance with the potential for a space-based global precision strike capability."

"Biomedical status monitoring is the medical equivalent of the Global Positioning System (GPS)."

"The large investments already present in nano-, info- and biotechnology should be coordinated and coupled with efforts in cognition. DARPA, NASA, NIH, and NSF already have major programs that seek to integrate nano-, bio- and info- research."

"The goal of the human performance augmentation effort is to increase the speed, strength, and endurance of soldiers in combat environments."

"The DARPA Augmented Cognition program promises to develop technologies capable of extending the information management capacity of warfighters."

"The goal of this program is to discover new pharmacologic and training approaches that will lead to an extension in the individual warfighter's cognitive performance capability by at least 96 hours and potentially for more than 168 hours without sleep."

"The intent is to take brain signals (nanotechnology for augmented sensitivity and nonintrusive signal detection) and use them in a control strategy (information technology), and then impart back into the brain the sensation of feedback signals (biotechnology)."

"The future requires a symbiosis of human and machine in a way that synergistically exploits the strengths of each. "

"Two of the critical issues for exoskeletons are power for actuation and biomechanical control integration."

"DARPA has a brain-machine interface program about to start. This program seeks human ability to control complex entities by sending control actions without the delay for muscle activation."

"From local groups of linked enhanced individuals to a global collective intelligence"

"We are not alone. We are interconnected as are our cognitive systems."

"Prolific unattended sensors and uninhabited, automated surveillance vehicles under personal warfighter control will be providing high data streams on local situations."

"We believe that, in the future, artificial cognitive systems will continually monitor, record, and assess a warfighter and his activities."

"Embedded, real-time "cognitive" processing for both the warfighter and associated automated systems will be critical to success"

"The J-UCAS vision is that a collection of unmanned, weaponized, high performance aircraft, equipped with the latest contemporary autonomous capabilities"

"The enemy will be at risk from relatively small, relatively inexpensive, unmanned platforms that bring the fight to the opponent while keeping our capital assets out of harm's way."

"The uninhabited air vehicle will have an artificial brain that can emulate a skillful fighter pilot in the performance of its missions."

"Tasks such as take-off, navigation, situation awareness, target identification, and safe return landing will be done autonomously, with the possible exception of person-in-the-loop for strategic and firing decisions.

"Removing the pilot from assault and fighter aircraft will result in a more combat-agile aircraft with less weight and no g-force constraints, and will reduce the risk of injury or death to highly trained warfighters. American public opinion makes this a clear priority.

"Add mobility, and our autonomous platforms can act, not just observe."

"The fighter airplane will likely derive the greatest operational advantages, but similar benefits will accrue to uninhabited tanks, submarines, and other military platforms."

"The Government's vision of an ultimate prompt global reach capability (circa 2025 and beyond) is engendered in a reusable Hypersonic Cruise Vehicle (HCV). This autonomous aircraft would be capable of taking off from a conventional military runway and striking targets 9,000 nautical miles distant in less than two hours. It could carry a 12,000-pound payload consisting of Common Aero Vehicles (CAVs), cruise missiles, small diameter bombs or other munitions. This HCV will provide the country dominant capability to wage a sustained campaign from CONUS on an array of time-critical targets that are both large in number and diverse in nature while providing aircraft- like operability and mission recall capability."

"The BICA program intends to develop artificial systems that can respond to a variety of situations by simulating human cognition that will enable it to learn from experience, reflect on current strategies and adjust them if necessary, and mentally simulate alternate plans and decisions. As the use of autonomous, unmanned, and intelligent systems in the military increases, the need for systems that can understand and respond to new and unique situations is growing dramatically."

"Space provides 24/7, latent, global persistence, just what we need for low-intensity conflicts and the Global War on Terror."

"Space Power  (systems, capa-bilities, and forces) will be increasingly leveraged to close the ever-widening gap between diminishing resources and increasing military commitments."

"For most of history -the Greek, Roman, Spanish and British empires- to be a great power meant to be a sea-faring nation. Maritime dominance remains a critical way to project power. But, for all the reasons just discussed, if the United States is to be a superpower in the 21 st century, we must keep our lead as the world's premier space-faring nation."

[It will] "Strengthen the nation's space leadership and ensure that space capabilities are available in time to further U.S. national security, homeland security, and foreign policy objectives;"

"Enable unhindered U.S. operations in and through space to defend our interests there;"

"The use of space nuclear power systems shall be consistent with U.S. national and homeland security, and foreign policy interests, and take into account the potential risks."

"Control of Space is the ability to assure access to space, freedom of operations within the space medium, and an ability to deny others the use of space, if required."

"We have to retain this high ground, just as we must retain our maritime superiority."

"The world's oceans cover two-thirds of our planet."

"The sea offers strategic, operational, and tactical mobility to those who control it."

"Maritime dominance remains a critical way to project power. "

"It's the medium over which no sovereign can veto our movements. And it's the medium in which US dominance is exercised globally, with stealth."

"If we're to maintain maritime supremacy with a leaner Navy, it must be done by employing diverse contingents of autonomous offboard systems together with the capital platforms of the future Navy. Our naval force will be multiplied by having these systems interconnected by a robust, seamless maritime network that operates above the water, on the water, and in the water."

"Let's envision the future Naval Force. There will be fewer ships casting a wide net over the vast maritime battlespace; a net that's extendable, flexible, and impenetrable; a net that's extendable, flexible, and impenetrable -fleets, squadrons, or units- of autonomous systems distributed around the world doing their jobs."

"the extended reach of networked weapons and sensors will tremendously increase the impact of naval forces in joint campaigns."
"A U.S. warship is sovereign U.S. territory -------------, whether in a port of a friendly country or transiting international straits and the high seas. U.S. naval forces, operating from highly mobile "seabases" in forward areas, are therefore free of the political encumbrances that may inhibit and otherwise limit the scope of land-based operations in forward theaters."
"the extended reach of networked weapons and sensors will tremendously increase the impact of naval forces in joint campaigns. The will be realized by exploiting the largest maneuver area on the face of the earth: the sea."
"SEABASE serves as the foundation from which offensive and defensive fires are projected making SEA STRIKE and SEA SHIELD realities."
"SeaBasing capitalizes on the freedom of action achieved through sea control, and is vital to this nation's ability to fully exploit its unprecedented and unequaled military strength in support of an over-arching national security strategy.
"SeaBasing, which refers to the ability of naval forces to operate at sea, as sovereign entities, free from concerns of access and political constraints associated with using land bases in other countries."

"…[We must] leverage information technology and innovative network-centric concepts of operations to develop increasingly capable joint forces. Our ability to leverage the power of information and networks will be key to our success."
--Former Deputy Secretary of Defense Paul Wolfowitz

"Network Centric Warfare the key to DoD dominating future military operations."

"The Global Information Grid (GIG) vision implies a fundamental shift in information management, communication, and assurance"
--Former Deputy Secretary of Defense Paul Wolfowitz

"The next-generation DoD enterprise network will be taking in sensor information from a variety of sources ?satellites in space, manned and unmanned systems in the air, at sea and on the ground, soldiers in the field, and intelligence from a variety of places, all being transmitted to and from its edge nodes."
"the Global DoD Enterprise Network forms the backbone of the DoD Global Information Grid (GIG)."

"The Net-Centric Enterprise Services (NCES) program will provide secure, collaborative information-sharing environment and unprecedented access to decision-quality information. NCES will enable decision-making superiority that results in increased mission effectiveness and enhanced process execution. It is based upon an emerging concept in the DOD called "net-centricity," which enables systems to provide the right information to the right person at the right time.""

"The implementation must allow both human users of the GIG, and automated services acting on behalf of GIG users, to access information and services from anywhere, based on need and capability. "
--Former Deputy Secretary of Defense Paul Wolfowitz
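"The right information to the right person at the right time" is, at bottom, a publish/subscribe routing problem: consumers register an interest, and the network delivers matching information as it arrives. A toy broker sketch (the class and topic names are invented for illustration; this is not an NCES or GIG interface):

```python
# Toy publish/subscribe broker illustrating the "net-centric" idea of
# routing information to whoever has registered a need for it.
# Entirely hypothetical; names are invented for this example.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self) -> None:
        # topic -> list of subscriber callbacks
        self._subs: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        """Register a consumer's standing need for a class of information."""
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: str) -> int:
        """Deliver to every subscriber of the topic; return delivery count."""
        handlers = self._subs.get(topic, [])
        for handler in handlers:
            handler(message)
        return len(handlers)

if __name__ == "__main__":
    broker = Broker()
    received = []
    broker.subscribe("weather", received.append)
    n = broker.publish("weather", "storm front at 0600")
    print(n, received)  # 1 ['storm front at 0600']
```

Publishers never need to know who the consumers are, which is the point of the "net-centric" framing: the network, not the sender, decides where information flows.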

"Transforming the network from a weapons support system into a weapon itself, that is the thread that runs through the programs that we pursue."

"we must enable the network to defend itself against those adversaries who seek to deny us the use of this valuable combat resource."

"This research thrust area will show automated cyber immune response and system regeneration. The technical approach will include biologically-inspired response strategies, machine learning, and cognitively-inspired proactive automatic contingency planning."

"Desired capabilities include self-optimization, self-diagnosis, "Cognitive immunity" and self-healing."
"As we move to an increasingly network-centric military, the vision of intelligent, cooperative computing systems responsible for their own maintenance is more relevant than ever."

"We need to move from a conventional view of data processing to a cognitive view, one that will allow our systems to be more responsible for their own configuration and maintenance and less vulnerable to failure and attack."

"Simultaneously, the network is invoking its memory, calling up huge databases and vast stores of knowledge. And, it is transmitting all of this to the various brains, the computers, which, in this case, may be distributed around the world."

"Artificial minds will be housed in artificial brains, and we may need some radical changes in our computing foundations to get there."

"These "fourth-generation" technologies will bring attributes of human cognition to bear on the problem of reconstituting systems that suffer the accumulated effects of imperfect software, human error, and accidental hardware faults, or the effects of a successful cyber attack."
"new fourth generation technologies will draw on biological metaphors such as natural diversity and immune systems to achieve robustness and adaptability; the structure of organisms and ecosystems to achieve scalability; and human cognitive attributes (reasoning, learning and introspection) to achieve the capacity to predict, diagnose, heal and improve the ability to provide service."

"The program concentrates on research needed to develop large-scale intelligent systems that can address practical Air Force needs."

"This cognitive program I told you about is actually showing that it is learning, and it is learning in a very difficult environment. This is the program Stanford Research runs for us. "
"We've got the technology to the point where we can now apply it in Iraq to a system that we also developed called CPOF, Command Post of the Future. It is a distributed command and control system."
"The cognitive program's whole purpose in life is really to increase the tooth-to-tail ratio [military-speak for the number of combat troops to the number of support troops]."
"Our cognitive programs whole aim is to have a computer "learn you," as opposed to you having to learn the computer."
"Cognitive computers can be thought of as systems that know what they're doing. Cognitive computing systems "reason" about their environments (including other systems), their goals, and their own capabilities. They will "learn" both from experience and by being taught. They will be capable of natural interactions with users, and will be able to "explain" their reasoning in natural terms."

"ACIP will incorporate biological, cognitive algorithm, and DoD mission challenge clues as inputs to establish the concepts of the effort."

"The goal of the BICA program is to develop integrated psychologically-based and neurobiology-based cognitive architectures that can simulate human cognition in a variety of situations."
"The BICA program intends to develop artificial systems that can respond to a variety of situations by simulating human cognition that will enable it to learn from experience, reflect on current strategies and adjust them if necessary, and mentally simulate alternate plans and decisions."

"The Integrated Learning program seeks to achieve revolutionary advances in Machine Learning by creating systems that opportunistically assemble knowledge from many different sources in order to learn."

"The goal of the Transfer Learning Program solicited by this BAA is to develop, implement, demonstrate and evaluate theories, architectures, algorithms, methods, and techniques that enable computers to apply knowledge learned for a particular, original set of tasks to achieve superior performance on new, previously unseen tasks."

"[They] Will be aware of themselves and able to reflect on their own behavior

{One product of this broad multi-agency initiative is NASA's "Intelligent Archives"}
"Stated goals of NASA's I.A.:
"adapting to events and anticipating user needs"
"Continuously mining archived data searching for hidden relationships and patterns"
"Identifying new data sources and information collaborators, and using available resources judiciously"
"aware of its own data content and usage"
"can extract new information from data Holdings"
"large scale data mining"
"acting on information discovered"
"extracting new information from its data holdings"
"coordination between intelligent archives and intelligent sensors"
"advanced weather prediction"


++++++++++WEATHER CONTROL++++++++++
"In the tele-immersive room, the scientists plan their research forecasts by summoning a vivid holographic 3-D projection of the Earth, and accessing projections of scaled real-time weather conditions."
"Space-based, airborne, and terrestrial sensors will produce weather-related data with varied resolutions, rates, bands, parameters, and volumes."
"Key aspects of a visionary system for advanced weather model building and operation would include:"
"Flexible, intelligent global observing system"
"Cyber infrastructures will comprise distributed system components (e.g., sensors, services, modeling, information & knowledge discovery tools) operating in a high-speed intelligence-based computing environment.
"This interconnected computing environment, in which I.A.'s also operate, provides the collective processing, data management, data persistence, and data interchange services necessary to meet the near-real-time requirements for advanced weather prediction.

"Weather as a Force Multiplier: Owning the Weather in 2025"
"US aerospace forces can "own the weather," as they "own the night" now."
"It could have offensive and defensive applications and even be used for deterrence purposes."
"Though a high-risk effort, the investment to do so would pay high rewards."
"Weather modification offers both the commercial sector and the military greatly enhanced capabilities."
"Its application in the military arena is a natural development as well. Weather modification will become a part of domestic and international security and could be done unilaterally"
"The ability to generate precipitation, fog, and storms on earth or to modify space weather, and the production of artificial weather all are a part of an integrated set of technologies to achieve global awareness, reach, and power."
"For this to occur, technology advancements in five major areas are necessary. These are advanced nonlinear modeling techniques, computational capability, information gathering and transmission, a global sensor array, and weather intervention techniques. All of these will be greatly enhanced as we approach 2025. Current demographic, economic, and environmental trends will create global stresses that create the necessary impetus for weather modification to become a reality in the commercial sector. Its application in the military arena is a natural development as well. Weather modification will become a part of domestic and international security and could be done unilaterally, through alliance networks—particularly regional ones—or through an ad hoc coalition or a UN framework. It could have offensive and defensive applications and even be used for deterrence purposes. The ability to generate precipitation, fog, and storms on earth or to modify space weather, improve communications through ionospheric modification (the use of ionospheric mirrors), and the production of artificial weather all are a part of an integrated set of technologies which can provide substantial increase in US, or degraded capability in an adversary, to achieve global awareness, reach, and power. Weather modification will be a part of 2025 and is an area in which the US must invest if only to be able to counter adversaries seeking such a capability."


"In contrast to incremental or evolutionary military change brought about by normal modernization efforts, defense transformation is more likely to feature discontinuous or disruptive forms of change."

"The end result of these enablers and concepts is Full Spectrum Dominance."

"Computing is a key element in this revolution." - Newt Gingrich

"We want to live forever, and we're getting there." -Bill Cinton

"This funding will support the work of America's most creative minds as they explore promising areas such as nanotechnology, supercomputing..."
"Dubya" - 2006 State of the Union Address

{In 2001, Bush & the DOD blocked a congressional bill that would have made space weapons, weather modification, and other "exotic weapons" illegal}
Title: To preserve the cooperative, peaceful uses of space for the benefit of all humankind by permanently prohibiting the basing of weapons in space by the United States, and to require the President to take action to adopt and implement a world treaty banning space-based weapons.
Sponsor: Rep Kucinich, Dennis J. [OH-10] (introduced 10/2/2001); Cosponsors: (None)
Latest Major Action: 4/19/2002 House committee/subcommittee actions. Status: Unfavorable Executive Comment Received from DOD.
(B) Such terms include exotic weapons systems such as--
(i) electronic, psychotronic, or information weapons;
(ii) chemtrails;
(iii) high altitude ultra low frequency weapons systems;
(iv) plasma, electromagnetic, sonic, or ultrasonic weapons;
(v) laser weapons systems;
(vi) strategic, theater, tactical, or extraterrestrial weapons; and
(vii) chemical, biological, environmental, climate, or tectonic weapons.
(C) The term `exotic weapons systems' includes weapons designed to damage space or natural ecosystems (such as the ionosphere and upper atmosphere) or climate, weather, and tectonic systems with the purpose of inducing damage or destruction upon a target population or region on earth or in space.