6 Programming culture

6.1 Cultural theory: practice and form

In order to conduct a cultural analysis of programmers’ work processes, it is necessary first to look at the concepts that will be used in the analysis: cultural practice and cultural form. The concepts as they are presented here come from the ethnological research community, particularly the Danish professor of ethnology Thomas Højrup and his followers.

There are two fundamentally different ways of explaining what happens in the world. One perspective is teleological: I get in my car and drive because I want to get home. Getting home is my goal, my télos (τέλος), and driving in the car is the means of reaching my goal. The other perspective is causal: I press the gas pedal with my foot, which causes more fuel to be let into the combustion engine, which causes the car to move forward. The action is explained as a matter of cause and effect. If teleology and causality are viewed as unrelated or even contradictory principles, as is often the case, it becomes difficult to explain the world around us. For example, what is the explanation if I lose control of the car and drive into the ditch on my way home? In the teleological perspective, my strong desire to get home resulted in the means of getting there – speed – getting out of hand. In the causal perspective, the cause of the accident was the slippery road, which had the effect that the car lost traction, further causing me to lose control of the car.

The concept of practice1 is a way of resolving the contradictions of teleology and causality. Teleology and causality are simply regarded as two opposite but equally valid aspects of practice. Furthermore, the concepts that make up teleology and causality are put into relation with each other: the goal of teleology is identified with the effect of causality; the means of teleology is identified with the cause of causality.

As a consequence of this, the concept of practice establishes a correspondence between the concepts of subjectivity and objectivity. The identification of means and cause shows that causal relations do not exist independently of ourselves.2 It is the presence of a self-conscious subject that, through regarding the world as consisting of goals and means, identifies what is cause and effect. Thus subjectivity and objectivity are two perspectives on the world that proceed from the practice concept; again, they are opposite and equally valid. Subjectivity is to take a teleological perspective on the world, and objectivity is to regard it as made up of cause and effect. The objective perspective is set by the subjective, and the subjective perspective is likewise set by the objective.3

The concept of practice is connected to hermeneutics in the way that a given practice is an expression of tradition – nothing comes from nothing; a practice must either be a repetition of an existing practice, or a break away from a former practice. Another point of connection is the purposefulness of practice: the teleological aspect of practice has a goal, and that goal is akin to the concept of application in hermeneutical theory.4

The realization of a practice will result in a plethora of related practices. A means in one practice is a goal in another practice: my goal is to get home, so I get in the car, which is the means of getting there. But while I am driving, the immediate goal becomes to keep the car safely on the road. Thus, the means to my original goal becomes a sub-goal in itself, in service of a higher goal.5

The concept of practice is closely connected to another concept: that of form. In cultural theory, the concept of form has a specific content that goes back to Aristotle and Plato.6 A form is not simply the outer physical form of a thing, its morphé (μορφή), as the word is often used in daily speech.7 Rather, a form, eidos (εἶδος), is the idea of a thing, a structural concept that subjugates the matter of which it is made up. As such, the form of a fire, for example, can be made up of many different kinds of matter: wood, oil, tallow, et cetera. Each particular kind of matter will have different properties, which means that the resultant form – the fire – will also have many properties that are not essential, but accidental.

A form (eidos) is a solution to a problem, or a way of reaching a goal, that consists of an idea or structure that expresses the function of the form, plus the matter in which the idea is expressed and which is the means of the form:

form = function (idea, structure) + matter (means).

The problem that a form solves comes from the goals of a practice. This means that it is ultimately practice that determines the forms, though constrained by the properties of matter. A form is a solution to the demands of practice, and demands are contradictory.8 For example, a fire has to provide warmth and light, to be safe, and to conserve fuel. Not all the demands can always be met efficiently at the same time. Thus, a bonfire and an oil lamp do the same thing, but they are well suited to different situations. On one level they are the same form – fire – and on another level they are different: warm-fire-burning-wood versus illuminating-fire-burning-oil.

For the purpose of cultural theory, it does not matter whether a form is material, social, or theoretical.9 A fishing vessel, bureaucracy, and the rational numbers are all examples of forms. However, we cannot hope intellectually to comprehend the existing forms fully. We are fundamentally limited by our experience with the practical world, and any concept of a specific form is always only a temporary and incomplete understanding of reality.10 In this way, the concept of form corresponds to the hermeneutical notion of understanding, which is fundamentally limited by the horizon of understanding.

1  Højrup 1995 p. 67.

2 Or more precisely, it is the designation of something as “cause” and something as “effect” that does not exist independently of ourselves. In a system that altogether lacks a teleological perspective, such as formal physics, everything affects everything else, so “cause” and “effect” become merely shorthand referrals to whatever interests us. Between two masses there will be a gravitational force at work, so we can say that the gravitational force causes the masses to attract each other. But we can equally say that the gravitational force is caused by the two masses being present.

3  Højrup 1995 p. 69.

4 See Chapter 9 (Understanding) for an explanation of the hermeneutical concepts in this section.

5  Højrup 1995 p. 74 ff.

6  Højrup 2002 p. 375 ff.

7  Ibid. p. 378.

8  Ibid. p. 380.

9  Ibid. p. 384.

10  Ibid. p. 376.

6.2 Programming for safety or for fun

The examples of companies we have looked at so far – Tribeflame, Cassidian and Skov – have in common that their businesses depend critically on software development. However, what we have discovered with hermeneutical analysis is that their work processes are not dominated by the concept of programming but by concepts that stem from the products they make: in one case “fun”, in the other cases “safety”.

The hermeneutical analysis of Tribeflame was straightforward in that it is an analysis of a concrete, individual work process at a given time. The analysis of safety critical programming was more complex in that it is an analysis of the cultural form of safety critical work processes. It focuses on the essential features that make a programming work process safety critical in contrast to something else. In this section we will compare the cultural practice of safety programming with the cultural practice of game programming that is represented by Tribeflame’s work process.

The standards and processes that are used in safety critical programming are a form of cultural practice. When they are used in practice, they take the form of hierarchical organization and procedures to be followed, with the goal of assigning responsibility to individual steps in the programming process and controlling it. It is clear from the examples we have seen so far that this particular cultural form is not absolutely necessary to programming, but that it can be useful in some cases, depending on circumstances.

Cassidian is constrained by its customers, in that it is required to fulfil the safety standards. The company makes the most of this by adopting the thinking behind the standards and organizing the whole company’s work around a cultural practice that follows the standards quite closely. Moreover, the company is so large that it believes it can influence its circumstances; not passively being subject to the dictates of the standard, but attempting to impose its own version of the standard on its customers.

For Skov, there is no external requirement to follow standards and processes, and the company is consequently free to adopt a development practice that it likes and to sacrifice rigid control of the programming process. The company only considers it worth having a control- and document-oriented approach in a limited area of the development process: namely, the testing process.

For Tribeflame, there is no external constraint and the company clearly sees no benefit in adopting a rigid process. Of course, this does not prove that it is impossible to run a successful game company following the principles of safety standards – but the example of Tribeflame does show us that the programming principles inherent in safety standards are not strictly necessary to game programming.

An important feature that these examples have in common is that though their businesses depend crucially on programming, the goal of their business is not programming. What they are really striving for is to make games that are fun to play, or products that are safe to use. Programming is, in this respect, a means for the business to achieve its goals; and the specific form given to the programming process – whether the V-model, Agile, or something else – is merely a sub-means to the programming means. To put this another way: the goal of the programming process is to make a program. The program is a sub-goal that itself acts as the means to the real goals of the companies: games that are fun to play, or products that are safe to use.

The safety critical programming practice that follows standards is traditional: it builds on other, older traditions. These are primarily the engineering tradition, legal tradition, bureaucratic tradition, and software engineering tradition. Although the software engineering tradition is arguably a descendant of the engineering tradition, the two are not identical. Even among its practitioners, software engineering is frequently regarded as something that is not “real engineering”.1 Building on these diverse traditions, the safety critical programming practice has existed for so long that it has itself become a distinct tradition, expressed in the standards’ chapters on software development and in the unwritten knowledge about safety critical programming that is preserved among the professionals who work with it. As we saw in the hermeneutical analysis, the safety critical tradition is largely impersonal, based on extensive education of its practitioners and commitment to a professional identity.

The game development practice at Tribeflame is also based on tradition, but this is a tradition of a different character. The developers here related to a tradition that is primarily personal and based on direct experience rather than formal education. Their knowledge of the tradition comes from the games they play themselves, from games they have heard about through friends, and from exposure to computer games and other games since their youth. It is a tradition that is shared with many people who are not themselves developers of games, but merely play them. For most people, the computer game tradition is more immediately accessible than the safety critical tradition.

In safety critical development, the processes and standards function as a common language that makes it easier to communicate across companies and departments – both for engineering and bureaucratic purposes. It works well because most communication is with other engineers who have similar training and experience. As a cultural form, the common language of safety standards is a response to the circumstances of safety critical development. The products are so complex to make that a vast number of people are involved and they need to talk to each other about their work in order to coordinate it.

In Tribeflame, investors and computer game players are the outsiders to whom the developers mostly need to talk about their work. They need to talk to investors in order to convince them that they know what they are doing, and they need to talk to players, who represent the actual customers of the company, in order to get feedback on their games. They have a need for talking about their games, but they do not have a great need for talking about exactly how they make them. An investor might of course take an interest in how the company carries out its development in order to be reassured that the invested money is spent well; but it is essentially not the investor’s job to inspect and control the development process in the way that an assessment agency like TÜV inspects and controls the safety critical development process.

In safety critical programming, the developers’ need to discuss their work has, over time, led to the development of models, most notably the V-model, which serve as references during arguments and discussions about the work. In this sense, the models are argumentative: an aspect of a work process can be explained by pointing out the place in the model where it belongs.

In the computer game industry, the need for discussing work processes is not so great. Consequently, it would not be an efficient use of time and energy to develop and maintain consensus about models of the working process. Whenever game developers do need to discuss their work process, it is likely to be more efficient for them to come up with an argumentative model of their process spontaneously, in a face-to-face meeting.

It is widely known that the process models of software engineering rarely describe what actually happens in development practice. A textbook on software requirements states:

“The waterfall model represents an ideal. Real projects don’t work like that: developers don’t always complete one phase before starting on the next. … All this means that the phase idea of the waterfall model is wrong. The activities are not carried out sequentially and it is unrealistic to insist that they should be. On the other hand, developers do analyse, design, and program. The activities exist, but they are carried out iteratively and simultaneously.” 2

One of the founders of a consulting company in the safety critical industry characterizes the development processes of automotive companies in this way:

“Usually they work. The car companies manage to put out car after car precisely on the scheduled time. They have these start of production dates that they can never shift, they are fixed a few years in advance and they always keep them and I think this is rather impressive. If I were to form an opinion I would say that they are really impressive and they work really really well.

That’s the big view of it, of course if you look at the smaller view then you see that for instance in software development, things do happen to go wrong, so you always have a last minute panic when the start of production comes closer and the control units don’t seem to work the way they were intended to work, and then you see some kind of panic mode: that people work overtime and it is stressful and everybody has to examine where the problems come from, to change the software, to make additional tests. This is something where you think that obviously this is no straight development path that is being taken here, it’s a little bit chaos in the end, and … this means that the development processes for software are not working as nicely as they should work.” 3

Another experienced consultant, a founder of three consulting companies, explains that important parts of the safety critical development process necessarily must take place outside the framework of the V-model:

“The V-model is a nice framework for reasoning but I never saw someone adding all the complexity of a problem at the specification stage. I mean, in fact you have plenty of implicit assumptions on the design. Also, you need to go further in the development in order to, for instance, evaluate some solution, and then go back to the specification. You know, the process is not so straight, so I think it’s better that you have some sort of prototypes and some trials before you go back to the specification stage. And only when you have a precise idea of the system and the system behaviour, in the form of a technical solution, only then is it useful to do verification of the specification and the design. … You must sort of go and return between specifications and proof of concepts, go back to the specifications and so on. And once it has been more or less validated you can enter the V-model and synchronize verification and development.” 4

In the software engineering literature, the usual explanation for the fact that process models rarely describe what people actually do is that the models are not actually meant to describe what people do: they are prescriptive, not descriptive, serving as an ideal for what people should be doing. That the ideals do not work out in practice is blamed on human weakness and organizational inadequacies. The hermeneutical analysis of safety critical programming provides a much simpler explanation: that the process models are not essentially models of how the work is done, but models of how the work is understood by those involved in it. This makes sense when the models are understood as part of a common language through which the work processes can be talked about. In the words of the safety consultant, the models are a framework for reasoning; not only reasoning with oneself but also reasoning with others, as part of communication.

In the game programming practice of Tribeflame, the lack of bureaucracy means that there is no direct counterpart to the process models of safety critical programming. Of course the developers have some kind of mental model of how they do their work, but since they rarely need to talk about it, it is not explicit or schematic. Moreover, it is far more flexible than formal process models. For example, Tribeflame came up with the idea of having a short meeting for the whole company every day and instituted it in the course of a few days.5

All forms of cultural practice are dependent on history, and therefore it is important to understand their history in order to understand them. Safety critical programming has a long history of institutions that enforce standards: courts of commerce, regulatory bodies, assessment companies, and industry associations. So far game programming has a lot less institutional history – the existing game programming institutions are mostly the game companies themselves. The historical aspect of game development often comes through the developers’ personal history. Game development is shaped by the history of tabletop games and the history of computer games. When a new game company is formed by inexperienced programmers, the work practices they bring to game programming are the traditions of working at a desk job, and in many cases the work practices of conducting study projects in university or other educational institutions. When inexperienced engineers join a safety critical company, they are faced with a long-standing tradition of safety critical work practice with which they have to familiarize themselves before they can hope to have an impact on the industry.

Forms of practice are dependent on history but they do not merely repeat old traditions. Cultural practice is shaped by the goal of the practice, no matter what the historical origin of the practice is. For this reason, the goal of a cultural practice will be able to explain much about why the practice looks the way it does. In safety critical programming, the goal is to make safe products through the mechanisms of control and assignment of responsibility. In the cultural context of safety programming, control and assignment of responsibility therefore appear as sub-goals, meaning that much of the cultural practice of safety programming has to be understood through these concepts. In game programming, the goal is to provide entertainment through the sub-goal of making the products fun to play. Since fun is a somewhat ambiguous concept, much of the cultural practice of game programmers revolves around trying to find out what fun is, in the eyes of potential players.

Software engineering theory is created for, and suited to, specific cultural contexts that are shaped by external hermeneutical demands, primarily the demands of bureaucracy and legal systems. This is particularly true for safety critical development but it also applies to ordinary software engineering, which in the main has its origin in projects for the U.S. Department of Defense and similar official bodies.6

In game programming, those particular bureaucratic and judicial demands are missing. To follow a process model like the V-model would be a waste of resources, unless, of course, the process model served some other specific purpose for the company. To insist in game programming on a professionalism of the same kind as that in safety programming would be pointless, since professionalism is a mechanism rather than a goal in itself, and software engineering professionalism has been created to serve goals other than those of game programming.

The purpose of software engineering practice – and safety critical practice in particular – is not essentially to create entirely new things, but largely to preserve the existing state of affairs, the status quo. Especially in safety critical practice, this is easily understandable because it is dangerous to make new, untested products.7 It is better to keep to the traditional way of doing things until it is absolutely certain that a new way of doing things is equally safe. For this reason, new inventions are allowed only if they are kept within strictly defined parameters, which are defined beforehand and permitted to be adjusted only within limits. Inventions that cannot be fitted within the predefined parameters will usually be rejected in order to err on the side of safety.

The practice of the conservation of the existing state of affairs is enforced conceptually by a hierarchical categorization, which is in turn the source of the parameters within which variation is allowed. An example of hierarchical categorization is the V-model: the overall category is a sequential process of phases; within each phase are inputs, outputs, and workflows; each input or output can again be divided into its constituent parts and so forth. In practice, conservation is enforced by rigorous procedures that ensure that things are done in a certain way according to the hierarchical categorization. The procedures are, for example, the step-by-step workflow descriptions for carrying out the V-model in practice, or the list of documents that must be written, verified, and approved in a certain order. The bureaucracies connected with software engineering exist, not for their own sake, but to uphold the tradition of engineering practice.8

Game programming is different in this respect, because tradition is experimented with quite freely. If a game oversteps the boundaries that players find acceptable, this will quickly become apparent in the market as the game will sell badly. Computer games are a form of entertainment, and in entertainment businesses in general it is not particularly valuable to preserve the existing state of affairs: customers demand constant innovation and change, which is why creativity is so highly valued within these industries. Of course, computer games do have to preserve some amount of tradition – games are expected to stay within broad genres. A platform shooter, for example, has to conform to the expectations of the platform shooter genre; but within these broad limits the more innovative games tend to be more successful.

We see that the hermeneutical interpretation in Chapters 4 and 5 supports the cultural practice analysis of the current chapter: safety critical programming practice is strongly bound to tradition. The hermeneutical circle of interpretation works very slowly in a safety critical context: the time span from a standard being interpreted to its consequences being carried out in practice, and from there to the consequences affecting the original interpretation, can be many years. The safety critical processes are thus slow to carry out in practice and slow to change. The strong emphasis on tradition and bureaucracy means that safety critical practice offers a precise and easy form of communication; but it is also inflexible, because it cannot accommodate thoughts outside the hierarchical categorizations, and it requires years of highly specialized training.

Game programming practice is, by comparison, fluid, because the timescale of the hermeneutical circle operating in practice is short – consequences of interpretation can often be seen in mere days or weeks. This means that the practice can change more quickly. The practice is seldom explicit, but lives in the actions and thoughts of the game makers: it is simply inefficient to write everything down. Consequently, the practice is seldom presented to outsiders other than in the form of the product – the game. The tradition that underlies game practice is more personal and informal, and the ability to contribute constructively and get involved is more important than formal qualifications. In this way the work is similar to that in other creative jobs, such as writing and design.

In safety critical programming as well as in game programming, we see that in the work practices, the concept of programming is subordinate to other concepts that more directly express the goals of the companies. In the case of safety critical development, programming is subordinate to machine production and its regulation. As we see in Section 5.4 (The safety standard), much of the vocabulary that is used in safety development comes from factory settings or from the legal sphere, and the terms used in safety development are usually object terms – they describe how to reach given goals, without questioning the interpretation of the goals to any great extent. In game development, programming is subordinate to entertainment, and entertainment is usually measured by how much fun a game is. The observations from Tribeflame indicate that the discussion about game development centres around terms that express the players’ experience of fun, and as such they are subject terms.

However, though programming in both practices is subordinate to the goal concepts of safety and fun, it is important to keep in mind that programming, in essence, is neither of these concepts. Though directed by the dominating goal concepts, programming has its own cultural characteristics that are not fully explained by the goal concepts. We will return to this point in Chapter 8 (Reflections).

1 See for example Jacobson et al. 2012 p. 9: “Software engineering is gravely hampered today by immature practices. Specific problems include: The prevalence of fads more typical of fashion industry than of an engineering discipline. …” Also Bryant 2000 p. 78: “Whatever the basis for understanding the term software engineering itself, software developers continue to be faced with the dilemma that they seem to wish to mimic engineers, and lay a claim for the status of an engineering discipline; but commercial demands and consumer desires do not support this.”

2  Lauesen 2002 p. 3.

3 Interview 16.

4 Interview 21.

5  Field diary, 30th August 2011, 11:25-12:15.

6 See Section 3.1 (Software engineering).

7 Bear in mind that much ordinary software engineering has its origins in a military context, and the military can indeed also be a dangerous work environment.

8  Should the bureaucracy become an end in itself, a dysfunctional work practice will result.

6.3 Forms of safety critical programming

So far we have primarily looked at safety critical programming as a single cultural form, which means that we have been looking at safety critical programming as a more or less undifferentiated, homogeneous practice. This was the case both in the analysis of safety critical programming provided in Section 5.5 and Section 6.2 above, which contrasted safety critical programming with game programming. Of course, as the example of Skov indicated (Section 5.3), safety programming practice is not a single, undifferentiated activity: in reality, it displays as much variety as any other cultural activity. In this section, we will look at how the concept of cultural form can be used to analyse safety critical programming in order to categorize and explain some of the differences that are visible in the practices as they are expressed in empirical data. Thus, this section is based on interviews with employees in 17 companies in the safety critical industry, which provided data that was suitable for form analysis (see the list of source material). The companies are briefly summarized in Figure 6.1.

Company | Size | Dept. size | Office location | Product or industry | Process
Skov | 300 | 45 | Glyngøre | Farming | Agile
(anonymous) | 25 | – | Germany | Automotive | Agile
Integrasys | 20 | – | Madrid | Satellites | Software engineering
FCC | 700 | 60 | Madrid | Simulation, planning | Software engineering
(anonymous) | 60000 | 100 | England | Research | Eclectic
Metso Automation | 700 | 5 | Finland | Valves | Software engineering
Wittenstein | 1800 | 15 | Bristol | Real-time OS | Safety critical
(anonymous) | 90 | – | Germany | Real-time OS | Safety critical
Cassidian | 31000 | 70 | Ulm | Aircraft | Safety critical
PAJ Systemteknik | 20 | – | Sønderborg | Medico, railway, sensors | Safety critical
(anonymous) | 28 | – | Finland | Satellites | Safety critical
PSA Peugeot Citroën | 200000 | 700 | Paris | Automotive | Safety critical
Delphi Automotive | 146000 | 300 | Wiehl | Automotive | Software engineering
Danfoss | 22000 | 2000 | Graasten | Electrical motors | Software engineering
(anonymous) | 800 | 250 | Germany | Automotive | Tool supported
Safe River | 10 | – | Paris | Consulting | Formal methods
Kone | 33000 | 85 | Chennai | Elevators | Software eng. / Agile

Figure 6.1 – Brief summary of the companies mentioned in this section, in the order in which they are encountered in the text.

To a reader unfamiliar with cultural research it might seem strange to say something about the practices of a whole industry on the basis of only 17 cases, and sometimes on the basis of just a single case. It must be noted that we are not really trying to say anything about the “average” company, whatever an “average” company might be. Rather, we are trying to say something about the possible forms of practices, and to that end a single case is more than sufficient, because it demonstrates what is possible in practice. This of course means that we are not guaranteed to end up with a complete picture of all possible forms of practice; however, when the reader encounters a new, hitherto unknown form of practice, he should be able to analyse the practice himself, with the help of the cultural concepts of form and practice and the examples provided by this chapter.

The majority of the companies that we are observing – 10 out of 17 – are required to follow the safety standards, and they have done so in the most obvious way, by building a bureaucratic organization that revolves around the standards and which has internalized the development models that the standards prescribe, both explicitly and implicitly. Below, we take a closer look at the variations in the way in which these companies work; but first, we examine the remaining group of companies, which for one reason or another are not required to follow safety standard development procedures. Although these companies are not subject to the dictates of the standards, they do not fall entirely outside the safety critical industries – each company is in its own way connected to safety critical development. Looking at these forms of development practice can, by contrast, tell us something important about the practices of companies that do follow the standards.

The first thing to notice is that, because of the costs, no company follows safety standards unless it is forced to. The farming systems company Skov is one example of a company that is unaffected by standard regulation. In Section 5.3 (A small farming systems company), we saw how Skov rejects traditional software engineering thinking and has adopted a less formal, Agile way of working. Testing is the only part of the process where Skov finds it worthwhile to maintain a formal work process. The circumstances that allow Skov’s form of work practice to flourish are unique among the companies studied here, in that Skov is a company that produces safety critical systems but operates in an industry that is not regulated by standards.

Another example of a company that escapes standard regulation is a small company of 25 employees that makes timing analysis software tools for the automotive industry.1 This company has built its development process around Scrum, an Agile methodology.2 As at Skov, testing is given high importance in the company’s process. But unlike Skov’s industry, the automotive industry is not free of regulation; on the contrary, it is heavily regulated by standards. The reason that the timing analysis company can escape regulation is that none of its software ends up in the finished cars: it is used by other companies as an aid in their own safety regulated processes.

This is a conscious strategy on the company’s part, and it allows the company to occupy a niche in the automotive industry without following safety standard procedures. Thus both Skov and the timing analysis company find themselves in circumstances where they are free to follow an informal, Agile work process; but whereas the absence of regulation in Skov’s case largely depends on factors outside Skov’s control, in the timing analysis company it is the result of a deliberate choice. Of course, this choice has consequences: the company is limited in the range of software it can offer, since its software cannot be used directly in safety critical hardware. Thus while the form of the company’s work process has advantages in some respects, such as the freedom to use an Agile process, it has disadvantages in others.

Integrasys is a small company of 20 employees that makes signal monitoring software for satellites and the aerospace industry.3 The company does not follow any standards: it is preparing itself to use IEC 61508, but has no experience so far. The company’s work process is a traditional software engineering phase-based process. The company does most of its work in large projects involving as many as 20 larger companies. Coordination with these companies dictates the working processes, and planning and communication take place mostly via software engineering Gantt charts.4

What makes Integrasys special is that, while it externally appears to follow formal, bureaucratic phase-based processes, the internal day-to-day planning is informal. Requirements are written in Microsoft Excel spreadsheets, and there is no formal evaluation – any experience that is built up is personal experience. Integrasys is unique among the studied companies in having a process that externally appears bureaucratic, but is informal internally.

The reason that this form is possible for the company is probably twofold. First, the company is quite small; a larger company would presumably need a more formal bureaucracy. Second, the company is subject only to the relatively lax requirements of ordinary software engineering processes, rather than the much stricter safety standard processes. In combination, these circumstances allow the company to have a software engineering process with very little bureaucracy. Of course, the fundamental principles in the company’s work processes are still those of software engineering, in contrast with those companies discussed above, which follow Agile principles.

FCC is a medium-sized company with 700 employees, which produces simulation and mission planning systems for military and civil authorities.5 Most of the software is made in the systems and telecommunications department, which employs 60 people. The company is not normally subject to safety standards; it has recently begun its first safety critical project. Safety critical development has been found to be more work than expected, although not particularly difficult. Normally, FCC’s customers dictate the work processes. The processes are all software engineering processes and the core of the company’s form of work is military software engineering standards.

This company is different in one important respect from all the companies in this study that follow safety standards. Companies that follow safety standards are usually very conscious about their working processes and frequently evaluate and revise them, but FCC has not changed or proposed improvements to its methodology in 11 years. Rather, if there are problems, the company postpones its deadlines and milestones. This is in sharp contrast with companies following safety standards, which are dedicated to meeting their deadlines meticulously.

Other companies do not have the option of postponing their deadlines – so how can this be possible for FCC? Part of the answer is undoubtedly that the company follows ordinary software engineering standards, which, though strict, are relatively lax compared to the safety standards. The other part of the answer might be that FCC primarily delivers to the military and civil authorities, and these kinds of public institutions are well known for suffering delays and deadline postponements. Indeed, though delays are generally viewed as the worst kind of problem within software engineering, the origins of software engineering lie in part in attempts to bring the delays in public institutions’ projects under control.6

The next example is a research company made up of fewer than 100 people, which is part of a larger engineering group that employs 60,000 people.7 The company makes proofs-of-concept, mash-ups and demonstrators in order to investigate the feasibility of proposed new engineering solutions. Some projects are as short as four weeks; some are longer. The work processes are continually adapted to whatever the particular customer or project demands, and so the company works with a number of different software standards, some of which are safety critical standards. According to the employee whom I interviewed, the best way of working is for the employees themselves to decide how to do their work. That is both more efficient and more motivating than safety critical work processes, in which people become experienced and efficient in working with the standards only after a long period of time. This employee also expressed the view that all software processes are exactly the same, whether they are Agile or safety critical.

The research company demonstrates a way of thinking that departs both from common software engineering thinking and from Agile thinking. Instead, it is much more in line with the thinking that follows the traditions of computer science, described in Section 3.3, where there is a marked emphasis on the creativity and insight of the individual. A work process is not seen as something shared, but rather as something private.

This style of thinking fits the form of the research company exactly, because it is not overly concerned with long term efficiency and the cost competitiveness of production. It has much more in common with the kind of scientific and experimental mentality that lies behind the computer science tradition. The company is partially subject to safety standards and is, in many ways, in the same circumstances as companies that follow the safety standard tradition. However, the company pursues research, not cost-effective safety – and this difference in goals makes a difference in the form of its work processes.

Our final example of a company that does not conform to safety critical standards is Metso Automation, a company of about 700 people that makes automatic valves for the chemical process industries.8 Only four or five people make the software to control the valves.9 Because the company has non-software solutions for ensuring the safety of the valves, the certification agency TÜV ranks their software as not safety critical and thus Metso does not need to follow those standards.

The software developers are trying to follow a traditional software engineering V-model process with requirements, reviews of requirements, phases, inputs, and outputs. However, they are struggling to do so. The process is not written down in its entirety anywhere, and only about 10 per cent of the software is covered by tests. Furthermore, the software architecture does not support module testing, which leads to problems in the work process. The company is reluctant to spend time and resources on improving the software because it is not seen as crucial to product quality. Hence the software process is allowed to continue to flounder.

What we see in Metso is a software department that tries to use a form of programming that is not really suited to its circumstances. The department lacks the necessary company support to build the bureaucracy that is needed in order to work the way it wants to, but it is either unwilling or unable to follow through and abolish the V-model way of working altogether. This is perhaps because those working in the software department do not know of any other tradition for software development, or perhaps because there are too few of them to build a new work tradition in the company. The underlying problem, however, is that the software department is just a tiny part of a much larger company that is entirely dominated by hardware engineering. Since the company views itself as a hardware company, software is simply not seen as something that can threaten the well-being of the company, and therefore the process problems are allowed to persist. For the time being, the company seems to be correct in this assessment.

Next, we will look at a range of companies that are subject to safety critical standards and conform to them both in deed and in thought. We have already discussed Wittenstein Aerospace and Simulation, a small company of about 15 employees.10 The company’s main product is an operating system for use in flight software. The company is strictly bureaucratic, conforming to the safety standards. All work is organized around the documents and phases required by the standards. Another slightly larger company of 90 employees makes an operating system for embedded software in general.11 Its working process is similarly bureaucratic and strictly conforming to the standards.

Wittenstein and the larger company have in common that they produce operating systems, which cannot in themselves be certified according to the standards because they do not constitute a complete safety application. Accordingly, the certification is actually done by the companies’ customers. This means that it is important to the companies that their customers understand the products and how the work processes conform to the standards. An engineer and project manager from the larger operating system company says that:

“ … they need a certain understanding, the customer, because they cannot work with it if they do not understand; and if we deliver some artefact – some document or things like that – the customer needs to understand it because he has to take these documents and work with them inside his [own] company … ” 12

This emphasis on the need for customers properly to understand the products is particular to the operating systems companies, because they cannot themselves complete the certification process. This phenomenon is not only found in software companies: the hardware company Infineon, which produces integrated circuits, faces the same challenge.13

In Section 5.2 (A large avionics company), we saw a detailed account of the work process of Cassidian. The company works with a large number of standards, and in an attempt to cut down the ensuing confusion it has developed an internal standard that combines the elements of all the standards with which it works. The internal standard is also part of an attempt to streamline the processes inside the company. Cassidian’s size makes it powerful enough to try to influence its customers, and its desire is to impose its own standard on its customers, in place of the various standards that the customers demand.

PAJ Systemteknik is a small company of 20 people that works as a subcontractor and assembles equipment for major companies such as Siemens, MAN, and Honeywell.14 The company works in the areas of medical industries, railway, and safety of machinery. PAJ also deals with a large number of different standards that are dictated by its customers. Unlike Cassidian, however, PAJ does not have the size to influence its customers. The company therefore follows another strategy for trying to reduce confusion: its ambition is to develop a “self-certifying” platform for its products – that is, a set of procedures that, if followed, will guarantee that the resulting product can be certified. Whether the strategies employed by Cassidian and PAJ will work remains to be seen, but it is interesting to note that differences in circumstances cause the companies to react in different, yet similar, ways to the same challenge: the proliferation of standards.

A further example is a small company of 28 people that makes control software for satellites.15 Like PAJ, this company works as a subcontractor and has its work processes imposed on it by customers. The standards in the industry are dictated by the European Space Agency. The company has growing business in other industries, such as nuclear, railway, production and medical industries. This part of the business is becoming more important, and consequently the company is working with an increasing number of standards in addition to the space standards. These non-space standards are perceived by the company as using the same concepts as the space standards, but applied in a slightly different way. An engineer from the company explains:

“Our processes are mostly derived from the European space standards. When we are to work in [non-space] industrial applications, well, it is a variation of that. So, it’s not a completely different story, it’s more or less the same concepts applied in a slightly different way. The names of some things are different; maybe you do some things before, some things after; some things more or some things less – but it’s a variation of what we already know.” 16

Like PAJ, this company is trying to control the challenge of working with a number of standards; but unlike PAJ, it is not trying to create a single procedure that fits all kinds of standards. Rather, it identifies what is common in the standards, and thinks of the different standards as variations on what they already know: a strategy that presumably makes it easier to deal with the differences.

PSA Peugeot Citroën is a very large European car manufacturer. It employs 200,000 people, half of them in France.17 In many respects, the company operates in circumstances similar to those of Cassidian. PSA Peugeot Citroën is a large, highly bureaucratic and tightly controlled organisation. The planning that goes into producing a new model of vehicle is comprehensive and very strict; milestones and deadlines absolutely have to be obeyed. There are also some interesting differences – where Cassidian tries to streamline and centralize its working processes by developing an internal standard, PSA instead allows different parts of the company to have their own processes and traditions, or, as an engineer from the company puts it, their “historical reasons to work in a certain way”. The reason for this is that “if they work in a certain manner they also have some good reason.” 18 The company has a department of innovations that makes suggestions about changes in the work processes of the different departments, in close cooperation with the departments in question. This process can take several years and again shows that PSA’s approach is much less centralized than Cassidian’s.

Cassidian tries to affect the standards to which it is subject by making its customers accept its own internal standard. PSA also affects the standards, but in a different way. The company is of such size that it has representatives in the ISO committee that authors the standards, and PSA can thus influence the standard to make it accord better with the company’s wishes. The company also uses some strategies for reducing the complexity of working with the standards. The main software component of a car19 is consciously kept at a low level of safety criticality.20 This means that there are important safety components that must instead be taken care of in other parts of the car, but it simplifies the work on the software component. Another strategy is to allow subcontractors to take care of the safety critical aspects of components. PSA could, in principle, safety engineer the components itself, but it simplifies the work process to have trusted subcontractors do this.

The following two examples differ from the others used in this study in that their work processes do not spring directly from the safety standards, but instead have their origins in general software engineering theory that is adapted to fit safety critical purposes. Delphi Automotive is a global company with around 146,000 employees.21 It makes, among other things, embedded control systems for cars. The company’s software processes derive from the SPICE standard,22 which is a software process standard that is not concerned with safety: it has been chosen because of customers’ requirements. The company has a globally defined standard process that is tailored locally in the individual departments and to each project. The form of the work process thus mixes a centralized and decentralized approach. Local departments deal with as many as 30 different safety standards and other legal requirements. The company is large enough that it is able to influence the standards to which it is subject; the German part of the company is a member of the German working group for the ISO standards committee.

Danfoss Power Electronics is a subsidiary of Danfoss, a company of 22,000 people that makes pumps and other hardware.23 Danfoss Power Electronics makes electrical motors and the software to control them, and has around 100 software developers. The company follows a work process of its own devising, which is an elaborated version of an ordinary software engineering iterative waterfall model. The company has a slightly different version of its software process for each of its three software product lines, because standardizing the process to have “one-size-fits-all” is not deemed to be worth the effort it would take. Since the processes have not been made with safety in mind, the company needs to interpret the process steps from the IEC 61508 standard to match its own process steps whenever a product needs to be certified.

An interesting detail is the way the company keeps track of its software requirements. Currently, the requirements are linked directly to the software source code simply by putting comments in the code. But the company is considering adding an intermediate layer of “features”, such that requirements would be linked to features and features in turn would be linked to the source code. In that way it is easier to keep track of functionality that is spread out in the source code. The programmers would then arguably have to be more aware of exactly which feature they are working on. This is an example of how demands can shape the programming work process; in this case, bureaucratic demands rather than demands arising from the programming itself.
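The two-level traceability scheme under consideration can be sketched as a pair of mappings: requirements point to features, and features point to the code locations that implement them. The sketch below is purely illustrative – the requirement identifiers, feature names, and file locations are invented, not taken from Danfoss’s actual system:

```python
# Hypothetical sketch of two-level requirements traceability:
# requirements -> features -> source code locations.
# All identifiers below are invented for illustration.

# Each requirement is satisfied by one or more features.
requirement_to_features = {
    "REQ-101": ["motor-soft-start"],
    "REQ-102": ["motor-soft-start", "overcurrent-trip"],
}

# Each feature maps to the code locations that implement it,
# replacing direct requirement comments scattered through the code.
feature_to_code = {
    "motor-soft-start": ["ramp.c:42", "pwm.c:107"],
    "overcurrent-trip": ["protection.c:13"],
}

def trace(requirement):
    """Return all code locations that implement a requirement."""
    locations = []
    for feature in requirement_to_features.get(requirement, []):
        locations.extend(feature_to_code[feature])
    return locations

print(trace("REQ-102"))
```

The point of the intermediate layer is visible even in this toy: when a feature’s implementation moves in the source code, only `feature_to_code` needs updating, while the requirement links remain untouched.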

The following two examples illustrate the inherent conflict between creativity and the safety standards’ requirements for documentation and control, which was discussed in Section 5.5. The two companies simply solve this conflict by keeping creative innovation apart from the safety critical process. The first company employs 800 people and makes software for car controllers24 based on a software platform for the car industry called AUTOSAR,25 which is jointly developed by car manufacturers and others.26 Most of the requirements for the products come from the specification for AUTOSAR, which changes regularly. The work process is heavily supported by software tools. The company only does certification when demanded by customers, and only on well-defined components – if new features are needed, a technical solution is found before safety is addressed, as one manager explains:

“If we are discussing a new feature which requires a new technical solution, maybe as an additional product component, then at a first stage we are trying to solve the technical matters, and then we are going to assure that this will fulfil all the [safety] requirements … we are not happy with doing it the other way around, which means definition of a big process, then breakdown of requirements, and in the end doing a technical solution.” 27

Safe River is a consulting firm of 10 employees in the aeronautics and railway industries.28 The consultants participate in customers’ work processes, helping them to use formal methods: a collection of very demanding and costly methods that are used only for the highest levels of safety critical categorization. The founder of Safe River explains that the innovative part of doing a new product should be kept apart from the work process that conforms to safety standards:

“Suppose the system is completely new and you don’t have any experience and so on – you must study and do the proof of concept before you enter the [safety] process itself.” 29

However, she emphasizes that even if the creative part takes place before the safety process is engaged, it is necessary at all times to be aware of the safety requirements that the product must eventually fulfil:

“You have some phases which are more experimental when you must do the proof of concept, but some people do not take into account safety constraints at this stage and afterwards, when they go to the real development phase, there are conflicts between the constraints, which have not been taken into account, and the proof of concept itself, and in this case it can be very very expensive to go back to the first phase.” 30

This last comment shows that although it is in principle a feasible form of practice to separate innovation and fulfilment of the safety standards, it is not always so easy to do in practice.

The final example given here is an interesting hybrid between the companies that conform to the safety standards and the companies that avoid or work around them in some way or another. Kone is a company of 33,000 employees that makes and installs elevators worldwide.31 The 85 employees in the software department make the controllers for the elevators. Kone uses an Agile methodology, Scrum, for projects in which it develops something completely new. For projects that add features to existing software, and for safety critical projects, Kone uses a traditional iterative waterfall approach. The desire to use an Agile process came from the software developers rather than managers, which is unusual: the company’s software developers normally do not initiate process changes themselves.

Interestingly, Kone combines two forms of safety programming that we have otherwise seen used in a mutually exclusive way in separate companies: an Agile form that does not conform to safety standards, and a software engineering form that does. In Kone these forms exist side by side, not only within the same company, but within the same department. This example illustrates a point made by the economist R.H. Coase in his article “The Nature of the Firm”: that the exact delineation of which tasks belong within one and the same company is, in essence, an economic and therefore a cultural question.32 That is: what constitutes “a company” cannot be taken for granted; it is always possible that some task within the company is better left to a subcontractor, and conversely it is always possible that two smaller companies could better be combined into a single entity.

When, in the previous section, we looked at safety critical programming as a form in contrast with the game programming within Tribeflame, we perceived safety critical programming as a fairly homogeneous form of culture with distinct features. In this section we have taken a closer look at safety critical programming forms and seen that, even within this specific form of programming, there is ample diversity in approaches. This reveals that, while it is possible to identify some general traits of programming, it is equally important to be aware of the context, because it is not possible to identify the form of an example of programming without taking the context into account.

We have also seen how the same form can appear in vastly different circumstances, such as in Skov and in the small German company that makes timing analysis software tools: both employ an Agile form of programming, but while Skov operates in an industry without safety standards, the other company operates in the automotive industry, which is heavily regulated by standards. Also, we have seen examples of companies that operate in similar circumstances but choose different forms to survive: Cassidian, which has a very centralized process form in which one standard is made to fit all processes; and Danfoss Power Electronics, which considers it inefficient to make one process fit all software product lines.

1 Interview 19.

2 See Section 3.2 (Agile software development).

3 Interview 11.

4 See Figure 3.7 in Section 3.1.3 (Process thinking).

5 Interview 28.

6 See Section 3.1.2 (Frederick P. Brooks).

7 Interview 26.

8 Interview 22.

9 The valve control software is technically firmware, and the company refers to it as such.

10 Interview 25. See Section 5.5.

11 Interview 27.

12  Ibid.

13 Interview 31.

14 Interview 14.

15 Interview 13.

16  Ibid.

17 Interview 23.

18  Ibid.

19 The Electronic Control Unit (ECU).

20 Called ASIL B, the second lowest safety level, excluding components classified as not safety critical.

21 Interview 15.

22 ISO/IEC 15504: Software Process Improvement and Capability Determination.

23 Interview 10.

24 Electronic Control Units (ECUs).

25 Automotive Open System Architecture.

26 Interview 18.

27  Ibid.

28 Interview 21.

29  Ibid.

30  Ibid.

31 Interview 29.

32  Coase 1937.