Chapter 12

Safety Management in Airlines

Arthur Dijkstra

Introduction

Aviation has developed from the days when ‘the aircraft was made of wood and the men of steel’ to a situation where ‘the aircraft is made of plastic and operated by computers’. This refers to the many facets of change in the domain of aviation (which, of course, also apply to other industries). Aviation has grown into a highly complex and dynamic industry where the margins for profitability are small. Airline companies maximise production at minimal cost while trying to avoid accidents. Competition is fierce and cost cutting is the focus for economic survival. The environment to which an airline has to adapt is constantly changing. Airlines form network organisations and specialise in specific markets; rules and regulations are changing due to multilateral agreements in which national laws are replaced by international regulations; and the technological change of recent years has influenced aircraft operations and airline management strategies considerably.

In the early years, the men of steel had to deal with unreliable technical systems on board their aircraft. Accidents in those days were attributed to technical failure. This was a widely accepted cause, since technical reliability was not very high. Over the years, systems became more reliable, accurate and complex. Slowly, explanations of accident causation shifted from technology to the human operator. The prevailing view was that if the machine performed as designed, it must have been the human who failed. The conclusion of ‘human error’ as a cause was accepted for some time, but now, increasingly, human error is the starting point of an investigation rather than its endpoint. Organisational as well as technological aspects have come into focus as contributing factors in accidents. This followed from the recognition that human action does not occur in a vacuum but in a context of organisational and technological factors.

This is the background against which airline safety management has contributed to a safety level of about one accident per million flights. Other main parties that contributed to safety improvement are the aircraft manufacturers, Air Traffic Control services and regulators, but they are not discussed here any further. The rate of safety improvement has slowly declined and asymptotically approached its current apparent limit. I will discuss the developments in safety management, supported by interviews with accountable managers and safety practitioners. Some problems will be indicated, and a wish list on how to move towards resilience engineering will be proposed.

How Safe is Flying?

To illustrate the safety of air travel the Boeing website gives the following data (Boeing 2005).

In 2000, the world’s commercial jet airlines carried approximately 1.09 billion people on 18 million flights (Figure 12.1), while suffering only 20 fatal accidents.

In the United States, it’s 22 times safer flying in a commercial jet than travelling by car, according to a 1993-95 study by the U.S. National Safety Council. The study compares accident fatalities per million passenger-miles travelled. The number of U.S. highway deaths in a typical six-month period (about 21,000) roughly equals all commercial jet fatalities worldwide since the dawn of jet aviation four decades ago. In fact, fewer people have died in commercial airplane accidents in America over the past 60 years than are killed in U.S. auto accidents in a typical three-month period. For the year 2000, 41,800 people died in traffic accidents in the U.S. while 878 died in commercial airplane accidents.

Image

Figure 12.1: Accident rate history. Source: Boeing (2005)

A comparison between transport modes is shown in Figure 12.2.

Current Practices in Safety Management

ICAO (International Civil Aviation Organisation), IATA (International Air Transport Association), the FAA (Federal Aviation Administration) in the United States and the JAA (Joint Aviation Authorities) are the bodies that regulate safety management in airlines.

An important issue in safety management is the place of the safety department in the organisation. This place influences the ‘span of control’ in the organisation through the number of department boundaries that have to be crossed horizontally or vertically. Regulations allow two ways of implementing a safety office in an airline organisation. One option is that the flight safety department is directly connected to the board of directors and reports directly to the COO (Chief Operating Officer); consequently the department has a companywide scope on safety issues. The other is that each main organisational branch has its own safety department. The different safety departments then have to coordinate safety issues that cross organisational boundaries. Directors of the safety departments report to the president of the branch and always have bypass access to the COO. Safety discussions that cross organisational boundaries horizontally, like flight operations talking to aircraft maintenance, can reveal different interpretations of safety issues. For example, an engine that failed during ground operation is regarded by the maintenance department as a purely technical failure with no impact on flight safety, while the flight operations department regards it as a flight-safety-related failure, because it considers what would have happened if the aircraft had just become airborne.

The practices of safety management are more or less the same across airlines. Airline safety management systems have to comply with FAA, JAA and IATA regulations, which are audited by audit programs. Leverage points for change in safety practices can thus be created by the changes in the regulations.

By analysing the practices at the safety department and the way information is used in the organisation to deal with risk and safety matters, the underlying theoretical concepts and models of risk and safety that these practices represent can be extracted. These models are not made explicit in the FAA, JAA, ICAO or IATA documentation, nor in the safety department. People working at the safety department, including management, have no clear picture of why they do what they do: practitioners and regulations do not refer to any underlying theoretical foundation for the practices. Comments on this observation are that it has always been done more or less like this and that there are no alternative safety management practices, just variations on common themes. Progress in safety, beyond the current safety level, cannot be expected to be initiated by the airlines themselves. They lack the resources, such as time and knowledge for innovation, so help is needed from, e.g., the field of Resilience Engineering to supply alternatives to current practices. Since airlines follow the regulators’ requirements, regulations that include resilience engineering concepts might support progress on safety.

Data Sources that Shape the Safety Practices

Work in a safety department consists of data collection and organisation, composing management reports, conducting investigations into accidents or incidents and feeding line management with safety related information. The different sources are listed and later an example will be discussed on how information is presented to management. Various sources of data are used and combined in the attempt to create a picture about current risks and safety threats.

The confidentiality of operators, such as pilots, has consequences for data collection. Protocols for data collection and usage have been agreed with the pilots’ unions. Pilots’ names may never become known outside the safety department. In addition, data may never be used against pilots unless there is gross negligence or wilful misconduct (which is, of course, hard to define). Such pilot protection supports the willingness to report: the fear of blame is minimised, and more reports about safety matters will likely be submitted to the safety department.

Air Safety Reports. A critical issue in data collection from reports is which events are reported and how. The commander of an aircraft may always use his own judgement on what events to report, supported by an operating manual that states which events must be reported. Airlines define incidents as occurrences associated with the operation of an aircraft which affect or could affect the safety of operation. A partial list defining incidents is shown in Figure 12.2.

Image

Figure 12.2: Examples of incident description

Some occurrences are also triggered by other data collection methods (e.g., automatic aircraft data registration), which will be mentioned later. When an event is not triggered by another data collection method, the issue of reporting culture becomes important. Reporting is an aspect of a safety culture (Reason, 1997). In a good safety culture, reporting of safety threats is welcomed and supported. Conversely, relevant safety information may never reach the safety department when pilots are unwilling to report or see no use in reporting.

Currently, reports are paper forms filled out by pilots. In the future, on-board computers may be used for filling out the report. The boxes ticked and comments made should give a representation of the event. The contextual description of the event covers aspects such as time, altitude, speed, weather, aircraft configuration, etc. The event description itself is a block of free text, and the information to include is up to the writer.

Some pilot reports are very short and give minimal information. This may give the reader the impression that the report was written just to comply with the rules and that the pilots did not regard the event as serious. Professional pride may also make pilots reluctant to submit long reports explaining their own (perceived) imperfect work, and some pilots may fear negative judgement of their performance. Incidents are assessed at the safety office, and pilots may be invited to see a replay of the flight data if the safety officer so decides. For this process a protocol is established between the pilot unions and the airlines. This process will not be the same across all airlines, since it is a delicate issue for the unions because of the fear of pilot prosecution. Recent developments fuel this fear. In Japan, a captain was prosecuted when a flight attendant was injured, and eventually died, after her flight had gone through heavy turbulence. In the Netherlands, two air traffic controllers were prosecuted when an aircraft taking off almost collided with an aircraft crossing the same runway. It takes a long time before such fears fade away. There is a saying among pilots that they do not want to face the ‘green table’: a ghost image of being held responsible for some mishap on the basis of hindsight. There is a feeling of unfairness in that pilots sometimes have to make their decisions in split seconds, while the investigation committee can take months and has knowledge of the outcome.

Air safety reports are kept in specially designed database programs. Each event is assigned a risk level according to the IATA risk matrix (Figure 12.3) or a matrix developed by the airline itself.

Image

Figure 12.3: IATA risk matrix

Every day many reports are processed, and the assignment of a risk level is a judgement based on experience. There is no time for a methodical approach to risk assessment for every event; this is not considered a shortcoming. Some airlines use a 3 by 3 matrix for risk assessment, and when asked whether so few levels oversimplify the assessment, the answer given was a practical one: ‘you can’t go wrong by more than two scales’, meaning the assessment is almost always nearly correct.
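As a minimal sketch of how such a matrix-based assignment works, a risk level can be read off from a judged likelihood and severity. The scale labels and cell values below are illustrative assumptions, not the actual IATA matrix of Figure 12.3:

```python
# Hypothetical severity-by-likelihood risk matrix; every label and cell
# value here is an illustrative assumption, not the real IATA matrix.
RISK_MATRIX = {
    ("improbable", "minor"): "low",
    ("improbable", "major"): "low",
    ("improbable", "catastrophic"): "medium",
    ("occasional", "minor"): "low",
    ("occasional", "major"): "medium",
    ("occasional", "catastrophic"): "high",
    ("frequent", "minor"): "medium",
    ("frequent", "major"): "high",
    ("frequent", "catastrophic"): "high",
}

def assess_risk(likelihood, severity):
    """Return the risk level for a judged likelihood/severity pair."""
    return RISK_MATRIX[(likelihood, severity)]

print(assess_risk("occasional", "catastrophic"))  # high
```

In practice, as noted above, the lookup itself is trivial; the judgement lies entirely in choosing the likelihood and severity inputs, which is done from experience rather than method.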

With an interval of several weeks, trend reports are made, and the reports are assessed by a team consisting of safety officers and a representative from the aircraft type office, e.g., a flight technical pilot. The risk level might be reassessed with expert judgement, and further possible actions on specific reports and possible trends are discussed. The challenge in these meetings is to distinguish signal from noise with respect to threats to safety. How can attention be given, in a smart way, to the events that have the potential to become incidents or even accidents? What are the signals that indicate a real threat, and how can the evaluation of the 100 to 200 reports per month in an average airline be managed?

In addition to risk analysis, reports are also categorised. The categories used to classify the air safety reports are a mix of genotypes and phenotypes. There is, e.g., a category ‘human factors’, which is a genotype, inferred from data in the report, and a category ‘air proximities’ (insufficient separation between aircraft), which is a phenotype, the consequence of something. The problems caused by mixing genotypes and phenotypes, such as mixing causes and effects, are discussed by Hollnagel (1993a).

Accident or Incident Investigations. The ICAO emphasises the importance of analysis of accidents and has published an annex, which is a strongly recommended standard, specifically aimed at aircraft accident and incident investigations. ICAO requests compliance with its annexes from its member states, of which there were 188 in 2003. The ICAO document with its annexes originates from 1946, and today ICAO Annex 13 is used as the world standard for investigations conducted by airlines and by national aviation safety authorities such as the NTSB (National Transportation Safety Board).

Annex 13 provides definitions, standards and recommended practices for all parties involved in an accident or incident. One specific advantage from following annex 13 is the common and therefore recognisable investigation report structure along with terms and definitions.

Annex 13 stresses that the intention of an investigation is not to put blame on people but to create a learning opportunity. Seeking to blame a party or parties creates conflict between investigations aimed at flight safety, focussed on the why and how of the accident, and investigations conducted by the public prosecutor, aimed at finding whom to blame and prosecute. Fear of prosecution reduces the willingness of the actors in the mishap to cooperate and tell their stories. Conflicting investigation goals hamper the quality of the safety report and reduce the opportunity to learn.

To classify an event as an accident, the ICAO annex is used. The ICAO definition states that an accident occurs when a person is injured on board, when the aircraft sustains damage or structural failure that adversely affects its structural strength, performance or flight characteristics and would require major repair (except for a contained engine failure), or when the aircraft is missing. This definition does not lead to much confusion, but the definition of a serious incident is more negotiable; see Figure 12.4.

Image

Figure 12.4: ICAO definitions

Because the definitions offer little support, safety officers decide on the basis of their knowledge and experience how to classify an event. Based on the ICAO event categorisation and/or the risk level assigned, the flight safety manager and chief investigator will advise the director of flight safety whether to conduct a further investigation of the event.

Accidents are often investigated by the national transport safety authorities in collaboration with other parties such as the airline concerned and the aircraft and engine manufacturers. An incident investigation in an airline is conducted by several investigators, of whom often at least one is a pilot (all working for the airline), often complemented by a pilot union member who is also qualified as an investigator. Analysis of the collected accident data often occurs without a methodology and is based on the experience and knowledge of the investigating team. This means that the qualifications of the investigators shape the resulting explanation of the accident. An investigation results in a report with safety recommendations aimed at preventing subsequent accidents by improving, changing or inserting new safety barriers. The responsibility of the safety department stops when the report is delivered to line management. Line management has the responsibility and authority to change organisational or operational aspects to prevent re-occurrence of the accident by implementing the report’s recommendations.

It is repeatedly remarked that (with hindsight) it was clear the accident or incident could have been avoided, if only the signals that were present before the accident had been given enough attention and had led to preventive measures. Is this the fallacy of hindsight, or are there better methods that can help with prevention?

Flight Data Monitoring and Cockpit Voice Recorder. Another source of data consists of the on-board recording systems in the aircraft, which collect massive amounts of data. The flight data recorders (the ‘black boxes’) record flight parameters such as speed, altitude, aircraft roll, flight control inputs, etc. The aircraft maintenance recorders register system pressures, temperatures, valve positions, operating modes, etc. The flight data and maintenance recordings are automatically examined after each flight to check whether parameters were exceeded. This is done with algorithms that combine parameters to detect, e.g., an approach to land with idle power. Making these algorithms, which are aircraft specific, is a specialised job.
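The parameter-combining logic can be sketched roughly as follows. The parameter names, thresholds and rules below are illustrative assumptions; real exceedance algorithms are aircraft specific and far more elaborate:

```python
# Illustrative sketch of flight-data exceedance detection: each rule
# combines several recorded parameters, as in the "approach with idle
# power" example. All names and thresholds are assumptions.

def idle_power_approach(sample):
    """Flag an approach flown at near-idle thrust below 1000 ft."""
    return (sample["phase"] == "approach"
            and sample["radio_altitude_ft"] < 1000
            and sample["engine_n1_pct"] < 30)  # near-idle thrust

def excessive_approach_speed(sample):
    """Flag excessive speed relative to target below 500 ft."""
    return (sample["phase"] == "approach"
            and sample["radio_altitude_ft"] < 500
            and sample["airspeed_kt"] > sample["target_speed_kt"] + 20)

RULES = {
    "idle power approach": idle_power_approach,
    "excessive approach speed": excessive_approach_speed,
}

def scan_flight(samples):
    """Return the set of exceedance events triggered during one flight."""
    events = set()
    for sample in samples:
        for name, rule in RULES.items():
            if rule(sample):
                events.add(name)
    return events
```

A post-flight job would feed each flight’s recorded samples through `scan_flight` and forward any triggered events to the safety database, where they join the pilot-reported occurrences.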

The cockpit voice recorder records the last 30 minutes of sounds in the cockpit. This is a continuous tape which overwrites itself so that only the last 30 minutes are available from the tape. The sound recording stops when a pilot pulls the circuit breaker to save the recordings or when the aircraft experiences severe impact during a crash. Data from the cockpit voice recorder are only used in incident or accident investigations and the use of information from this tape is strictly regulated by an agreement between the airline and the pilot union to preserve confidentiality of the pilots.

Quality and Safety. JAR-OPS (Joint Aviation Requirements for Operations) states that the quality system of an operator shall ensure and monitor compliance with, and the adequacy of, procedures required to ensure safe operational practices and airworthy airplanes. The quality system must be complied with in order to ensure safe operations and the airworthiness of the airline’s fleet. This statement is indicative of a perspective in which safety and quality have a large overlap, and in which safety is almost guaranteed as long as quality is maintained.

Quality management is currently regarded as a method of proactive safety management. While accident and incident investigations are re-active, quality audits and inspections are pro-active. Quality audits compare organisational processes and procedures as described in the company’s manuals with the rules, regulations and practices as stated in, e.g., the JAR or IOSA. Company rules and procedures should comply with these standards and should be effective. The measure of effectiveness is based on a ‘common sense’ approach (remark by an IATA-accredited auditor) and comparison with best practices at other airlines. Auditors are mostly not flight safety investigators, and vice versa. Quality inspections are observations of people at work and comparisons of their activities with the rules and regulations as described in the company manuals. Audit observations and interpretations are thus clearly not black-and-white issues but are, to a certain extent, negotiable. Audits and inspections lead to recommendations for changes to comply with rules and best practices.

Airline Safety Data Sharing. Safety data is shared among airlines, as reported in the GAIN (Global Aviation Information Network, 2003) work package on automated airline safety information sharing systems. Several systems for data exchange exist with the aim of learning from other airlines’ experiences. No single airline may have enough experience from its own operations for a clear pattern to emerge from its own incident reports, or it may not yet have encountered incidents of a type that other airlines are beginning to experience. In some cases, such as when introducing a new aircraft type or serving a new destination, an airline will not yet have had the opportunity to obtain information from its own operations. The importance of sharing information on both incidents and the lessons learned from each airline’s analysis of its own safety data is therefore becoming more widely recognised.

When, for example, an airline has a unique event, the safety officer can search the de-identified data for similar cases. If useful information is found, a request for contact with the other airline can be made to the organisation supervising the data. Via this organisation, the other airline is asked whether contact is possible. This data sharing has been very useful for exchanging knowledge and experience.

Safety Organisation Publications. Organisations like the FSF (Flight Safety Foundation), GAIN and ICAO organise seminars and produce publications on safety-related issues. Periodical publications are sent to member airlines, where they are collected and may serve as a source of information when a relevant issue is encountered. Daily practice shows that, due to work pressure, little or no time is spent reading the information distributed through these channels. The documents are stored and kept as a reference for when specific information is required.

As a summary, an overview of the mentioned data sources is supplied in Figure 12.5.

Safety Management Targets

Even though airlines may not market themselves as ‘safe’ or ‘safer’, safety is becoming an aspect of competition between airlines. This can be seen from the several websites that compare airlines and their accident history. The disclaimers on these sites indicate that such ratings are just numbers, and that the numbers give only a partial picture of the safety level of an airline. The way the data are compiled could tell a lot about the validity of the ranking numbers, but the general public might not be able to make such judgements. Still, there is a ranking, and it could influence people’s perception of safe and less safe airlines.

The term ‘targets’ is confusing if the number is higher than zero. Does an airline aim for, e.g., five serious incidents? No: in safety, the word ‘target’ has more the meaning of a reference. This is what I understood when discussing it with safety managers, but still the word ‘target’ is used.

Safety targets can be defined as the number of accidents, sometimes divided into major and minor accidents, serious incidents and incidents, and this number is indicative of safety performance. The definitions used for event categorisation are very important, because they determine in which category an event is counted. As a reference for definitions, ICAO Annex 13 is used. These definitions are widely accepted, but interpretations, as in which category an occurrence is placed, are often highly negotiable. A large list of event descriptions is published in Annex 13 to facilitate these categorisation discussions. The trap here is that events are compared with items on the list on phenotypical aspects, i.e., how the event appears from the outside, while the underlying causes, or genotypes, may make the events qualitatively different. This makes the judgement of severity, which determines the classification, very negotiable.

Image

Figure 12.5: Overview of data sources

The target value for accidents is zero, but some airlines distinguish between major and minor accidents and have a non-zero value for minor accidents. The target values for serious incidents and incidents are often based on historical data; the number from the previous year is then set as the target.

An airline’s current safety status can be expressed as counts of occurrences. Occurrences are categorised into a fixed number of categories, assigned a risk level, and published in management reports together with historical data to show possible trends. Quantitative indicators may, in this manner, lead to further investigations and analyses of categories with a rising trend, while absent or declining trends seldom invoke further analysis.
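This quantitative trigger can be sketched in a few lines. The category names, the counts and the criterion of three consecutive monthly rises are illustrative assumptions, not any actual airline’s rule:

```python
# Sketch of a count-based trend trigger: flag categories whose monthly
# occurrence counts rose in each of the last `window` intervals.
# Categories, counts and the criterion are illustrative assumptions.

def rising_categories(history, window=3):
    """history: {category: [monthly counts, oldest first]} -> flagged names."""
    flagged = []
    for category, counts in history.items():
        if len(counts) < window + 1:
            continue  # not enough history to judge a trend
        recent = counts[-(window + 1):]
        if all(a < b for a, b in zip(recent, recent[1:])):
            flagged.append(category)
    return flagged

history = {
    "altitude deviation": [4, 5, 3, 4, 6, 8],  # rising over the last 3 months
    "air proximity": [2, 3, 1, 2, 2, 1],       # no sustained rise
}
print(rising_categories(history))  # ['altitude deviation']
```

The sketch also makes the chapter’s criticism concrete: a category whose counts merely hold steady, however worrying its content, never trips this kind of trigger.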

Workload is high in a safety department, as in other production-related offices. Much work has to be done with a limited number of people, which results in setting priorities and shifting attention to noticeable signals such as rising trends, accidents and incidents. Weaker signals, and investigations into factors higher up in the organisation that could have negative effects, may consequently get less attention than practitioners really want. For example, can we find threats due to a reduction in training that have not yet been recognised in incidents? These are pro-active activities for which the time and knowledge are currently lacking. Daily administrative duties and following up ‘clear’ signals, such as air safety reports, absorb the available human resources.

On a regular basis, management information is compiled from reports and data monitoring. The data presentation shows graphical counts of occurrences, risk classification and history. Text is added with remarks about high-risk events, along with other explanatory text written by the flight safety manager. A de-identified extract of such a report is shown as an example in Figure 12.6.

Image

Figure 12.6: Example of safety data presentation for management

Upper management is very time-constrained, and safety information must be delivered in a simple representation. This (over)simplification is recognised, but ‘that is the way the management world works’. These kinds of reports have to trigger upper management to request further investigation into the why and how of specific issues. Some reports for upper management compare actual counts to the safety targets. It would be interesting to know what meaning upper managers assign to a value that is below the target. Is it only when a count exceeds the target that action is required? Much has been published on the problems of using counts as indicators (Dekker, 2003a).

Summary Safety Practices

This description of current airline safety management practices was constructed from GAIN reports and communication with practitioners and responsible managers. The different sources of data show that the aviation community puts much effort into sharing information and trying to learn. Much of the data is of the same kind, describing the manifestations of events, and common underlying patterns are only occasionally sought. Events are converted into numbers, classifications and risk levels, and numbers are the main triggers for action from management. This is the way it has been done for a long time, and practitioners and management lack the resources to self-initiate substantial changes to safety management. They have to turn to science to supply practical, diagnostic and sensitive approaches. When no alternatives are supplied, safety management will improve only marginally and numbers will rule forever.

In the next sections the theoretical foundation of the current practices will be discussed and in the final section I propose a ‘wish list’ of what would be necessary to move forward in practice.

Models of Risk and Safety

Aviation is very safe, with on average one accident in a million flights, so why should airlines try to improve this figure when they experience almost no accidents? Managers come and go about every three years, often too short a time to experience the feedback from their actions. Many years may pass, depending on airline size and flight frequency, before an accident occurs. From the manager’s perspective it can be hard to justify safety investments, which have unclear direct results, especially in relation to cost-cutting decisions that have clear, desired short-term effects and are deemed necessary for economic survival. How do most accountable managers and practitioners perceive flight safety data, and what models of risk and safety actually shape their decisions and actions?

From Data to Information and Meaning

As the description above shows, there is much data available in a safety department. But is it the right data? Can safety be improved with this sort of data, or is it just number chasing?

A critical factor for success is the translation from data to information and meaning. Collecting all the data points in a software database is not the same as interpreting the data and trying to understand what the data points could mean for processes that may influence flight safety. Making sense of the data is complicated by the classification methods, and current practices are not able to deal with this.

First, all events are categorised, and problems arise when categories mix antecedents and consequents of failure in the event description. As noted, Hollnagel (1993a) explains how classification systems confuse phenotypes (manifestations) with genotypes (‘causes’). The phenotype of an incident is what happens: what people actually did and what is observable. Conversely, the genotype of an incident is the characteristic collection of factors that produces the surface (phenotype) appearance of the event. Genotypes refer to patterns of contributing factors that are not directly observable. The significance of a genotype is that it identifies deeper characteristics that many superficially different phenotypes have in common.

The second issue is the classification of occurrences. An often complex event in a highly dynamic environment is reduced to one event report. Shallow information, no contextual data and the lack of a method for assigning a category make categorisation a process that is greatly influenced by the personal interpretation of the practitioner doing the administration. The same goes for the risk level that is assigned to an event and used to rank events.

Third, most database software requires that each event be put in a single category; most tools cannot handle an event in several categories at the same time. Counting becomes difficult when events belong to more than one category, yet counting events per category is regarded as valuable information.
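A small sketch illustrates the counting problem (the category names are hypothetical): as soon as events may carry several labels, per-category counts no longer sum to the number of events, which is exactly what single-category tools avoid at the price of distortion:

```python
# Sketch of multi-category counting: each event may carry several labels,
# so category totals (4) exceed the number of events (3).
# Event data and category names are illustrative assumptions.
from collections import Counter

events = [
    {"id": 1, "categories": ["human factors"]},
    {"id": 2, "categories": ["air proximity", "human factors"]},
    {"id": 3, "categories": ["technical failure"]},
]

per_category = Counter(c for e in events for c in e["categories"])

print(per_category["human factors"])  # 2
print(sum(per_category.values()))     # 4 labels, for only 3 events
```

Forcing event 2 into a single category would make the counts add up, but only by hiding either the ‘air proximity’ phenotype or the ‘human factors’ genotype from the statistics.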

Management is presented with numbers and some qualitative descriptions of high-risk events. An initially quantitative approach to occurrences is often used, and a further qualitative approach is postponed until a trend, a rise in occurrences in a specific category, is observed. When a trend is observed, an analysis of the data is started, and this is the initiation of a more qualitative analysis in which data is interpreted and assigned a meaning.

All the re-active practices mentioned are assumed to be useful on the basis of the ‘iceberg’ theory, which basically holds that preventing incidents will prevent accidents. But is this still a valid assumption?

Learning from Accidents

Accident and incident investigations are executed to create a learning opportunity for the organisation. As a result of the investigation, a report is published with safety recommendations that, according to the investigation team, should be implemented to prevent re-occurrence of the mishap. The responsibility of the safety department ends when the report is published; thereafter it is up to line management (e.g., fleet chiefs) to consider the recommendations in the report. Line management has the authority to interpret the recommendations and can reject or fully implement the changes suggested by the investigators. First, recommendations can often have a negative effect on production, and thus on cost: implementation may include extra training for pilots, a changed procedure or limitations on equipment. Line management will make trade-offs between these conflicting goals, although it is not often that a recommendation is rejected. Second, failure is often seen as a unique event, an anomaly without wider meaning for the airline in question. Post-accident commentary typically emphasises how the circumstances of the accident were unusual and have no parallels for other people, other groups, other parts of the organisation or other aircraft types.

One airline is experimenting with involving line management when the recommendations are defined. Having early commitment from line managers to comply with recommendations that they themselves have helped to define can increase acceptance of the final recommendations when the report is published. Without acceptance of the suggested changes, the time and money invested in the investigation are wasted, since no change, and hence no learning, has occurred. A downside of involving line managers at this early stage might be that only easy-to-implement changes are defined as recommendations and that the harder and more expensive changes are not accepted. It was suggested to use the acceptance of recommendations as a quality indicator for the report. Since implementation costs play an important role in managers' acceptance of recommendations, it might be more suitable to use acceptance of the suggested changes as a measure of the quality of the learning organisation.

The quality management system in an organisation checks whether the accepted recommendations are indeed implemented. There is, however, no defined process for analysing whether the recommendations had the intended effect. Only if new events are reported that seem to relate to the implemented changes can a connection be made and the 'old' recommendations be analysed.

Much effort is put into investigations, and after a few weeks or months operations go back to 'normal'. The investigation is still in progress, and only after many months, or sometimes two or more years, is the final report published. This delay can be caused by factors such as the legal accountability of the parties involved and their subsequent withdrawal from the investigation, or a lack of resources in the (national) investigation bureaus. The eagerness to use the final recommendations so long after the accident is considerably reduced. The possible learning opportunity is not effectively used and the system's safety is not improved.

Quality versus Safety

Current practices in safety management build on the view that quality management is the pro-active approach to preventing accidents. Quality management focuses on the compliance of people and organisations with procedures, rules and regulations in their activities. Audits and inspections are done to confirm this compliance and, in a high-quality organisation, few deviations from the standards are observed. This raises the question of whether high-quality organisations are safer than lower-quality organisations. The view that quality guarantees safety rests on a world view in which compliance is always possible, all procedures are perfect, no failures outside the design occur, all performance is constant, and no constraints exist that limit people's ability to have full knowledge, certainty and time to do their work. In this view accidents are caused by deviations from the approved procedures, rules and regulations; thus quality and safety overlap completely.

A more realistic view is that the complexity and dynamics of our world are not so predictable that all rules, regulations and procedures are always valid and perfect. People have to improvise because conditions sometimes do not meet the specifications stated in the procedure, and trade-offs have to be made between competing goals; this is normal work. In this view there is far less overlap between quality and safety: procedures, rules and regulations are assumed to be as good as they can be, but at the same time their limitations are recognised.

Quality as a pro-active approach to safety is thus a limited one, but research and science have not yet delivered practical alternatives.

What Next? From Safety to Resilience

Having given an interpretation of current practices and having pointed out some problematic issues, here are some questions for researchers and scientists:

Should data collection only be done from the 'sharp end', the operators, or should it be complemented by data gathered from other cross-company sources such as maintenance, ground services and managerial failures?

How can the above-mentioned data sources be combined to provide a wider perspective on safety and risk?

Current practices use only flight-operation-related data sources. Maintenance event reports are not evaluated outside the maintenance department. Managerial accidents, such as failed re-organisations, are not investigated like operational events, but all these different types of failure occur in the same organisational context, so common factors can be assumed.

What sort of data should be collected from operational events with, e.g., an air safety report?

Current practices show reports with descriptions of events and only a few contextual factors. Classification and risk assessment are done by a flight safety officer.

How can the evaluation of events determine whether an event is signal or noise in relation to safety or risk, and whether putting resources into an investigation is justified?

Operational events (for which it is not yet determined whether they are incidents) are reported to the safety department. Safety officers and management have to decide within a short time frame (the flight may, e.g., be postponed until the decision is made) whether an investigation is deemed necessary.

Safety reports are evaluated periodically and, based on their assigned risk level, further investigation or other follow up actions may be decided.
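The periodic evaluation and follow-up decision can be pictured as a simple mapping from assigned risk level to action. The levels and actions below are hypothetical illustrations, not an actual airline classification scheme:

```python
# Hypothetical risk levels and the follow-up each one triggers.
FOLLOW_UP = {
    "low":    "file for periodic trend review",
    "medium": "assign flight safety officer for qualitative analysis",
    "high":   "launch full investigation",
}

def follow_up_action(risk_level):
    """Return the follow-up for a report's assigned risk level;
    an unknown level defaults to the most conservative action."""
    return FOLLOW_UP.get(risk_level, "launch full investigation")

print(follow_up_action("medium"))
```

A deliberate design choice in the sketch is that an unrecognised risk level fails safe, triggering the heaviest follow-up rather than being silently dropped.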

What kind of data should be presented to management to keep them involved, willing to follow up on findings, and supportive of actions such as investigations and risk analysis?

Experienced and knowledgeable safety officers have a feeling that certain indications, e.g., an increased number of technical complaints about an aircraft, point to increased risk. But in the 'managers' world' it requires 'hard facts' or evidence to make (upper) management react to these signals. Currently, owing to the immense work pressure on management, data are presented in a simplified form that takes little time to evaluate.

How can risks be predicted and managed if, e.g., training of pilots is reduced, procedures changed and new technology introduced?

Judgements about possible risks are made 'during lunch', since no methods are available to predict the consequences of changing procedures, or to determine whether, e.g., additional training is required when the procedure to abort a take-off is changed. What kinds of problems can be expected, and how should they be handled, when new technology, such as a data link, is introduced? Nowadays, engineering pilots and managers with much operational experience but limited human factors knowledge try to manage these issues.

How can a common conceptual and practical approach to reporting, investigation and risk prediction be developed?

The three activities of reporting, retrospective investigation and proactive risk prediction currently have no commonalities whatsoever. These isolated approaches do not reinforce each other, and valuable insights may be missed.
