 Post subject: Updated Arc Flash Risk Assessment
PostPosted: Sun Sep 29, 2013 8:30 am 
Plasma Level

Joined: Tue Oct 26, 2010 9:08 am
Posts: 2174
Location: North Carolina
An updated approach to arc flash risk assessments.

The 70E-2015 second draft, which is likely to stand, will require risk assessments, not merely hazard assessments. This is in keeping with current safety standards but also requires a little more work. It will no longer be enough to simply perform an engineering study using IEEE 1584. This is going to leave those of us who have done an engineering study for the hazard assessment in the past in a quandary, because we will have to add risk assessments to that.

I looked at all of the risk assessment methodologies out there that I could find, even Annex F. Annex F simply does not work. Try it yourself...just walk through the procedure and fill out the form for an energized task. I put in a public input to replace Annex F with a laundry list of risk assessment safety standards similar to Annex D, but this was shot down because I didn't summarize them all the way Annex D does. The trouble is that I'd end up with a book, similar to Rockwell Automation's "safety book", larger than the standard itself!

The attached document, which again is simply an idea, suggests taking a very simple approach to the issue using existing standards (NFPA 70E, IEEE 493, IEEE 1584, AIChE LOPA) and synthesizing a complete risk assessment method from them. I picked LOPA because it was the most complete, allowed direct use of frequency data, and maintained the simplicity of the RIA and PMMI standards.
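
For anyone who hasn't run into LOPA before, the arithmetic underneath it is very simple. Here is a minimal Python sketch with made-up numbers; it only illustrates the general LOPA form (initiating event frequency times the probability that each protection layer fails, compared against a tolerable frequency), not the exact procedure in the attached document.

Code:
# Minimal LOPA-style arithmetic: scenario frequency vs. a tolerable target.
# All numbers are made-up placeholders, not values from the attached document.

task_opportunities_per_year = 1.0   # how often the task creates an opportunity for the event (assumed)
pfd_protection_layer = 1e-3         # probability the protection layer (procedure, interlock, etc.) fails on demand (assumed)
tolerable_freq_per_year = 1e-5      # typical order-of-magnitude target for a fatality scenario

scenario_freq = task_opportunities_per_year * pfd_protection_layer

if scenario_freq <= tolerable_freq_per_year:
    print("Risk target met; no additional layers (or PPE) needed for this scenario")
else:
    gap = scenario_freq / tolerable_freq_per_year
    print(f"Need additional protection layers or PPE worth a factor of {gap:.0e}")

Additional independent protection layers simply multiply in as extra probability-of-failure-on-demand terms.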

Again, looking for comments, feedback, etc.


 Post subject:
PostPosted: Mon Sep 30, 2013 9:21 am 

Joined: Mon Apr 29, 2013 5:16 am
Posts: 2
Interesting!
This is the first I have heard of this, but the first thought that comes to mind is that a risk assessment is basically sound judgment by a qualified person (an opinion). We all know how well opinions work out for consultants.
A risk assessment, once boiled down, is generally not that complicated, but one major problem I see is that a risk assessment is based on quantifying Severity of Injury and Probability. Severity can be determined through Arc Flash and Shock Hazard Assessments; however, probability is based on a given task. You may have several different risk assessments for the same piece of equipment depending on the task you wish to perform.
This definitely gives me something to chew on.


 Post subject:
PostPosted: Mon Sep 30, 2013 11:18 am 
Arc Level

Joined: Wed Jun 04, 2008 9:17 am
Posts: 428
Location: Spartanburg, South Carolina
This leaves open the possibility of having the arc hazard analysis divided into two parts.

The first part would be a risk assessment. This could be done by consultants or plant staff with expertise in operations, risk analysis, and safety. The deliverable would be task-based documentation of electrical safety program procedures that would include a determination of when PPE is required. Logically, this part would be done first because it may identify locations where PPE beyond Cat 0 is never required, and those could be eliminated from the arc hazard analysis.

The second would be the old approach of determining the incident energy at each location. This would be essentially the same as current arc hazard analyses and could be done by consultants with expertise in electrical analysis. The second part would create the arc hazard labels to document the level of PPE to be used when it is required.


 Post subject:
PostPosted: Mon Sep 30, 2013 3:35 pm 
Plasma Level

Joined: Tue Oct 26, 2010 9:08 am
Posts: 2174
Location: North Carolina
Earl: read it first. Lots of human performance studies have been done, especially by militaries, which are interested in how reliably people follow orders. The results are not encouraging. Under the absolute best circumstances you get around 1% failure rates, which is far higher than acceptable compared to average accident rates. More typical is around 10%. I kind of punted by referring to the CPSC standard, but I have plenty of documentation to back this up. For the most common cases I borrowed the new 70E tables.

jghrist: that is the format of the draft tables currently. It is also very similar to the tables that I developed for my company after constantly arguing over when arc flash, shock, and EEWP requirements applied. They kept getting hung up on the simplest things.


 Post subject:
PostPosted: Tue Oct 01, 2013 10:10 am 
Plasma Level

Joined: Tue Oct 26, 2010 9:08 am
Posts: 2174
Location: North Carolina
This is a longer follow-up to Earl's question about judgment calls. Frankly, if you have a task where the possibility of injury is due to shock or arc flash, then it is not practically possible to reduce human error rates to the point where the task can be done safely without either a redesign of the equipment or task (preferred) or wearing PPE for the hazard.

I have files on other methods but copied down some human error probabilities. Essentially, if someone executes a task that is structured (written procedure), in an unstressed condition, after extensive training, without other distractions, and that they are fairly familiar with but is not so routine that they stop paying attention, you can get error rates in the range of 0.01 or less. Take away any of those elements and you quickly move toward a failure rate of 0.1 for "typical conditions", or 0.5 or higher under stressful/emergency conditions, though the "stress" may be external (a fight with their spouse). If you add in inspectors/monitors and turn it into a multi-person task with plenty of avenues for correcting a problem, it is possible to get to error rates around 0.001 (0.1%). But that is just about the limit for human performance. If there is a possibility of a fatal mistake, current recommended safety targets are generally 1 in 100,000 to 1 in 1,000,000. That is far more demanding than the best achievable human failure rate.

First, my favorite statistic, which has been repeated elsewhere, is from Swain and Guttmann (1983, HRA Handbook): the error rate under the best conditions for following a checklist correctly is 0.5 (50% failure rate). The reason is quite simply that human nature is not to do the steps one at a time but to do several steps at once and then check them all off, shortcutting the purpose of a checklist. Written maintenance procedures have an error rate of 0.3 (they tend not to get used, or people don't pay attention to them). Calibration procedures have failure rates of 0.05, mostly because they are multi-step and detailed. You can approach this rate if, instead of a checklist, each item in the list requires some piece of information that cannot simply be penciled in but has to be determined and then written down. This forces the procedure user to go step by step and actually pay attention to the procedure, similar to filling out a government tax form.

HEART (Human Error Assessment and Reduction Technique) is widely used, but also widely criticized for the lack of a strong experimental basis behind it. From Lee's, the generic task types and their error probability ranges:
Totally unfamiliar task, performed at speed with no real idea of the likely consequences of actions taken. 0.35-0.97.
Shift or restore system to a new or original state at a single attempt without supervision or procedures. 0.04-0.42
Complex task requiring a high level of understanding and skill. 0.12-0.28
Fairly simple task performed rapidly or given insufficient or inadequate attention. 0.06-0.13.
Routine, highly practiced, rapid task involving a relatively low level of skill. 0.007-0.045
Restore or shift a system to original or new state following procedures with some checking. 0.0008-0.007
Completely familiar, well designed, highly practiced routine task occurring several times per hour, performed by highly motivated, well trained and experienced persons, aware of the implications of failure, with time to correct errors. 0.00008-0.007.
Respond correctly to a system command even when there is an assisting or automated supervisory system providing accurate interpretation of system state. 0.000006-0.009.
Miscellaneous task for which no written description can be found. 0.0008-0.11.
Simple response to a dedicated alarm with little noise, execution of appropriate actions covered in procedures. 0.008-0.11.
Identification of situation requiring interpretation of alarm indication patterns, pattern unique but no dedicated single positive features. Situation infrequent but covered by bi-monthly training. 0.02-0.17.
Complex diagnosis where there is no positive identifier for the real problem, it must be determined by reasoned deduction from available information. 0.09-0.16.

These are the basic failure rates assuming perfect conditions. Then we get to error-producing conditions, which act as multipliers on the basic error rates. HEART doesn't make it clear whether you pick the worst one or average the multipliers together (a sketch of the commonly used weighted-multiplication form follows the list):
Unfamiliarity with a situation which is potentially important but which may occur infrequently or which is novel. x17.
Shortage of time available in error detection and correction, x11
Means of suppressing or overriding information or features which is easily accessible, x9
No means of conveying spatial and functional information to operators in a form which they can readily assimilate, x8
Mismatches between the operator's model of the world and the designer's, x8
No obvious means of reversing an unintended action, x8
Ambiguity in required performance standards, x5
Inexperienced operator, x3
A conflict between long term and immediate objectives, x2.5
An incentive to use other more dangerous procedures, x2
Unreliable instrumentation, x1.6
High level emotional stress, x1.2
Ill health, x1.2
Low workforce morale, x1.2
Inconsistent displays or procedures, x1.2
Poor or hostile environment, x1.15
Disruption of normal work-sleep cycles, x1.1
Task pacing interrupted by intervention of others, x1.05
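
For what it's worth, the way HEART is usually applied is not to average the multipliers: each applicable error-producing condition is weighted by an "assessed proportion of affect" between 0 and 1, and the weighted multipliers are multiplied together with the nominal error probability. A minimal Python sketch, with a made-up task and made-up weightings purely for illustration:

Code:
# Usual HEART form: adjusted HEP = nominal HEP * product of ((EPC - 1) * APOA + 1),
# where EPC is a multiplier from the list above and APOA is the assessed
# proportion of affect (0..1).  Task and weightings below are illustrative only.

def heart_hep(nominal_hep, conditions):
    """conditions: list of (multiplier, apoa) pairs judged applicable to the task."""
    hep = nominal_hep
    for multiplier, apoa in conditions:
        hep *= (multiplier - 1.0) * apoa + 1.0
    return min(hep, 1.0)   # a probability cannot exceed 1

# Example: "fairly simple task performed rapidly" (nominal ~0.09, mid-range of 0.06-0.13)
# with shortage of time (x11) judged half applicable and high emotional stress (x1.2) fully applicable.
print(heart_hep(0.09, [(11, 0.5), (1.2, 1.0)]))   # about 0.65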

Deming reported 10% error rates on inspection tasks. In practice I've seen as good as 1% error rates on inspection tasks under the best conditions but Deming's number is reasonable and accepted industry-wide in quality control circles.

The Rasmussen report includes some human error probabilities:
Selection of a switch or pair of switches dissimilar in shape or location to the desired switch assuming no decision error, 0.01
Error of commission (misreading label), 0.003
Inspector or monitor fails to recognize an error by operator. 0.1.
Personnel on a different shift to check condition fail to find error using written checklist or written directive, 0.1.
General error rate in high stress levels where dangerous activities are occurring rapidly, 0.2-0.3
Operator fails to act correctly in first 60 seconds after the onset of an extremely high stress condition, 1.0.
Same thing, after 5 minutes, 0.90
Same thing after the first 30 minutes, 0.1
Same thing after several hours, 0.01.

Failure rates given by LOPA (CCPS) vary between 0.1% and 10% as well, with some situations going to 100% assumed failure rates.

So in summary, you can achieve error rates anywhere between 0.1% and 100% depending on a variety of factors for a given task, with a basic "typical" rate of 10% under good conditions. If there is lots of cross checking, military-style drills, and multiple personnel involved, under the best conditions perhaps as low as 0.1% is achievable. Under conditions which would lead to errors, typical error rates rapidly escalate to 100%.

This is a little (actually a lot) unsettling to those of us who would like, first off, to put our faith in our qualified and most skilled people. It says that no matter how much checking and cross-checking, no matter how many procedures are written and trained on, humans are just not all that reliable. If the hazard is a relatively low concern, such as a first aid case, then written procedures and training are probably acceptable. Whenever we put people's lives on the line, though, it is just not practical to rely on people always doing the right thing.

It also works both ways. When doing incident investigations, it drives the discussion toward system failures (procedural issues, task design issues, equipment design issues) and away from personnel performance. We have to accept the idea that people make mistakes with a very high frequency. Thus you can't blame the person for every mistake, and if errors matter that much, it is better to focus on a less error-prone way to do the task. In other words, set people up for success rather than failure. That is not to say discipline has no place: when a person keeps making the same mistakes over and over with a much higher failure rate than others, the issue still has to be dealt with.


 Post subject:
PostPosted: Tue Oct 01, 2013 1:16 pm 

Joined: Mon Apr 29, 2013 5:16 am
Posts: 2
PaulEngr

I am not saying I agree or disagree at this point just trying to obtain a better understanding.

I agree that the better the statistics that can be obtained, the better the outcome of a risk assessment.
I will admit that I apparently don't have the resources you have available, but after reading the paper in your first post I have more questions.
If I understand LOPA correctly, we are still looking at an evaluation of severity of injury and likelihood of occurrence, except the principles are based on events that happen less frequently, using statistics more so than "sound judgment". I am OK with this.
I see the recommended range of acceptable risk is in the 10^-4 to 10^-6 range.
It also states fatality rates of 0.2 fatalities per 10,000 per year for shock and 0.1 fatalities per 10,000 per year for arc flash.

Adding into the mix failure rates for human-caused events at 0.01-1%, and your latest post suggests as high as 0.5. I see it suggests conditions such as the task not being common but the person paying attention. Is this not a judgment call? If I were to ask 10 employees whether they are going to pay attention to what I have to say, they would all say yes. But would that be true, and if not, how many would not pay attention, whether intentionally or not?
These are great statistics, but when I try to apply the principle to actual work tasks I find it difficult.

Say, for example: the equipment and the tasks an employee is exposed to on a daily basis would greatly change the statistics.

One individual may work primarily on large switchgear and distribution, and another may work primarily at the machine level in a manufacturing facility with low energy levels (say, below the 60 A range). Do these statistics differentiate between tasks for different locations or equipment, and if so, where are they found?
The way I read it, the human error percentages are based on not following procedures correctly. Is this correct?

Is there a statistic on how human error accounts for shock or arc flash injury and not just procedures performed incorrectly?

So now I am with a maintenance worker and we have two tasks that need to be assessed, and for argument's sake we have justification to work within the limited approach boundaries for both.

1. Changing a fuse in a 400 A, 240 V open disconnect, fed by a 150 kVA transformer, with the supply side energized. The disconnect is properly installed and maintained, and the available incident energy is 26 cal/cm² (Category 4).

2. Replacing a starter bucket in a 480 V, 1600 A MCC. In this case I conducted a short circuit analysis and determined that the tables are applicable; they suggest Category 4 PPE. The MCC is also properly installed and maintained.

How do I apply these principles to these common scenarios?
Could you show an example or two perhaps?


 Post subject:
PostPosted: Wed Oct 02, 2013 7:28 pm 
Plasma Level

Joined: Tue Oct 26, 2010 9:08 am
Posts: 2174
Location: North Carolina
Sure. When replacing the fuse, one question is whether the fuse holder is of the touch-safe variety, a cutout, or something similar. If so, the shock incident rate is very low and the arc flash incident rate is driven by failure of the fuse holder/disconnect. This is not directly in any statistical references I have, but failure rates on disconnects are typically 10^-8. As long as it is not a case of someone manually grasping the fuse with gloves on and trying to remove it, the likelihood is equipment dependent and thus the arc flash risk is negligible. There are lots of ways to change a fuse safely.

In the second example, there are countless examples of arcing faults while pulling MCC buckets, because controlling alignment of the stabs is entirely up to the person doing it, to say nothing of quickly making/breaking connections. If you need incident reports I can send you a couple. That is why the 2015 draft task tables specifically identify this task as requiring arc flash PPE. You will not find a failure rate in the statistical information because MCC buckets are not designed to be inserted or removed while energized. An EEWP is also required for this.

You can find that drawout breaker failure rates are about one order of magnitude worse than their bolted equivalents. Drawout mechanisms minimize the risks I just described, but per ABB, 80% of faults in switchgear are due to the drawout mechanism itself. MCC buckets have a much less reliable, flimsier design. By comparison, even though there is no direct statistical data for it, drawout breaker failure rates are borderline at 10^-4 while bolted breakers are 10^-5 or better. This makes drawout switchgear marginal when racking breakers in and out of the cell. It also points to the idea that doing the same thing with an MCC bucket is definitely not recommended in the first place.
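
To put rough numbers on that comparison, here is a minimal screening sketch in Python using only the order-of-magnitude failure rates quoted above and a 10^-5 tolerance. The figures and the verdict wording are my own shorthand for illustration, not values or text from any standard.

Code:
# Rough screening of equipment-driven tasks against a per-operation tolerance.
# Failure likelihoods are the order-of-magnitude values quoted in this post.

TOLERANCE = 1e-5

tasks = {
    "replace fuse in touch-safe holder, supply energized": 1e-8,   # disconnect/fuse holder failure
    "operate bolted breaker or disconnect":                1e-5,   # bolted equipment
    "rack drawout breaker in or out of cell":              1e-4,   # ~1 order of magnitude worse
    "pull MCC bucket energized":                           None,   # not designed for it; human-performance driven
}

for task, p_fault in tasks.items():
    if p_fault is None:
        verdict = "de-energize (EEWP) or treat as full arc flash PPE"
    elif p_fault <= TOLERANCE:
        verdict = "arcing fault unlikely; task itself does not drive arc flash PPE"
    else:
        verdict = "marginal; treat as requiring arc flash PPE"
    print(f"{task}: {verdict}")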


 Post subject:
PostPosted: Wed Oct 02, 2013 7:39 pm 
Plasma Level

Joined: Tue Oct 26, 2010 9:08 am
Posts: 2174
Location: North Carolina
What it comes down to is that for tasks such as operating disconnects or breakers, or even taking readings with insulated probes on a meter, the likelihood of an arcing fault is driven by equipment failures. For these types of tasks we can look to statistics on equipment failure rates. Frankly, if your cutoff is 10^-5 and you are doing proper maintenance, the only common equipment that is problematic is drawout switchgear, and then only because of the drawout mechanism. So this suggests, just as with the informational notes in the 2012 edition, that tasks such as reading meters, operating breakers and disconnects, and the like would not require PPE. Live work with panels open, which carries a risk of accidentally shorting out equipment or coming into contact with it, is on the other hand entirely driven by electrician skill, health, emotional state, and so forth. Maybe 99.9% of the time nothing bad happens. But LOPA is looking for 99.999%, something that is not humanly possible.


 Post subject:
PostPosted: Thu Jan 09, 2014 5:18 am 
Sparks Level

Joined: Fri Jan 03, 2014 6:57 am
Posts: 66
Location: the Netherlands
Hello guys,

I am trying to find out if the new NFPA 70E 2015 edition will have much impact on my research. I am using IEEE 1584 for determining the energy of potential arc flashes. In 'Arc Flash Risk Assessment' by Paul Campbell (the file attached to PaulEngr's first post), the third paragraph of the introduction says 'the old 2012 combined tables which showed various levels of PPE is gone'.

In IEEE 1584 Annex B 1.4 it states: 'Read off incident energy, flash boundary, and the PPE level recommended in NFPA 70E-2000'. The NFPA 70E 2012 edition does do that, in Table 130.7(C)(16).


 Post subject:
PostPosted: Fri Jan 10, 2014 10:14 am 
Plasma Level

Joined: Tue Oct 26, 2010 9:08 am
Posts: 2174
Location: North Carolina
You did not state what exactly your research is.

When it comes to arc flash, there are two methodologies allowed in 70E. Method #1 is to use the tables in 70E. Method #2 is to use your own method, whatever that may entail.

When using the tabular method in the 2012 (and earlier) editions, you look up the task on a task table. It gives an "H/RC" (hazard/risk category) value. A second table then gives PPE required for this H/RC category. The "H/RC category" COMBINES the concepts of risk and hazard.

In the 2015 edition, at least the one in draft, there are two fundamental changes. First, the bare minimum PPE for ALL tasks is nonmeltable clothing. Thus, "H/RC 0" disappears. The second change is that the table lookup method is now contained in 3 separate tables. In table one, it is determined whether or not there is a risk of an arc flash. If no risk, stop. Table 2 provides a set of equipment and a "PPE level". The third table gives recommended PPE based on the "PPE level".

What has changed here is that the first set of tables from the 2012 edition is now broken out into two new tables. The first table considers the risk of a specific task, while the second table gives the hazard for a given type of equipment. The third table is nearly identical to the old "H/RC" table.
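
As a rough illustration of that three-step flow, here is a Python sketch. The table entries are placeholders made up to show the structure only; they are NOT the actual 70E table contents.

Code:
# Sketch of the 2015-draft style lookup: task -> arc flash likely?, equipment -> PPE level,
# PPE level -> clothing.  All entries are placeholders, not the real 70E tables.

ARC_FLASH_LIKELY = {                      # "Table 1": is an arcing fault likely for this task?
    "read panel meter, doors closed": False,
    "remove MCC bucket, energized": True,
}
EQUIPMENT_PPE_LEVEL = {                   # "Table 2": equipment type -> PPE level (hypothetical)
    "600 V class MCC": 2,
    "15 kV metal-clad switchgear": 4,
}
PPE_BY_LEVEL = {                          # "Table 3": PPE level -> clothing/equipment (abbreviated)
    2: "arc-rated shirt and pants or coverall, face shield and balaclava, ...",
    4: "arc flash suit rated for the exposure, ...",
}

def required_ppe(task, equipment):
    if not ARC_FLASH_LIKELY[task]:
        return "arc flash PPE not required for this task (risk screened out); nonmeltable clothing still applies"
    level = EQUIPMENT_PPE_LEVEL[equipment]
    return f"PPE level {level}: {PPE_BY_LEVEL[level]}"

print(required_ppe("read panel meter, doors closed", "600 V class MCC"))
print(required_ppe("remove MCC bucket, energized", "600 V class MCC"))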

Similar changes are made for the "use your own method". This is very confusing in the current (2012) edition. The definition of an arc flash hazard in the definitions section excludes all activities where an arc flash is not likely to occur. The remaining sections including Article 130 refer to determining an arc flash hazard but never mention looking at risk (likelihood) again. The only other place that risk assessments are mentioned is in the Annex. The annex contains what amounts to a modified version of an ISO risk assessment procedure but is missing a lot of key data necessary to complete one properly. This is no surprise because the ISO method is just as vague.

In contrast, and in keeping with current safety terminology, the new (2015) draft uses the term "risk assessment" instead of "hazard assessment" in Article 130. Thus one is forced to do the risk assessment, and is not left with confusing definitions of when it should be done.

IEEE 1584 is a hazard assessment method. It does not include risk assessments at all. Similarly the other methods referenced in Annex D as well as the tables in NESC that used to be referenced but are being pulled from the 2015 edition are also hazard but not risk assessment methods.

The fundamental difference by the way is that a risk assessment looks at both the likelihood of an accident AND the severity of the accident. A hazard assessment only covers the severity. If you look at only hazard assessments in isolation, you get very ridiculous results such as requiring some category of PPE in order to make a pot of coffee.


 Post subject:
PostPosted: Wed Apr 16, 2014 8:42 am 

Joined: Mon Jan 31, 2011 6:21 pm
Posts: 27
Location: Salt Lake City, Utah
Thank you, Paul, for your explanation on this; we are getting ready to conduct our studies. It is helpful!

_________________
Priscilla Anderson :D

