Search Documents
-
On A Simulation Method Of Methane-Concentration Control. By Waclaw Trutwin
The idea of automatic or remote control of the mine ventilation process generally, and methane concentration particularly, attracts the attention of mining engineers more and more. The advantages of introducing mine ventilation control systems are breaking traditional reluctance. The change of attitude is not only because of the requirements of modern exploitation technology, but also due to the recent progress in development and successful introduction of reliable monitoring systems and actuators in the form of controlled ventilators and doors [1], [2], [3], [4], [5], [6]. Many years of theoretical and experimental studies of the dynamics of mine ventilation processes created the needed base for a proper design of an automatic control system [7], [8], [9], [10]. From these studies a fundamental conclusion must, however, be drawn, which may be regarded as the motto of this paper: an ill-conditioned or improperly designed automatic control system for mine ventilation is capable of creating hazardous situations in response to random disturbances, far more severe in consequence than a traditional ventilation system without any automatic or remote control! This statement is easy to prove if the dynamic properties of the ventilation process are taken into consideration. The ventilation process, as a matter of fact, is described by non-linear equations, and it must be expected that the process has more than one state of equilibrium. In other words, the ventilation process may have not one but several steady states of flow, of which some are stable and others unstable. In certain circumstances, there may be no steady state at all, and the process will oscillate [8], [11], [12]. The state of flow in a network tends towards a steady state; which steady state, out of the total number possible, is actually established depends on the initial conditions or on disturbances in flow (fires, etc.).
We frequently observe jumps from one steady state to another. Disturbances in flow conditions which may cause such transitions are events of a random character, occurring very rarely. In conclusion, it must be stressed that the control system has to be properly matched to the ventilation process in order to avoid the situations mentioned above. There are two alternatives available and suitable for examining the dynamics of a given mine ventilation problem: continuous monitoring of the real process, or numerical simulation of the process using a mathematical model. The advantages of the second method are obvious: it allows consideration of every possible case very quickly and cheaply in relation to the first method. The aim of the paper is to show again that the simulation of the mine ventilation process, and particularly a methane-concentration process, separately or combined with a control system, is a real possibility. A simulation method requires precise specification of the problem under consideration. For example, if we intend to examine a methane-concentration control system, the following items have to be specified:
- the expected target function of the control system;
- the structure of the control system;
- a mathematical model of the control system, including the sensor system, data preparation system, controllers, decision routine, regulators, etc.;
- the structure of the mine ventilation network;
- a mathematical model of the ventilation process, including the air flow and methane-concentration processes;
- the pattern of disturbances which may occur in the controlled process, as well as the initial conditions at 'start-up' of the system.
Using typical computer programs for numerical solution of the equations in the mathematical model of the problem involved, we are able, within the adequacy of the model, to simulate every case specified by the disturbances and initial conditions.
As a result of simulation, it is expected that the following could be determined:
- the transient flow in the network;
- the transient state of methane concentration in working areas;
- the stability of flow and methane concentration;
- the stability of the control system;
- the range of control;
- the efficiency of control, etc.
It is obvious that simulation methods readily allow for modifications to existing systems such that desired results will be obtained. Optimisation problems can also be solved by use of simulation methods. In order to illustrate these general thoughts, a brief presentation of a mathematical model of methane concentration and
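The numerical-simulation approach the paper advocates can be illustrated with a minimal sketch: one working area treated as a perfectly mixed volume, a constant methane inflow, a proportional controller on airflow, and explicit Euler integration. The single-zone model and every parameter value below are illustrative assumptions, not Trutwin's mathematical model.

```python
# Minimal sketch of simulating methane concentration under proportional
# airflow control. The single-zone mixed-volume model and all parameter
# values are illustrative assumptions, not the paper's actual model.

def simulate(c0=0.0, q_methane=0.05, volume=2000.0,
             q_min=10.0, q_max=60.0, setpoint=0.01, gain=5000.0,
             dt=1.0, steps=3600):
    """Return the methane-concentration history (volume fraction).

    dc/dt = (q_methane - q_air * c) / volume, with the airflow q_air
    chosen by a proportional controller that increases ventilation as
    c rises above the setpoint (clamped to the fan's working range).
    """
    c = c0
    history = [c]
    for _ in range(steps):
        # Proportional control: more airflow when concentration is high.
        q_air = min(q_max, max(q_min, q_min + gain * (c - setpoint)))
        dcdt = (q_methane - q_air * c) / volume
        c += dcdt * dt          # explicit Euler step
        history.append(c)
    return history

conc = simulate()
print(f"final concentration: {conc[-1]:.4%}")
```

Runs of this kind are exactly what the specification list above feeds: changing the disturbance pattern or initial condition `c0` and re-running the loop probes stability and the range of control cheaply, without touching the real ventilation network.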
Jan 1, 1980
-
Use Lower Shearer Drum Speeds to Achieve Deeper Coal Cutting. By Jonathan Ludlow, Robert A. Jankowski
Introduction
There are few changes a longwall operator can make that increase output, significantly reduce respirable dust, and decrease power consumption all at once. Reducing drum speed, and thereby cutting with increased pick penetration, is one. This article defines the benefits of deep cutting in terms of reduced dust production and power consumption. It also identifies the practical aspects of high pick penetration in terms of shearer performance and coal loading. Before examining some practical aspects of reducing drum speed and looking at the theoretical background, it is worthwhile to summarize what is meant by high penetration and deep cutting, and what potential benefits and pitfalls may be expected. Deep cutting (in the sense of high penetration rather than wide web) can be defined in one or more of the following ways:
• Cutting with an average pick penetration distance higher than that used in the past.
• Cutting with a pick penetration higher than the longwall operator would have used if the advantages of deep and slow cutting were not considered.
• Cutting with a well-designed shearer drum below 40 rpm.
All these definitions are slightly arbitrary. They are given to provide a basis for discussion and to make the point that any move towards deeper, more efficient cutting can result in operational benefits. The benefits of deep cutting appear in many different areas. The most noticeable benefit, provided suitable instruments are available, is the reduction of airborne respirable dust. During an experiment on a longwall in the Pittsburgh seam, a nearly four-to-one reduction in dust levels was seen when drum speed was halved. Not all studies have shown such a big reduction, but it seems that some benefit is almost always obtained when drum speed is reduced. Production rate and specific power consumption are also affected (in a positive sense) by reducing drum speed or increasing pick penetration.
Although these changes may not be as spectacular as those in dust level, they contribute to the economic return of the longwall operation. Similarly, improved washability through fines reduction may have a beneficial economic effect. Cutting with shearer drums operating at lower speeds does have some possible deleterious impacts that an operator should be aware of. For example, cutting reactions - the loads imposed on the picks by the coal being cut - will be increased as a deeper cut is used. Steps must, therefore, be taken to ensure the stability of the shearer and to provide an adequate haulage effort. These increased cutting reactions also result in higher loads on the power transmission system (gearboxes, ranging arms, pick boxes, etc.) from the shearer motor(s) to the pick tip. These higher loads must be anticipated and provided for with the necessary hardware. In particular, extra haulage power must be provided with low drum speeds, since the haulage effort required increases roughly in proportion with pick penetration. Because the drum will be rotating more slowly or will have fewer picks, the load on shearer components will also be more variable. If suitable, robust equipment is not used, this increased vibration will decrease reliability.
Benefits of Deep Cutting
Lower dust levels, decreased specific power consumption, and improved product washability are the most noticeable benefits of reduced drum speeds. Although the benefits will vary greatly with mining conditions and the type of coal, some examples of what can be expected are described below.
Reduced Dust Levels
Figure 1 shows the principal results of a study on the effects of reduced drum speed conducted on a longwall in the Pittsburgh seam (Ludlow, 1981). This figure shows that average dust production was reduced by about 70% when drum speed was halved.
By making some assumptions about such quantities as coal density, it is possible to apply this proportional reduction to the quantity of respirable dust liberated per ton of coal mined. When this is done, two kinds of results are obtained:
• At 70 rpm, about 1 g (15 gr) of airborne respirable dust is created for every ton mined (roughly one part per million). At 35 rpm, only 0.28 to 0.37 g/t (3.9 to 5.1 gr per st) of coal mined becomes airborne respirable dust.
• At 35 rpm, nearly four times the amount of coal may be mined before the compliance level is exceeded, compared with 70 rpm.
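The per-ton figures above follow from simple unit arithmetic. The sketch below uses the rounded values quoted in the text (not the study's raw measurements) to show how the ~70% dust reduction maps to grams per ton and to the "nearly four times" compliance claim.

```python
# Unit arithmetic behind the dust-per-ton comparison above. Inputs are the
# rounded values quoted in the text, not the study's raw measurements.

dust_70rpm_g_per_t = 1.0   # ~1 g airborne respirable dust per metric ton at 70 rpm
reduction = 0.70           # ~70% average reduction when drum speed is halved

# Dust per ton at the lower drum speed:
dust_35rpm_g_per_t = dust_70rpm_g_per_t * (1.0 - reduction)

# 1 g of dust per metric ton (1e6 g) of coal is one part per million by mass.
ppm_70rpm = dust_70rpm_g_per_t / 1e6 * 1e6

# Coal minable before a fixed dust limit is reached scales inversely with
# dust generated per ton: 1 / 0.28 is about 3.6, i.e. "nearly four times".
ratio = dust_70rpm_g_per_t / 0.28

print(dust_35rpm_g_per_t, ppm_70rpm, round(ratio, 2))
```

The computed 0.30 g/t sits inside the 0.28 to 0.37 g/t range reported for 35 rpm, which is why the compliance-tonnage ratio comes out just under four.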
Jan 3, 1984
-
US soda ash industry - the next decade. By Dennis S. Kostick
Introduction
Soda ash, known chemically as sodium carbonate, is an important inorganic chemical. It has been produced for several centuries by processing certain vegetation and minerals. The US soda ash industry has evolved from several small sodium carbonate mining operations in the West. Now, a nucleus of six companies produces about one-fourth of the world's annual soda ash output, and US producers currently dominate the world market. But certain international events are occurring that will reshape the domestic soda ash industry in the next decade.
Historical perspective
Soda ash is used mainly in the manufacture of glass, soap, dyes and pigments, textiles, and other chemical preparations. These are among the first basic consumer products produced by developing societies. About 3500 BC, the Egyptians became the first society to use crude soda ash. The soda ash was used to make glass containers. It was most likely obtained from dried mineral incrustations around alkaline lakes. Soda deposits were virtually nonexistent in western Europe, so people resorted to burning seaweed to obtain the ashes. The ashes were then leached with hot water, and the solute was recovered after evaporating the solution to dryness. The solute, a crude "soda ash," was impure, but it could be used to make glass and soap. These two products and industries were important to the population and economic growth of the region. About 11.5 t (13 st) of seaweed ash was required to produce about 0.9 t (1 st) of soda ash. Along the coasts of England, France, and Spain, seaweeds with varying alkali contents became important items of commerce and sources of soda ash before the 18th century. The LeBlanc process, which used salt, sulfuric acid, coal, and limestone, became the major method of production from about 1823 to 1885. In the early 1860s, Ernest and Alfred Solvay, two Belgian brothers, successfully commercialized an ammonia-soda process to synthesize soda ash. It used salt, coke, limestone, and ammonia.
The Solvay process produced a better quality product than the LeBlanc method. In 1879, Oswald J. Heinrich presented to the Baltimore meeting of AIME a paper entitled "The manufacture of soda by the ammonia process." The paper compared the two processes and foretold the demise of the LeBlanc technique. World production of soda ash in 1880 was 680 kt (750,000 st), of which 544 kt (600,000 st) was produced by the LeBlanc process. Of the 2.8 Mt (3.1 million st) of soda ash produced worldwide in 1913, only about 50 kt (55,000 st) was made by the LeBlanc method. The LeBlanc process was never used successfully in the US, except for a brief period from July 1884 to January 1885 in Laramie, WY. Previously, soda ash had been produced by burning certain plants, as exemplified by the early Jamestown colonists, or by recovering small quantities of natural sodium carbonate found in alkaline lakes, such as those near Fallon, NV, and Independence Rock, WY. Before the 1884 startup of the first synthetic soda ash plant in the US at Syracuse, NY, most of the domestic soda ash demand in the East was met by imports, primarily from England. Large-scale commercial production of natural soda ash began in California in 1887 from surface crystalline material at Owens Lake. Production from sodium carbonate-bearing brines at Searles Lake began in 1927 (Fig. 1). In 1938, during exploration for oil and gas in southwestern Wyoming, a massive buried trona deposit, presumably the world's largest, was accidentally discovered. Recent mineral resource evaluation by the US Geological Survey and the US Bureau of Mines indicates that the Wyoming trona deposit contains 86 Gt (93 billion st) of identified trona resource in beds 1.2 m (4 ft) thick or greater. Additionally, there is about 61 Gt (67 billion st) of reserve-base trona. Of this, 36 Gt (40 billion st) is in halite-free trona beds and 24 Gt (27 billion st) is in mixed trona and halite beds. In 1953, the Food Machinery and Chemical Corp.
(later shortened to FMC Corp.) became the first company to mine trona in Wyoming. Soda ash demand increased.
Jan 10, 1985
-
Technical Note - Partially Fluxed Pellets With Low Silica For Blast Furnace At Samarco Mineração S.A. By J. A. M. Cano
Introduction
Since the beginning of operations at the pellet plant at Ponta Ubu, ES, Samarco Mineração SA has produced pellets for direct reduction and blast furnace processes. Of the total amount of pellets produced from 1977 through 1993, 55% were used in the blast furnace, about 45 Mt (49.6 million st). The principal components of pellet gangue are calcium oxide, magnesium oxide, silica and alumina. They should be added in adequate quantities to guarantee the mechanical resistance of the fired pellets under blast furnace conditions. Over the years, the pellets produced by Samarco had a silica content between 2.5% and 2.8% and a varying binary basicity, preferably between 0.8 and 0.85. On the other hand, the increasing amount of industrial waste in steelmaking plants caused by the growth of steel production has led some countries to put into practice methods to reduce the volume of slag produced in the blast furnace. This paper's goal is to find an alternative for decreasing the amount of slag produced in the blast furnace. It is possible to decrease pellet gangue by decreasing the silica content to about 2%, leaving the metallurgical properties and quality of the pellets unaltered. For this work, Samarco pellets with SiO2 between 2.5% and 2.8% (high silica) and pellets with SiO2 between 2.0% and 2.3% (low silica) were used. Both were partially fluxed, with the binary basicity CaO/SiO2 varied from 0.8 to 0.95 during production.
Experimental tests on pilot scale
This work began in January 1986 in the pilot plant (pot grate, Fig. 1) at Samarco. Its goal was to obtain preliminary data that would indicate the viability of the project. It also formed a solid base for extending the studies to tests at an industrial scale of production for blast-furnace pellets. The pot grate is a test furnace composed of a gas burner, a combustion chamber and a grate, connected by hot air ducts. The burner is fed by a mixture of LPG and air.
It reaches high temperatures through oxygen injection. The combustion chamber heats the air that comes from the turbocompressor. This hot air flows through the ducts to the grate on which the pellet samples are fired. During updraft drying, downdraft drying, preheating, firing and afterfiring, the upward and downward direction of the air flow can be controlled by valves driven by pneumatic cylinders. The pot grate indurator was fully automated in August 1989. Positive and negative pressure measurements resulting from gas passing through the pellet layer, as well as temperature readings, are recorded in graphs as functions of time for all tests. The tests consist of various steps carried out in sequence, each of which can influence the results. Therefore, some criteria were adopted to restrict the number of variables in the process and so facilitate analysis of the results. The pellets were composed of concentrate, bentonite, hydrated calcitic lime and metallurgical coal, all regularly used in the pellet plant. The material balance of the pellet mix was determined from the chemical analysis of the components. Table 1 shows the chemical characteristics of the concentrate and the additives used in the pilot plant and industrial tests. The basicity has a marked influence on the metallurgical properties of the pellets produced by Samarco with a silica content between 2.5% and 2.8%. However, for pellets with low silica (about 2%), it was necessary to study the variation of the quality parameters over a wide range of basicities to delineate a scope of work with precision. A large variety of low-silica blast-furnace pellets were produced at the pilot plant with binary basicity varying between 0.8 and 0.95. After chemical analysis, those pellets were separated into five groups of different binary basicity (0.8, 0.84, 0.87, 0.90 and 0.95).
Each group was then split, one part for metallurgical tests in the laboratory at Samarco and another to be evaluated in a metallurgical testing laboratory in Germany. It was agreed that the tests to evaluate the quality of the pellets in the two
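The binary basicity quoted throughout is simply the CaO/SiO2 mass ratio of the pellet, computed from the chemical analyses of the mix components. A minimal sketch of that material-balance calculation, using made-up compositions rather than Samarco's Table 1 values:

```python
# Binary basicity (CaO/SiO2) of a pellet mix from component analyses.
# All compositions and mix proportions below are illustrative assumptions,
# not the Table 1 data from the paper.

def binary_basicity(components):
    """components: list of (mass_fraction_of_mix, %CaO, %SiO2).

    Mass fractions should sum to 1; percentages are by mass.
    """
    cao = sum(w * cao_pct for w, cao_pct, _ in components)
    sio2 = sum(w * sio2_pct for w, _, sio2_pct in components)
    return cao / sio2

mix = [
    (0.96, 0.05, 2.1),   # iron ore concentrate: little CaO, ~2% SiO2
    (0.025, 70.0, 2.0),  # hydrated calcitic lime: the main CaO source
    (0.005, 0.5, 60.0),  # bentonite binder: mostly silica
    (0.01, 1.0, 5.0),    # metallurgical coal ash contribution
]
b = binary_basicity(mix)
print(f"binary basicity CaO/SiO2 = {b:.2f}")
```

With these assumed inputs the mix lands just below the 0.8 to 0.95 working range studied in the paper; raising the lime fraction (or lowering the concentrate silica, as in the low-silica pellets) raises the computed basicity.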
Jan 1, 1996
-
Thermal Spallation Excavation of Rock. By R. Edward Williams
The Spallation Process
Because of the low thermal conductivity of many hard rocks, rapid heating of these rocks produces a thin surface layer in which the temperatures attain high values. Thermal expansion of this surface layer is constrained by the remainder of the still-cool rock, and when stresses within the surface rock become high enough, the surface rock breaks away from the cooler rock behind it and flies or falls off as a thin flake called a spall. Then the next, newly exposed surface is heated, and the process continues. This process is the basis of spallation drilling. The hot gases from a jet burner provide the heat for spallation to occur, and their high velocity provides a scouring action that transfers heat to the rock and removes the spalls as rapidly as they form. Spallation is a process which works in very hard rock. It depends upon the thermal expansion coefficient and the thermal diffusivity of the rock but is also affected by any discontinuities in the rock. To date, the efforts made to evaluate various rocks according to their spallability have been minimal. As the success of this process depends upon the characteristics of the rock, it is expected that the study of rock mechanics will prove to be of greater value to this program than to other mechanisms for drilling and excavating rock.
Commercial Uses of Spallation
In the 1940s, the Linde Air Products Division of Union Carbide (UC) began developing spallation for use in mining taconite ore, which is presently the chief source of iron in the United States. In this work UC developed a jet-piercing tool that burned fuel oil with oxygen to produce spallation and contained mechanical cutters to remove rock that was not amenable to spallation. The UC jet-piercing machines have since produced about 40 million feet of shallow blast holes used for emplacing explosives in the taconite mines.
During this work it was found that hole diameters could be increased by merely reducing the advance rate of the burners, and that existing holes could be enlarged by making another pass through the hole with the same burner. The Browning Engineering Co. of Hanover, N.H., has developed a hand-held spallation burner to cut slots in granite. It has been used for a quarter of a century and is now standard equipment for quarrying granite throughout the world. This burner, which resembles a small jet engine oriented with its exhaust pointed downward, is the forerunner of a flame jet burner used to spall experimental holes in granite at maximum rates in excess of 100 ft/hr when operating in hard, competent granite. It uses No. 2 fuel oil, which is burned with compressed air. The system uses water to cool the burner and the exhaust gases. These gases, along with the steam produced from the cooling water, blow the spalls from the hole.
Experimental Work
Theoretical and experimental work has been accomplished by the Massachusetts Institute of Technology and the Los Alamos National Laboratory. This work is reported in Refs. (3) and (4). To verify the experimental results of this work, scaled-down laboratory field tests were conducted using two well-characterized granites, from quarries in Barre, VT, and Westerly, RI, under defined heating conditions. In the laboratory tests a propane-oxygen heating torch was used to direct a flame at the granite surface, and the spalling process was examined at various heating rates with a high-speed video taping system operated at 200 frames per second. This produced a time-resolved sequence in which the onset of the spallation process was easily distinguished. Also, the heat flux from the torch to a flat surface at various standoff distances and flows was measured. A similar set of tests was conducted using the more easily quantified and uniform heat source of a 1.5 kW CO2 laser. This allowed accurate
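The constrained-expansion argument in the first paragraph can be put in numbers: a rapidly heated surface layer that cannot expand laterally carries a biaxial compressive stress of roughly EαΔT/(1-ν), and spalling is expected once this exceeds some failure stress of the rock. A sketch with typical handbook values for granite, which are assumptions here rather than figures from this paper:

```python
# Thermal stress in a laterally constrained, rapidly heated rock surface
# layer: sigma = E * alpha * dT / (1 - nu) for biaxial constraint.
# Property values are typical handbook figures for granite (assumed).

E = 50e9            # Young's modulus, Pa
alpha = 8e-6        # linear thermal expansion coefficient, 1/K
nu = 0.25           # Poisson's ratio
sigma_fail = 150e6  # assumed compressive failure stress, Pa

def thermal_stress(dT):
    """Biaxial compressive stress (Pa) for a temperature rise dT (K)."""
    return E * alpha * dT / (1.0 - nu)

# Temperature rise at which the surface layer would be expected to spall:
dT_spall = sigma_fail * (1.0 - nu) / (E * alpha)

print(f"stress at dT = 200 K: {thermal_stress(200) / 1e6:.0f} MPa")
print(f"temperature rise to reach failure: {dT_spall:.0f} K")
```

A few hundred kelvin of surface heating is enough to reach failure-level stresses in this estimate, which is consistent with spallation favoring stiff, high-expansion rocks and with the text's point that thermal diffusivity (which keeps the heated layer thin) matters as much as the heat source itself.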
Jan 1, 1986
-
An Overview Of The Use Of Coal Cleaning To Reduce Air Toxics. By D. Akers, R. Dospoy
Introduction
The geological processes that form coal can also concentrate trace elements in the coal. For example, the average concentration of arsenic in bituminous coal (20 ppm) is ten times the average concentration found in all the other rocks that make up the earth's crust (2 ppm). Similarly, other elements, such as antimony, cadmium, mercury and selenium, are more concentrated in coal than in the earth's crust. When coal is burned, trace elements can be further concentrated. Although no new constraints on trace element emissions were placed on the power generation industry under the 1990 Clean Air Act Amendments, the act does mandate a three-year study of air toxics. Any new regulations aimed at electric utilities as a result of this Federally mandated study will almost assuredly be very costly, an estimated $1 billion per year. Many trace elements in coal are associated with mineral matter. For example, arsenic is commonly associated with pyrite, cadmium with sphalerite, chromium with clay minerals, mercury with pyrite and cinnabar, nickel with millerite, pyrite and other sulfides, and selenium with lead selenide, pyrite and other sulfides (Finkelman, 1980). There are also cases in which some of these elements are organically bound. Just as both organic and pyritic sulfur can be found in the same coal, the same trace element may be both organically bound and present as part of a mineral in the same coal. Physical coal cleaning techniques are effective in removing mineral matter from coal and can potentially remove at least some of the trace elements associated with specific minerals, thereby reducing the release of these elements into the atmosphere.
Conventional coal cleaning to remove trace elements
As part of a project funded by the Electric Power Research Institute, CQ Inc., a wholly-owned EPRI subsidiary located in western Pennsylvania, has demonstrated that large reductions in the concentration of many trace elements are possible if conventional coal cleaning techniques are properly applied. Four examples are given in Tables 1 to 4. In each example, the results shown were generated by cleaning the coal at CQ Inc.'s commercial-scale cleaning test facility. Cleaning results for Upper Freeport Seam coal from Northern Appalachia are provided in Table 1. Data are presented in the table in two ways: as a weight-based concentration (parts per million) and as a concentration per heat unit (grams per billion Btu). Grams per billion Btu is analogous to pounds per million Btu, but avoids the use of numbers with many decimal places. The heat-based concentration provides a better measure of boiler impacts, because the increased heating value obtained through coal cleaning reduces the number of tons that must be burned to produce a given thermal output. Reducing the quantity of coal burned reduces the quantity of trace elements entering the boiler. This raw coal is relatively high in several trace elements of environmental concern, including arsenic, cadmium and chromium. Cleaning provided large reductions in the quantity of arsenic, barium, cadmium, chromium, fluorine, lead, mercury, nickel, silver and zinc. The results for tests with a Powder River Basin coal, Rosebud/McKay, are presented in Table 2. Large reductions in arsenic, barium, cadmium, fluorine, mercury, nickel, selenium and zinc were observed with cleaning. The concentration of chromium increased with cleaning, while lead concentration increased in one test and decreased in another. Table 3 presents test results for Croweburg Seam coal from Oklahoma. Large reductions in arsenic, barium, cadmium, chromium and zinc were obtained with cleaning.
Smaller reductions were obtained with lead and nickel, while chromium, fluorine and mercury increased in at least one of the tests. Table 4 presents cleaning test data for Kentucky No. 11 Seam coal. In this case, large reductions were obtained with all elements measured. In general, these data indicate that physical coal cleaning is effective in reducing the concentration of many trace elements, especially if they are present in the coal at relatively high concentrations. The degree of reduction achieved is coal-specific, relating in part to the degree of mineral association of the specific trace element and the degree of liberation of the trace element-bearing mineral. The extent of trace element removal also depends on the method of cleaning the coal. Figure 1 is a washability plot, by size fraction, of arsenic vs. ash content for Upper Freeport
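The two reporting bases discussed above (ppm and grams per billion Btu) are related through the coal's heating value, which is why the heat-based figure automatically credits the Btu gain from cleaning. A sketch of the conversion, with illustrative arsenic numbers rather than the actual Table 1 data:

```python
# Convert a trace-element concentration from ppm (equivalently, grams per
# metric ton of coal) to grams per billion Btu, given the heating value.
# The arsenic and heating-value numbers are illustrative, not Table 1 data.

LB_PER_TONNE = 2204.62  # pounds per metric ton

def g_per_billion_btu(ppm, heating_value_btu_per_lb):
    btu_per_tonne = heating_value_btu_per_lb * LB_PER_TONNE
    return ppm / btu_per_tonne * 1e9   # (g/t) / (Btu/t) * 1e9

# Cleaning both lowers the ppm and raises the heating value, so the
# heat-based reduction is larger than the ppm-based reduction:
raw = g_per_billion_btu(20.0, 11000)    # raw coal: 20 ppm As, 11,000 Btu/lb
clean = g_per_billion_btu(8.0, 13000)   # clean coal: 8 ppm As, 13,000 Btu/lb
print(f"raw:   {raw:.0f} g/10^9 Btu")
print(f"clean: {clean:.0f} g/10^9 Btu")
print(f"ppm reduction: {1 - 8 / 20:.0%}, heat-based reduction: {1 - clean / raw:.0%}")
```

Here a 60% drop in ppm becomes a 66% drop per billion Btu, which is the boiler-impact effect the text describes: fewer tons burned per unit of heat means fewer grams of trace element entering the boiler.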
Jan 1, 1994
-
Mining in ancient Egypt – all for one, Pharaoh. By Bob Snashall
Introduction
1300 BC, Egypt. Pharaoh, the god-king, owned all things. He was the only mine operator. As the provider of all things, Pharaoh had great expectations of his officials who gathered the wealth. Pharaoh's official, the mine foreman, was at a gold mine site to see that royal expectations were met. For the official, it could mean a promotion to the good life here and to the godly life hereafter. When he checked the haul for sufficient progress, a lot was at stake. The miner wore a loincloth, perhaps a headband and, if he was a prisoner, ankle manacles. Only an oil lamp helped illuminate the hot, dusty blackness. A fire at the base of the quartz ore face competed for scarce air. The ore so heated crumbled at the prompting of copper wedges. Confined to a crouch, the miner tossed chunks of ore onto a rope-mesh which, when loaded, was drawn up and lugged out. On the surface, the gold was ground to dust. Then it was transported by donkey caravan to the royal depot. There it was weighed, recorded, and distributed to workshops.
Many minerals mined
Egypt had gold mines to the south in Nubia and to the east in the desert and Sinai. Indeed, gold underwrote Egypt's prosperity. With a constant gold supply, fewer hungry hands robbed burial crypts and tombs. Gold was sacred, "the flesh of the gods." The shiny metal financed the army that policed the desert mining routes and guarded the gold caravans from Bedouin marauders. Gold theft was an offense to the gods. Anyone caught with gold `in his lunchpail,' so to speak, could say goodbye to life, both in this world and the next. In addition to gold, Egypt possessed other mined riches that allowed the Egyptian civilization to flourish. From Sinai and Nubia came copper. So abundant was the red metal that it enabled Egypt to become the supreme power, before the advent of iron. Also mined were amethyst, turquoise, feldspar, jasper, carnelian, and garnet.
These were used for the rich inlay work that distinguished Egyptian jewelry and cloisonne. But Egypt's most enduring and awesome material was its stonework - for statues and obelisks and in temples, tombs, and pyramids. Stone quarrying was a vast enterprise. One expedition boasted nearly 10,000 men. These included 5000 laborer-soldiers, 130 skilled quarrymen and stonecutters, and - egads! - even 20 scribes. In addition, there were thousands of officials, priests, officers, and grooms. There were even fishermen, to provide the multitudes with the catch of the day.
Mining methods detailed
In 1300 BC, quarrying techniques had changed little since the age of the pyramids some 1300 years before. At that time, in 2600 BC, limestone was locally quarried and fashioned into the blocks of the pyramids. A basic limestone mining method was tunnel quarrying. A ramp was built up to the face of a cliff. A monkey stage was then erected on the ramp. While standing on the stage, quarrymen carved out a rectangular niche in the cliff. The niche was large enough for a quarryman to crawl into. With a wooden mallet, he hammered long copper chisels along the edges of the niche floor to free up the back and sides of the block. The quarryman climbed out of the niche and removed the stage. He then carved out a series of holes in the cliff face for what would be the bottom of the block. The quarryman pounded wooden wedges into the holes. He watered the wedges until they were soaked. The water-logged wedges expanded, splitting the stone along the line of holes. The freed-up block was then levered down from the cliff. On the ground, the blocks were placed on sledges. Men pulled these to nearby water transport. Without block-and-tackle pulleys, paved roads, and wheels, this was no mean feat. Each block weighed an average of 2.3 t (2.5 st). Whenever possible, the quarrying was done directly from the surface. This "open cast" quarrying also involved using chisels
Jan 2, 1987
-
Saskatchewan potash: near-term problems, long-term optimism. By E. C. Ekedahl, R. J. Heath
Introduction
Potassium, together with nitrogen and phosphorus, is an essential nutrient required for growth. Since all living things need potassium, the major demand for potash - about 95% of the total - is as a fertilizer. Agricultural productivity has increased dramatically in recent times. This increase in crop yields requires substantial amounts of added nutrients to keep the soil fertile. It follows, then, that potash will always be in demand. There is no substitute. Other fertilizers that contain phosphorus (P) and nitrogen (N) are complementary and not competing products. Fireplace ashes (pot-ashes) have a relatively high potassium content, and their value as a fertilizer had been recognized for centuries. But today's potash industry did not begin until deposits of potassium-rich ore were discovered and exploited in Europe during the 19th century.
Canadian potash development
Potash in Saskatchewan was first recognized in 1943, discovered as a byproduct of an oil exploration program. But it was several years before the existence of a major commercial deposit was acknowledged, and not until 1951 that the first attempt at development occurred. That attempt was unsuccessful: the shaft flooded and was abandoned. It did, however, demonstrate the need for new technology to penetrate the waterlogged Blairmore layer. This was eventually developed, and the first mines were brought into production in the early 1960s. Once the technology was available, and the extent and quality of the potash beds became known, a number of companies proceeded to develop mines. By 1970, seven mines were in operation and three more were nearing completion. Combined total capacity then was 7.6 Mt/a (8.4 million stpy) K2O. At that time, world potash consumption was about 15 Mt/a (16.5 million stpy). This increase in supply from Canada produced a large potential surplus that shattered the prevailing balance between supply and demand.
Although world demand increased steadily throughout the 1960s and early 1970s, it was several years before world supply and demand were again in balance. Saskatchewan capacity has been expanded a number of times and now stands at 10.7 Mt/a (11.7 million stpy) K2O. Actual production has not approached this figure, however. Two new mines in New Brunswick have recently been built with a combined annual capacity of 1.2 Mt (1.3 million st) K2O. Total Canadian capacity of about 12 Mt/a (13 million stpy) now amounts to 30% of world capacity. Central offshore marketing organization Canadian Potash Exports Ltd. (Canpotex) was created in 1970 as the offshore marketing organization for Canadian producers. Canpotex is owned by the Saskatchewan producers and is their exclusive marketing organization for offshore business. Each company handles its own sales in Canada and the US, but all sales to other markets are handled through Canpotex. The Saskatchewan industry has an ore body of a size and consistency unmatched anywhere in the world, and its large, efficient mines have production costs that compare favorably with those of other producing countries. On the minus side, Saskatchewan is remote from most major markets. It therefore needs the efficiencies that stem from one organization that coordinates all offshore shipments and minimizes distribution costs. Agriculture guides potash market In the period following World War II, potash was a classic growth industry. World demand increased each year from 1945 to the early 1970s. Since then, demand has been more erratic: some years show substantial increases, but these are followed by significant declines. For about the last decade, the pattern has been unclear, and future demand has become correspondingly difficult to predict. North America and Europe together account for about 40% of world potash consumption. In both areas, farming is characterized by surplus production, declining crop prices, and expensive government support programs.
Under those circumstances, farmers respond by minimizing input costs. Fertilizer is one of the items they reduce. Potash is retained in the soil. It is possible to reduce potash application with no immediate deterioration in crop yield. The lower yields occur only when potash levels are depleted. So, farmers can econo-
Jan 12, 1987
-
Initiation Of A Personal Alpha Dosimetry Service In Canadian Uranium Mines. By P. J. Duport
INTRODUCTION In February 1981, the Canadian Institute for Radiation Safety (CAIRS) initiated a routine personal alpha dosimetry service for personnel of the Canadian uranium mining industry. This service is based on the use of the personal alpha dosimeter developed by the French Atomic Energy Commission (CEA). The origins of personal alpha dosimetry and its rationale are briefly described, and the technical and organizational aspects of a routine personal alpha dosimetry service are outlined in this paper. HISTORICAL BACKGROUND International recommendations (1) and Canadian regulations have established Maximum Permissible Exposures (MPE) for each source of radiation exposure. Uranium workers in mines and mills are exposed to external radiation (γ rays) and to internal radiation (β and α particles) delivered to the respiratory tract by airborne alpha emitters (Rn and Th daughters and long-lived dust). To date, dosimetry for uranium workers has been performed by area monitoring/collective dosimetry. In North America the concentration of radon daughters is routinely measured by grab samples taken at the work place and by on-site gross alpha counting. The concentration of potential alpha energy is then calculated (usually by the Kusnetz method) and expressed in Working Levels (WL). The time spent by each worker at a given work place is determined from his time sheets and used to calculate the individual monthly exposure to airborne alpha emitters, which is then expressed in Working Level Months (WLM). The uncertainties attached to such a procedure are obvious even in the case of frequent grab sampling, and can be expected to lead to an underestimation of individual doses. Among fifteen possible error sources identified in a mine situation (2), four may stretch the standard deviation of the measurements' distribution, nine may lead to an underestimation, and two may lead to either an underestimation or an overestimation.
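The area-monitoring calculation described above follows the conventional definition that 1 WLM corresponds to exposure at 1 WL for 170 working hours. A minimal sketch, with workplace WL values and time-sheet hours that are illustrative rather than taken from the paper:

```python
# Sketch: a worker's monthly exposure in Working Level Months (WLM)
# from area monitoring, using the convention 1 WLM = 1 WL x 170 h.
# The (WL, hours) pairs below are illustrative values, not the paper's.

def exposure_wlm(visits):
    """visits: list of (working_level, hours_at_that_level) pairs."""
    return sum(wl * hours for wl, hours in visits) / 170.0

# A month reconstructed from time sheets and grab samples:
month = [(0.3, 100.0), (0.1, 60.0), (0.5, 10.0)]
print(round(exposure_wlm(month), 3))  # -> 0.241
```

The paper's point is that each WL figure here comes from occasional grab samples, so the product of concentration and occupancy carries large uncertainties that a worn dosimeter avoids.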
To improve this situation, the Atomic Energy Commission began studying in 1971 the use of personal alpha dosimeters to determine individual exposures from the airborne alpha emitters encountered in uranium industry environments. Criteria for a Personal Alpha Dosimeter In order to minimize the difficulties encountered in determining the exposures received by uranium workers, the CEA, in co-operation with the Atomic Energy Control Board of Canada (AECB), has developed a set of criteria for personal alpha dosimeters. Using these criteria, exposures may be determined easily and accurately. Autonomy The dosimeter must operate for at least 10 to 12 hours. Excess time spent in the mine or in the facility may possibly be related to an accidental situation causing unusual levels of radioactivity. Since the dosimeter may be needed in non-underground settings where a cap lamp is not used, full autonomy is desirable. Maintenance, Periodicity of Reading In order to complement other dosimetry systems, the personal alpha dosimeter should be read monthly, at which time the filter should also be changed. Routine air flow checks can be made according to local conditions (e.g., diesel loading). Radioisotope Identification Since the exposure unit (WLM) is based on the concentration of potential alpha energy in the air, the personal alpha dosimeter should be capable of identifying each short-lived alpha emitter included in the calculation of the WL and WLM. Permanent Exposure Record Three points may be considered here: 1. In many countries, lung cancer in uranium workers is a compensable occupational disease. In some instances, compensation is awarded when it can be proven that the worker has received an exposure above a certain limit. The present uncertainty of individual exposures makes the compensation procedure difficult. 2.
By design, a personal alpha dosimeter must representatively sample all airborne particles, ranging in size from the unattached fraction to the upper limit of respirable aerosols (0.001 to 5 µm). The dosimeter must offer minimal resistance to the penetration of these aerosols. While the mining/milling environment presents harsh conditions which may accidentally contaminate the dosimeter, it is important to be able to distinguish these cases of contamination and still obtain accurate readings. 3. A dependable dose register is most valuable for further epidemiological studies. The dependability of such a data base increases with the possibility of a second assessment of the dosimeters' readings (filter, film).
Jan 1, 1981
-
Rare Earth Minerals. By Stephen B. Castor
The rare earth elements (REE), which include the 15 lanthanide elements (Z = 57 through 71) and yttrium (Z = 39), are so called because the elements were originally isolated in the late 18th and early 19th centuries as oxides from rare minerals. Most REE are not as uncommon in nature as the name implies. Cerium, the most abundant REE (Table 1), comprises more of the earth's crust than copper or lead. Many REE are more common than tin and molybdenum, and all but promethium are more common than silver or mercury (Taylor, 1964). Promethium (Z = 61) is best known as an artificial element, but has been reported in very minute quantities in natural materials. Lanthanide elements with low atomic numbers are generally more abundant in the earth's crust than those with high atomic numbers. In addition, lanthanide elements with even atomic numbers are two to seven times more abundant than adjacent lanthanides (Table 1) with odd atomic numbers. The lanthanide elements traditionally have been divided into two groups: the light rare earths (LREE), lanthanum through europium (Z = 57 through 63); and the heavy rare earths (HREE), gadolinium through lutetium (Z = 64 through 71). Although yttrium is the lightest REE, it is usually grouped with the HREE, to which it is chemically and physically similar. The REE are lithophile elements (elements enriched in the earth's crust) that invariably occur together naturally because all are trivalent (except for Ce4+ and Eu2+ in some environments) and have similar ionic radii. Increase in atomic number in the lanthanide group is accompanied by addition of electrons to an inner level rather than the outer shell. Consequently, there is no change in valence with change in atomic number, and the lanthanide elements all fall into the same cell of the periodic table.
The chemical and physical differences that do exist within the REE group are caused by small differences in ionic radius, and generally result in segregation of REE into deposits enriched in either light lanthanides or heavy lanthanides plus yttrium. The relative abundance of individual lanthanide elements has been found useful in the modelling of rock-forming processes. Comparisons are generally made using a logarithmic plot of lanthanide abundances normalized to abundances in chondritic (stony) meteorites. The use of this method eliminates the abundance variation between lanthanides of odd and even atomic number, and allows determination of the extent of fractionation between the lanthanides because such fractionation is not considered to have taken place during chondrite formation. The method is also useful because chondrites are thought to be compositionally similar to the original earth's mantle. Europium anomalies (positive or negative departures of europium from chondrite-normalized plots) have been found to be particularly effective for petrogenetic modelling. REE were originally produced in minor amounts from small deposits in granite pegmatite, the geologic environment in which they were discovered. During the second half of the 19th century and the first half of the 20th century, REE came mainly from placer deposits. With the exception of the most abundant lanthanide elements (cerium, lanthanum, and neodymium), individual REE were not commercially available until the 1940s. Since 1965, most of the world's REE have come from two hard rock deposits: Mountain Pass, United States, and Bayan Obo, China. GEOGRAPHIC DISTRIBUTION OF REE DEPOSITS More than 70% of the world's REE raw materials come from three countries: China, the United States, and Australia. China emerged as a major producer of REE raw materials during the 1980s, while Australian and United States market share decreased dramatically (Fig. 1).
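The chondrite-normalization procedure described above reduces to dividing each measured lanthanide abundance by its chondritic abundance and plotting the logarithm. A minimal sketch; the chondrite reference values and the rock analysis below are illustrative placeholders, not figures from this article, and a published reference set should be substituted in practice:

```python
import math

# Sketch of chondrite normalization for a lanthanide analysis.
# CHONDRITE_PPM holds assumed reference abundances (ppm); the rock
# analysis is likewise invented for illustration.

CHONDRITE_PPM = {"La": 0.237, "Ce": 0.613, "Nd": 0.457, "Eu": 0.0563}

def chondrite_normalized(sample_ppm):
    """Return element -> log10(sample/chondrite), the quantity plotted."""
    return {el: math.log10(sample_ppm[el] / CHONDRITE_PPM[el])
            for el in sample_ppm}

rock = {"La": 30.0, "Ce": 60.0, "Nd": 25.0, "Eu": 1.0}
norm = chondrite_normalized(rock)
# A negative Eu anomaly appears as Eu plotting below the smooth
# La-Ce-Nd trend on the normalized diagram.
```

Because the odd/even abundance zigzag is present in both the sample and the chondrite, the ratio cancels it, which is the point made in the text.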
Table 2 gives recent annual production figures along with estimated reserves by country, and Fig. 2 shows locations of significant REE mining. MINERALS THAT CONTAIN REE Although REE comprise significant amounts of many minerals, almost all production has come from fewer than ten minerals. Table 3 lists minerals that have yielded REE commercially or have potential for production in the future. Extraction from a potentially economic REE resource is strongly dependent on its REE mineralogy. Minerals that are easily broken down, such as bastnasite, are more desirable than those that are difficult to dissociate, such as allanite. In general, producing deposits contain REE-bearing minerals that are relatively easy to concentrate because of coarse grain size or other attributes. For more thorough discussions of REE-bearing minerals see Mariano (1989a) and Cesbron (1989).
Jan 1, 1994
-
Room-and-Pillar Method of Open-Stope Mining - Study of Interrelationships and Constraints in Underground Coal Mining by Room-and-Pillar Methods. By Stanley C. Suboleski, C. B. Manula
INTRODUCTION In any mining operation all possible steps should be taken to increase efficiency. One area for improvement is mine planning and design, particularly in the area of equipment selection for room-and-pillar systems. Because of the availability of a wide variety of face machines, a fair degree of selectivity can be exercised in the choice of equipment for a particular job. However, this choice must be made on the basis of quantitative facts and forecasts related to the mining application. The purpose of this section is to develop and analyze the details of the mining process. Some specific areas studied include the relationship of system design to productivity, suboptimization as a result of equipment changes, and measurement of system performance. The plan of work leading to a quantitative description of these study areas is based on the growing interest in total system design using simulation as an analytical method (Manula, 1963). CHARACTERISTICS OF PRODUCTION OPERATIONS FROM ROOM-AND-PILLAR SECTIONS For a given mining method, raw production in a given section of a mine is primarily dependent upon the coal seam thickness, roof and floor conditions, methane emission, the mining methods, and the man-machine element. Average section production varies from 300 to 800 st per shift for conventional and continuous mining in high seams and from less than 100 to 300 st per shift in low seams. Since the reject varies from 0 to 40%, these figures must be decreased by the appropriate percentage to reflect the amount of clean coal mined. Personnel requirements per production section per shift for the various methods are listed in Table 1. Table 1. Production Personnel: Conventional, 12-15; Continuous, 9-12; Longwall, 9-14; Shortwall, 9-12. MINING VARIABLES To evaluate the constraints and interrelationships for various mining methods, it is necessary to categorize the variables which underlie system production potential.
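The raw-to-clean tonnage adjustment quoted above is a simple multiplication; a sketch with illustrative numbers (the shift tonnage and reject fraction are examples, not data from this study):

```python
# Sketch: clean-coal output per shift from raw section production and
# reject percentage, using the 0-40% reject range quoted in the text.

def clean_tons(raw_tons_per_shift, reject_fraction):
    """Reduce raw production by the reject (refuse) fraction."""
    if not 0.0 <= reject_fraction <= 0.4:
        raise ValueError("text quotes rejects of 0 to 40%")
    return raw_tons_per_shift * (1.0 - reject_fraction)

print(clean_tons(600.0, 0.25))  # -> 450.0
```

In a simulation of section productivity this factor would be applied to every loaded car, so reject percentage feeds directly into the system-performance measures the section discusses.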
Seven critical independent variables which determine production can be identified and categorized (Suboleski, 1978): Seam Height The five categories are as follows: less than 36 in.; 36 to 55 in.; 55 to 100 in.; 100 to 180 in.; and greater than 180 in. Floor Quality Floor quality ranges from: Excellent: Smooth, hard, grades less than 1 to 1-1/2%, and dry. Good: Smooth, soft but dry, with grades less than 3%. The floor will deteriorate, but cautious operation can prevent it; there may possibly be heaving at some later time. Fair: Soft and damp. There is occasional interference with equipment operation; requires the use of four-wheel-drive shuttle cars; ruts with regular use; and may have adverse grades of 5 to 7%. This may be coupled with slippery bottom and/or occasional steep rolls. Poor: Soft and wet. Requires blocking of the bottom to support equipment. There are frequent steep rolls and grades in excess of 7%. Roof Quality Roof quality ranges from: Excellent: Men are able to work under the unsupported top during the initial production cycle if legally permitted. Good: The roof is bolted on a 4 x 4 or 5 x 5 pattern with short bolts (<42 in., or less than seam height if the seam is >42 in.), or requires posting with no bolts on a 4 x 4 or 5 x 5 pattern. There are no falls. Average: The roof is normally bolted on a 4 x 4 or 5 x 5 pattern, but with long bolts (greater than seam height, or >6 ft). There are infrequent minor falls, or there may be an excellent roof which is difficult to drill. Fair: This type often requires spot bolting in addition to the regular pattern, or bolting with planks. The roof conditions require shorter than planned cuts, or narrow cuts. Poor: This type requires bolts plus crossbars and posts, or installation of yielding supports or truss-type support. The roof is almost certain to fall if this is not done.
Methane Liberation This ranges from none detected, to low (no buildup at the face, even with minimum ventilation requirements), to moderate (the curtains must be extremely tight and tubing close to the face or methane will build up to 1% during the loading of the car), to high (methane will build up to 1% if the miner is operated at the normal rate, even with proper ventilation). Hardness of Coal Coal hardness falls into the following categories: Soft: Soft coal is easily cut by a continuous miner; a plow could be used on a longwall. Average: Coal of average hardness can be easily cut by a miner, and a shearer would be used on the longwall. Moderate: Moderately hard coal causes difficult cut-
Jan 1, 1982
-
State-of-the-Art of α Individual Dosimetry in France. By P. R. Zettwoog
HISTORICAL BACKGROUND A program to develop personal α dosimeters was initiated in France in 1974. The patent on which the present device is based was obtained in 1972*. From 1972 to 1974, the possibility of applying certain ionographic track detectors to the spectrodosimetry of radon daughters was explored. The first prototypes were produced in 1974. It took four years (from 1974 to 1978) to produce an autonomous dosimeter whose components had a sufficient life span, especially the turbine-motor unit. Qualification in the laboratory was obtained in 1977; qualification in the mine was obtained in 1978 for technology (autonomy of 12 hours and a life span of more than one year) and in 1980 for monitoring. In all, 300 dosimeters have been tested in underground mines. The indispensable peripheral equipment was also developed from 1976 to 1980: calibration devices, equipment to prepare and develop the films, and read-out systems. The concept of an "Integrated System of Individual Dosimetry" (ISID), based on a personal α dosimeter measuring exposure to radon daughters, thoron daughters, ore dust, and external irradiation doses, was proposed at the end of 1980. Since January 1, 1981, ISID has been used on a routine basis in some French mines situated in remote areas, and appears to be very competitive with ambient dosimetry. The latest version of the dosimeter has been produced in mass series since June 1981 and should equip all French mines in 1982. DESCRIPTION OF THE INSTRUMENTATION DEVELOPMENT OF THE DOSIMETER MEASURING HEAD The measuring head is based on the use of ionographic film to detect α tracks. In fact, the measuring head is a spectrodosimeter which measures separately over the period of exposure: - the potential α energy inhaled due to the decay of Po 218, Po 214, and Po 212; - the number of Rn 222 atoms inhaled; - the inhaled total α activity of the five long-lived emitters present in the ore dust.
The contribution to the total inhalable potential α energy of these various radionuclides in a typical underground mine is studied in Appendix I. The measuring head, described in detail in Appendix II, is able to satisfy all the implications of the ICRP recommendations. Appendix III deals with the use of this measuring head in cases where the equilibrium factor is lower than 0.1. This situation occurs in open-pit mines, where account must be taken of the Rn 222 contribution, which is no longer negligible in relation to that of its daughters. CURRENT DOSIMETER PERFORMANCE Table I shows the characteristics of the latest dosimeter. Appendix IV should be consulted concerning qualification of the dosimeter in the laboratory and in mines, the technological development which finally produced the α dosimeter and its peripheral equipment, and a technical presentation of the ISID (Integrated System of Individual Dosimetry) based on the concept of a multirisk personal dosimeter. Data on the installation and operating costs of such a dosimeter, which would seem to be competitive, are also given in this Appendix. ADVANTAGES OF PERSONAL DOSIMETRY AS COMPARED TO AREA MONITORING The results of the first eight months of experiments carried out under real conditions at an underground mine site are given in detail in reference 8. Area monitoring: The monthly exposure per worker to inhaled Rn 222 was determined from the knowledge of time spent in various areas of the mine and in the different mining operations, as well as from numerous and systematic samplings of the Rn 222 concentration in all work places. Personal dosimetry: The exposure to potential energy from radon daughters was measured by an α dosimeter developed by the CEA and worn by each of the 160 miners during eight months. In this way 160 x 8 pairs of monthly individual exposure values have been obtained, which can be studied statistically.
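The role of the equilibrium factor mentioned above can be illustrated with the conventional relation between radon concentration, equilibrium-equivalent concentration, and Working Level (1 WL corresponding to 3700 Bq/m3 of equilibrium-equivalent Rn 222). A sketch with assumed input values, not measurements from this paper:

```python
# Sketch: potential alpha-energy concentration in Working Levels from
# a Rn-222 concentration and an equilibrium factor F, using the
# conventional 1 WL = 3700 Bq/m3 equilibrium-equivalent Rn-222.
# Inputs are illustrative.

def working_level(rn222_bq_m3, equilibrium_factor):
    eec = rn222_bq_m3 * equilibrium_factor  # equilibrium-equivalent conc.
    return eec / 3700.0

# Open-pit case from the text: F below 0.1, so the daughters alone give
# a small WL even at an appreciable radon gas concentration.
print(round(working_level(1000.0, 0.05), 4))  # -> 0.0135
```

This is why, at low equilibrium factors, the Rn 222 gas contribution itself must be accounted for, as Appendix III of the paper does.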
This test was decisive for us because it proved that the α dosimeter was technically sound (very few defects over one year for 160 dosimeters) and, especially, that personal monitoring devices were superior to area monitoring devices. The following conclusions can be drawn: 1. The exposure distribution obtained by personal dosimetry is log-normal. This is true for the results as a whole as well as for groups of results relating to certain explanatory variables. See Fig. 1a, 1b, 1c. 2. The exposure distribution obtained by area monitoring does not correspond to any type of distribution. If the results of personal monitoring are taken as a reference, area monitoring tends to underestimate the high exposures and overestimate the low exposures. See Fig. 2. 3. α-energy exposures are underestimated when calculated from radon exposures and the equilibrium factor found in the considered mines. This is due to episodes or zones of high radon concentrations not registered
Jan 1, 1981
-
Classical Mineral Processing Principles in Technical Ceramics Applications. By K. S. Venkataraman
The physical properties of clay-water systems depend on the complicated system of forces between the clay particles themselves, and between the clay particles and the ions in the liquid phase. The kind and distribution of ions in, on, and between the clay particles, and the size and shape of the particles, are the basic factors determining the macroscopic behavior of clay-water systems. Understanding the system requires a knowledge of the nature of the clay particles, their size, structure, composition, and surface properties, and of the manner in which they interact with ions [and molecules] in the surrounding liquid [or other medium]. The validity of Professor Brindley's words (Brindley, 1958), written three decades ago in the context of making pottery, whitewares, and electrical porcelains, transcends time, and the basic message is perhaps all the more important given the considerably expanded use of ceramics for structural, thermal, tribological, electronic, and other applications. Silicon carbide, silicon nitride, and sialons have been studied in the last two decades for high-temperature structural and tribological applications, particularly for use in internal combustion engines. Titanates, zirconates, and niobates of barium, strontium, and lead have high dielectric constants, and are extensively used in formulations for making capacitors. Hexagonal ferrites (general formula MO·6Fe2O3) are in use for making permanent magnets for fabricating miniature motors, and for assembling loudspeakers, particle accelerators, etc. Cubic ferrites such as magnesium-zinc ferrite and nickel-zinc ferrite are used as transformer cores, and for other high-frequency applications. In this context, Richerson's recent book (Richerson, 1984) on the general scope of traditional and technical ceramics is a good starting point for an overview of contemporary ceramics technology.
Glasses are a whole class of amorphous materials used widely as sintering aids, and for making glass-bonded ceramics and glass-ceramic composites. Composites are yet another burgeoning field, where two or more particulate components are used to improve the performance of ceramics. For all these applications, the inorganic starting materials are almost always submicron and near-micron powders. Understanding the powders' physicochemical properties, and their surface chemical interactions with the surrounding liquid/gaseous medium, is necessary for making reliable ceramic parts at competitive prices. Even though ceramics science and engineering has attained its separate identity in universities and industry, ceramists themselves would concede that ceramics science is a cross-disciplinary field, having incorporated and assimilated within itself many principles from several apparently disjointed disciplines. Principles of materials science, graduate-level physics and chemistry, polymer science, surface and colloid chemistry, transport phenomena, particle technology, unit operations commonly used in chemical engineering and mineral processing, and statistics and applied mathematics are an integral part of any ceramics curriculum in universities. Added to this is the fact that all bench-scale successes in making ceramic parts must be scaled up for larger throughput operations. Understanding and applying process engineering principles of comminution, classification, drying, calcination, etc. then becomes essential. CERAMIC FORMING: Despite the diversity of the materials and processes, conceptually the steps involved in making ceramic parts have remained the same over several decades: the different components for making the part (usually one or more powders plus other forming and sintering additives) are proportioned and mixed thoroughly, and the well-mixed formulations are consolidated into desirable shapes known as "green bodies."
Usually binders such as wax, clay, organic polymers, and surfactants, whether dispersed or dissolved in a suitable liquid, are used during mixing of the batch to give strength to the green bodies. In the dried green state, the inorganic powders typically occupy only 55 to 60% of the bulk volume of the body, depending on the particle size distributions of the powders and the forming history, with mostly interparticle voids accounting for the rest of the volume. SINTERING: The formed bodies are then fired in high-temperature kilns/furnaces, during which the parts are exposed to a predetermined temperature profile, "soaked" for a certain duration at the final high temperatures (typically between 1200 K and 1900 K), and then cooled to room temperature. The gaseous atmosphere in the furnace is controlled (oxidizing, reducing, or inert) when necessary. During the initial stages of firing, volatile liquids evaporate, and at intermediate temperatures between 400 and 600 K the organic polymeric additives pyrolyze and oxidize into water vapor, CO, CO2, and other gases. At still higher temperatures, the glasses, when present, soften, and simultaneously the ceramic particles rearrange into a network of grains with definite grain boundaries so as to reduce the total interfacial free
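The 55 to 60 vol% solids loading quoted for dried green bodies implies a simple mass-volume balance. A sketch with an assumed powder density (the 3.2 g/cm3 figure is illustrative, roughly that of a carbide powder, and is not taken from the text):

```python
# Sketch: bulk (green) density and void fraction from the solids
# loading quoted above (55-60 vol% powder). Binder mass is neglected
# and the powder density is an assumed illustrative value.

def green_density(solids_fraction, powder_density_g_cc):
    """Bulk density of the green body, ignoring the small binder mass."""
    return solids_fraction * powder_density_g_cc

phi = 0.58                     # volume fraction of powder
rho = green_density(phi, 3.2)  # assumed powder density, g/cm3
print(round(rho, 2), round(1.0 - phi, 2))  # -> 1.86 0.42
```

The void fraction computed this way is the porosity that sintering must eliminate, which is why densification and shrinkage during firing are so substantial.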
Jan 1, 1990
-
Pitfalls In Air Sampling For Radioactive Particulates. By L. H. Munson, D. E. Hadlock, L. F. Munson, R. L. Gilchrist, P. D. Robinson
All uranium mills are required to perform sampling and analysis for radioactive particulates in their gaseous effluent streams and in the environment. Pacific Northwest Laboratory was requested by the U.S. Nuclear Regulatory Commission (NRC) to provide technical assistance for its Uranium Mill Health Physics Appraisal Program. In conducting appraisals, the air sampling methods used at NRC-licensed mills were reviewed and several deficiencies noted. This paper covers only environmental and effluent particulate sampling, although much of the information is applicable to in-plant samples as well. First, the components of a proper sampling program are discussed: program objectives, program design, sampler design, analyses, quality assurance, and data handling. Then the specific deficiencies, or "pitfalls," from the first 8 mill appraisals are discussed. The first consideration in establishing an air sampling program is defining the objectives of the program. What is air sampling supposed to accomplish? Many of the deficiencies we have observed have resulted because the desired objectives were not clearly established in the minds of the radiation safety staff. PROGRAM OBJECTIVES An environmental air sampling program ought to fulfill the following seven objectives. The first is to: 1) [demonstrate regulatory compliance]. Although a goal of most programs, regulatory compliance is not well understood. One has not only to comply with the conditions of the source material license, but one must also demonstrate compliance with 10CFR20 and 40CFR190. For example, 10CFR20.106 states: "A licensee shall not possess, use, or transfer licensed material so as to release to an unrestricted area radioactive material in concentrations which exceed the limits specified in Appendix B, Table II of this part .... For purposes of this section, concentrations may be averaged over a period not greater than one year."
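The averaging provision quoted above reduces to a time-weighted annual mean compared against the Appendix B limit. A sketch with invented sample values and an invented limit, purely to show the arithmetic (the real limits are nuclide-specific and must be taken from the regulation):

```python
# Sketch: annual-average concentration check reflecting the 10CFR20
# averaging provision quoted above. Concentrations, hours, and the
# limit are illustrative, not regulatory values.

def annual_average(samples):
    """samples: list of (concentration, hours_sampled) pairs."""
    total_hours = sum(h for _, h in samples)
    return sum(c * h for c, h in samples) / total_hours

quarterly = [(2.0e-14, 2190.0), (5.0e-14, 2190.0),
             (3.0e-14, 2190.0), (2.0e-14, 2190.0)]  # uCi/ml, hours
limit = 1.0e-13  # hypothetical Appendix B value for illustration
print(annual_average(quarterly) <= limit)  # -> True
```

Time weighting matters here: a sampler that runs intermittently over-represents the periods it happens to capture, which is one of the pitfalls the appraisals found.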
Even if a mill's license does not require sampling at the site boundary of maximum concentration, a sample may be necessary to demonstrate compliance with 10CFR20. Most mill personnel are painfully familiar with 40CFR190.10, which states: "Operations.... shall be conducted in such a manner as to provide reasonable assurance that: (a) The annual dose equivalent does not exceed 25 millirems to the whole body.... of any member of the public as the result of exposures to planned discharges of radioactive materials, radon and its daughters excepted... from uranium fuel cycle operations..." This means a licensee's sampling program must give "reasonable assurance" that the member of the general public receiving the most exposure gets no more than 25 millirems per year. The sampling program necessary to provide that assurance may or may not be a license requirement. However, merely meeting the license requirements and the explicit regulatory requirements does not necessarily ensure an adequate effluent and environmental air sampling program. The second objective of the environmental air sampling program is to 2) [identify the source(s) of contaminants]. This will include not only the routine program, but special sampling for verification of sources and nonsources. Only after sampling can a mill operator be assured that roof vents, laboratory hoods, and other localized ventilation systems are not making a significant contribution to environmental releases. An environmental sampling program should also allow the mill operator to fulfill the third objective, to 3) [estimate exposures]. Even before 40CFR190, a sampling program should have provided the mill operator with the information necessary to determine the dose to the "fence post" person, or at least to determine if doses were well below the 10CFR20 limits previously allowed. The program should 4) [detect and measure unplanned releases].
If there is a fire, a scrubber failure, or if a drum of yellowcake breaks open, measured releases will almost always be lower than conservative estimates. Whether or not a system to provide sampling during accidents is needed is almost always a cost-benefit decision. In general, uranium operations do not sample just in case an accident may occur. Yet they may decide on continuous air sampling in lieu of intermittent sampling partially because of the potential for accidents. Another objective of air sampling is 5) [to provide information on the effectiveness of control systems]. This is always a concern with new or modified equipment and may dictate sampling frequency in other situations as well. For instance, if a small leak in a bag filter cannot be detected by other means, then more frequent stack sampling may be indicated. A routine effluent and environmental monitoring program should also fulfill the sixth objective,
Jan 1, 1981
-
Dynamic Methods of Rock Structure Analysis. By Fred Leighton
INTRODUCTION Dynamic (seismic or microseismic) methods of determining the stability of structures in rock are based on detecting and analyzing the characteristics of seismic energy that has originated from or traveled through the rock mass. This seismic energy can be in the form of naturally occurring rock noise energy resulting from structural adjustments within the rock, or can be introduced into the structure by physical means, such as by blasting or impact. In either case, the seismic energy radiating through the rock mass can be detected using standard equipment and can be analyzed by established techniques to reveal a wide variety of information concerning the condition and stability of the rock mass through which the energy has traveled. In the following sections, the basic instrumentation required for seismic and microseismic studies is described, and some of the presently used applications of these methods are discussed to exemplify the state of the art. INSTRUMENTATION Seismic disturbances in a rock structure generate two types of seismic wave radiation, body waves and sometimes surface waves, which radiate outward in all directions from the source of the disturbance. Underground mining applications are generally concerned only with discerning the characteristics of the resulting body waves, i.e., the compressional (p-wave) and the shear (s-wave) energy. As these two forms of energy travel through the rock structure, the particles of the rock mass are caused to vibrate, and the vibration characteristics resulting from each of the two types of wave are distinct. Some important differences are: 1) Compressional and shear waves travel at different velocities through the rock structure. 2) The frequency at which each wave causes particles to vibrate is different, and may range from about 50 to 100 000 Hz. 3) The amplitude or energy level of each wave is different, with the shear energy usually being the greatest.
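One classical use of the velocity difference in point 1 is estimating the distance to a seismic source from the arrival-time lag between the faster p-wave and the slower s-wave. This is the standard S-P relation rather than a procedure detailed in this excerpt, and the velocities below are assumed values for a hard-rock mass:

```python
# Sketch: distance to a seismic source from the p/s arrival-time
# difference, d = dt * Vp*Vs / (Vp - Vs). Velocities are assumed
# illustrative values, not measurements from the text.

def source_distance_m(dt_s, vp_m_s=5000.0, vs_m_s=3000.0):
    """Distance (m) from the s-minus-p arrival delay dt_s (seconds)."""
    return dt_s * vp_m_s * vs_m_s / (vp_m_s - vs_m_s)

print(round(source_distance_m(0.02), 1))  # -> 150.0
```

With several geophones, distances of this kind from each station can be intersected to locate the source of a rock-noise event.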
These differences form the basis for equipment selection for individual studies and for modern data analysis techniques. The following sections describe the basic equipment necessary to detect and record seismic wave energy data and show several examples of analysis procedures and how these procedures have been used. In principle, seismic equipment is very simple. It consists of a geophone (or geophones) to detect the seismic energy vibration and convert that vibration to an electric signal, an amplification system to increase the level of that signal, and a means of monitoring and/or recording the signals detected. Fig. 1 is a block diagram of a typical system. The following sections offer a very brief discussion of system components and their individual functions. A more complete discussion is given by Blake, Leighton, and Duvall (1974). Geophones The function of the geophone is to detect the vibrations caused by the passing of the seismic wave energy and to convert that vibration into an electrical signal that displays both the amplitude and frequency characteristics of the vibration. Particle motion or vibration can be quantified and measured by measuring displacement, velocity, or acceleration of the particles. Thus, there are three types of geophones: displacement gages, velocity gages, and accelerometers. The choice of gage depends on the characteristic frequencies of the seismic energy to be monitored and the sensitivities of each type of geophone. In general, displacement gages are used for low-frequency monitoring (periods to 1.0 Hz), velocity gages for medium-frequency monitoring (1.0 to 250 Hz), and accelerometers for high-frequency monitoring (250 to 10 000+ Hz). Experience has shown that in underground studies, the choice of which gage to use lies between velocity gages and accelerometers. An easy, accurate method for selection of gage type is discussed by Blake, Leighton, and Duvall (1974).
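The frequency-based selection rule above can be sketched as a simple lookup. The band edges come from the text; the function name and structure are ours, and a real selection would also weigh gage sensitivity as the article notes.

```python
# Minimal sketch of the gage-selection rule described above.
# Band edges are from the text; the function itself is illustrative.

def select_gage(freq_hz):
    """Return the geophone type suited to a characteristic frequency (Hz)."""
    if freq_hz < 1.0:
        return "displacement gage"   # low-frequency monitoring
    elif freq_hz <= 250.0:
        return "velocity gage"       # medium-frequency monitoring (1.0-250 Hz)
    else:
        return "accelerometer"       # high-frequency monitoring (250-10 000+ Hz)

print(select_gage(60.0))    # velocity gage
print(select_gage(2000.0))  # accelerometer
```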
Once the type of geophone has been selected for use, it must be properly installed, and in the installation procedure the most important step is ensuring that the gage is firmly attached to a competent portion of the rock structure. Poorly mounted geophones may entirely fail to recognize low-level seismic signals and will distort the information from signals they do see. Amplifiers Seismic events associated with mine structures occur over a very broad range of energy, which results in a broad range of geophone output levels. In general, geophone output levels occur in the microvolt to low millivolt range, and it is necessary to amplify these signals in order to drive recording or monitoring equipment. Because either an accelerometer or a velocity gage might be used as the geophone, the amplification system must
Jan 1, 1982
Sublevel Caving at Craigmont Mines Ltd. By R. A. Basse, W. D. Diment, A. J. Petrina
INTRODUCTION In 1957, diamond drilling on a magnetic anomaly indicated an extensive zone of copper mineralization on what is now the Craigmont Mines property. By mid-1958, drilling established a copper ore body. Milling commenced in September 1961 at 4536 t/d (5000 stpd), and by the end of October 1977 the mine had produced 339 662.04 t (374,363.9 st) of copper. At present, two-thirds of the mill feed is derived from underground operations and one-third from low-grade surface stockpiles. Craigmont Mines is situated 209 km (130 air miles) northeast of Vancouver (see Fig. 1), 16 km (10 miles) west of the town of Merritt, a logging, ranching, and mining community of about 7000 people. It is serviced by paved highways, Canadian Pacific Railway, British Columbia Hydro, and Inland Natural Gas Co. Water is pumped from the Nicola River, a distance of 6 km (4 miles) and a lift of 244 m (800 ft). In March 1967, the open pit mining operations at Craigmont Mines reached their economic limit and were suspended. Before this, it had been decided that a sublevel caving method of underground mining would be used to supply ore to the concentrator after the cessation of open pit production. This chapter describes the factors influencing the choice of mining method, some of the problems encountered, mining practices, and results. GEOLOGY The ore bodies of upper Triassic age are located in a limy horizon striking east-west, closely paralleling the intrusive Guichon batholith, bounded on the south by rhyolites and on the north by graywackes, and dipping steeply to the south (Figs. 2a, b). The ore bodies are relatively narrow, with a maximum width of 79 m (260 ft), a combined strike length of 853 m (2800 ft), and a vertical extent of 610 m (2000 ft). Chalcopyrite is virtually the only copper mineral, and 20% of the ore zone consists of acid soluble magnetite and hematite.
The area has been subjected to considerable faulting and brecciation, which is a major factor in the mining operation. Total geological reserves, at 0.7% Cu cutoff, for the deposit were 22 316 743 t (24,600,000 st) at 1.89% Cu. An additional 5 236 270 t (5,772,000 st) at 0.6% Cu were mined from the open pit. Ground Conditions The waste rocks (graywacke, andesites, and diorite) are relatively incompetent due to the high degree of fracturing and jointing, and all require varying degrees of support. The ore zones are somewhat less fractured; ground support is still required, however, although to a lesser extent than in the country rock. Ground conditions in the main ore body are better than in the smaller, narrower ore bodies. Clayey fault gouge is present in most of the faults; gouge zones may be up to 6 or 9 m (20 or 30 ft) wide. The main ground problems are associated with local weakness rather than pressure. Shape of Ore Bodies (Figs. 2a, b and 3a, b) The main No. 1 ore body is approximately 244 m (800 ft) long and 46 m (150 ft) wide. It extends vertically from the original top of the open pit at 4200 elevation to just below the 3060 level. The No. 2 ore body is approximately 304 m (1000 ft) long, varies from stringer width at the extremities up to 79 m (260 ft) wide, and extends from 3060 level to 2400 level. Both these ore bodies have extensions resulting in additional small irregular bodies. Ore bodies are mostly steep dipping, though part of the Wing ore body, an extension of No. 2 ore body, dips at 0.87 rad (50°). This ore body varies in size, but is approximately 122 m (400 ft) long, 21 m (70 ft) wide, and about 213 m (700 ft) high. No. 1 Limb ore body is a narrow extension of the No. 1 Main with a vertical extent of 137 m (450 ft), an average width of 18 m (60 ft), a strike length of 152 m (500 ft), and dips steeply at 1.4 rad (80°). No. 1 East is an eastern extension of the No. 1 Main with a vertical extent of 183 m (600 ft), a strike length of 91 m (300 ft), an average width of 30 m (100 ft), and dips at 1.2 to 1.4 rad (70 to 80°). No. 1 South is at the upper west end of the open pit with a vertical extent of 76 m (250 ft), a strike length
Jan 1, 1982
Recent Developments in the Design of Large Size Grinding Mills By Norbert Patzelt, Johann Knecht
INTRODUCTION Grinding mills have been used in the minerals processing industry for over 100 years. Their dimensions have grown continuously during this time. Besides the increasing throughput rates of grinding plants due to the depletion of high-grade ores, lower specific investment costs as well as reduced operating and maintenance requirements are major reasons for this trend. When selecting new plant equipment, one must consider that design principles which have proven their reliability on the sizes of today's equipment do not automatically warrant a successful operation on ever larger sizes of equipment. Modern calculation methods, such as the finite element method, already contribute considerably to the safe design of the huge equipment being built today and are a standard tool of design engineers. More recently, modern computer programs are also being used to size the equipment to meet the process requirements. Today, two design principles are on the market: one which supports the weight of such a unit on trunnion bearings through cast conical endwalls, and one in which the mill is supported through slipper pad bearings arranged at the circumference of the mill shell (Fig. 1). The reason for the development of this alternative grinding mill design can be found in the past. During the sixties and seventies, the growing sizes of ball mills with high L/D ratios caused many mill failures due to cracked endwalls. The accuracy of the calculation methods as well as the quality standards for castings were not developed to the degree required for such heavy equipment. One way to overcome these problems was to increase the manufacturing quality standards and to introduce the finite element method based on an analysis of the experience available. The biggest grinding mills being built today are large size SAG mills with cast conical endwalls and trunnion bearings (Fig. 2).
This is due to the fact that mill manufacturers who had come from the conventional ball mill design adopted these principles for their SAG mills as well. These grinding mills perform well without special concern to the operators. Other manufacturers overcame the problems mentioned above by completely eliminating the heavy castings and trunnion bearings and the problems associated with them (Fig. 1). This design was originally applied to ball mills for the mining and other industries. Due to the success of these shell supported ball mills, this design principle was also applied to SAG mills (Fig. 3). Despite the fact that the majority of today's grinding mills are built to the conventional design, it is also interesting to have a look at this alternative. Principles which have proven their reliability on the sizes of today's equipment do not automatically warrant a successful operation on ever larger equipment if bigger mill sizes are realized only on the pantograph principle, i.e., by simple geometric scale-up. With growing grinding mill sizes, the mass and volume flows through the equipment increase rapidly. Thus it is very important to concentrate not only on the safe design of the structural components of the equipment but also on the process requirements. The influence of the design on important process parameters of dry and wet grinding plants is discussed hereafter. It shall be shown how modern computer programs can assist in the optimization of the design of components in order to fulfil the operational requirements of such large size equipment. PROCESS REQUIREMENTS OF LARGE SIZE GRINDING MILLS Dry Grinding Mills The world's biggest ball mill is a dry grinding ball mill having a diameter of 6.2 m and an overall length of 25.5 m, with a drive power of 11,200 kW (15,000 hp). This grinding mill dries and grinds gold ore at a rate of 500 tons per hour at a moisture content of up to 9.5%. As shown in Fig. 4, this mill was built as a shell supported unit.
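A quick back-of-envelope check on the figures just quoted: drive power divided by throughput gives the mill's specific energy consumption, a standard comparison number in grinding plant sizing.

```python
# Sanity check on the quoted mill data: specific energy = power / throughput.
# Both input figures are taken directly from the text.

drive_power_kw = 11200.0   # 11,200 kW (15,000 hp)
throughput_tph = 500.0     # tons per hour of gold ore

specific_energy = drive_power_kw / throughput_tph  # kWh per ton
print(specific_energy)  # 22.4 kWh/t
```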
In fact, only this design principle allowed the process requirements to be met. This mill could hardly have been built with cast conical endwalls, due to the constraints of the trunnion bearings limiting the mill inlet. The following case shows how modern computer programs can help to meet the design criteria of the air system of large size dry grinding plants. For dry grinding plants, the gas flow through the SAG mill has to match the drying, as well as the material transportation require-
Jan 1, 1998
Discussion - (Mis)Use Of Monte Carlo Simulations In NPV Analysis - Davis, G. A. By R. J. Pindred
Discussion by R.J. Pindred In his paper, Davis presents an overview of risk. He also introduces the Capital Asset Pricing Model (CAPM) as a foundation for selecting the appropriate discount rate for a mining project. While applying portfolio theory is more defensible than the ad hoc adjustment of discount rates, the CAPM is not a panacea. CAPM shortcomings [The CAPM, as Davis stated, is expressed in the equation: ri = rf + βiφ, where ri is the project discount rate, rf is the risk-free interest rate, βi is the project beta, and φ is the market risk premium (rm - rf).] Application of the CAPM is more difficult than Davis indicates. Valuation is prospective, while the CAPM parameters are historical. Beta is determined from a regression analysis of historical data, while the beta needed for valuation is the expected beta. Betas are known to be unstable, and the regressions that generate them often have low explanatory power. The difficulty of estimating a "project" beta must also be considered. Thus, the beta that is used in the CAPM will be based on the analyst's judgment. Like Cavender's discount rate, this judgment can lead to different project NPVs. Subjectivity in valuation cannot be avoided by a mechanical application of the CAPM. The risk-free rate, which Davis identifies as a short-term real rate of 4%, is also subject to scrutiny. A mining project is not a short-term investment, and no single risk-free rate is appropriate for all of the cash flows. The hypothetical mine discussed in Cavender's paper is a six-year project. One might argue for the application of a risk-free rate from the Treasury yield curve at the duration of the project (in a bond-duration sense). This, too, is inappropriate. The risk-free rate should be matched to the timing of the cash flow. These rates can be determined by calculating the implied forward rates from the yield curve using a procedure known as "bootstrapping."
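The CAPM equation in the discussion can be sketched numerically. The 4% risk-free rate and the candidate betas (0.35, 0.45, 0.55) appear in this exchange; the 6% market risk premium is our illustrative assumption. The spread of resulting discount rates is exactly the subjectivity Pindred points to.

```python
# Minimal sketch of the CAPM discount-rate equation quoted above:
#   r_i = r_f + beta_i * phi,  phi = r_m - r_f (market risk premium).
# The 6% premium is an assumed value for illustration.

def capm_rate(rf, beta, market_premium):
    """Project discount rate under the CAPM."""
    return rf + beta * market_premium

# Different judgment-based betas give different discount rates
# (and hence different project NPVs) from the same model:
for beta in (0.35, 0.45, 0.55):
    print(f"beta {beta}: discount rate {capm_rate(0.04, beta, 0.06):.2%}")
```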
It is likely that each of the project's cash flows would be discounted at a different rate. Commodity prices Davis criticizes the "ad hoc adjustment to the discount rate." Yet, in his discussion of the value of stochastic simulation, he suggests that the gold price be modeled as a "random walk, with or without a trend." This is essentially an arbitrary modeling of price risk. Consider that a liquid market in gold futures exists. The futures price curve, which is closely related to the market's estimate of future spot gold prices, should be used to provide inputs to the model. This is especially true of a relatively short six-year project. Alternatively, as Davis correctly points out, a risk-averse investor can sell the commodity short to hedge price risk. Is it any more correct, in the portfolio sense, to account for price risk at all? References Cavender, B., 1992, "Determination of the optimum lifetime of a mining project using discounted cash flow and option pricing techniques," Mining Engineering, Vol. 44, No. 10, pp. 1262-1268. Fabozzi, F.J., 1993, Bond Markets, Analysis and Strategies, Second Edition, Prentice Hall, Inc. Higgins, R.C., 1992, Analysis for Financial Management, Third Edition, Richard D. Irwin, Inc. Solnik, B., 1991, International Investments, Second Edition, Addison Wesley. Reply by G.A. Davis Pindred discusses two issues related to my paper: the shortcomings of the Capital Asset Pricing Model (CAPM) and which commodity price values to use in the valuation exercise. Even though these topics are not directly related to the use or misuse of Monte Carlo simulation, they are important points to take into consideration in valuation exercises. Since I do not appear to have addressed these issues satisfactorily in my original paper, I will comment on each here. Pindred agrees with me that applying portfolio theory, and specifically the CAPM, to the selection of project discount rates is more defensible than ad hoc methods.
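The "bootstrapping" of rates mentioned in the discussion, so that each cash flow is discounted at a rate matched to its timing, can be sketched as follows. The upward-sloping spot curve here is an assumed example, not data from the papers.

```python
# Sketch of bootstrapping one-period implied forward rates from a spot curve,
# so each year's cash flow gets its own matched risk-free rate.
# Relation used: (1 + s_n)^n = (1 + s_{n-1})^(n-1) * (1 + f_n).
# The spot rates below are assumed values for a six-year project.

def implied_forwards(spot_rates):
    """Given annual spot rates s_1..s_n, return implied one-year
    forward rates f_1..f_n (f_1 equals s_1 by convention)."""
    forwards = [spot_rates[0]]
    for n in range(1, len(spot_rates)):
        f = (1 + spot_rates[n]) ** (n + 1) / (1 + spot_rates[n - 1]) ** n - 1
        forwards.append(f)
    return forwards

spots = [0.040, 0.045, 0.050, 0.053, 0.055, 0.056]  # assumed spot curve
for yr, f in enumerate(implied_forwards(spots), start=1):
    print(f"year {yr}: implied forward {f:.3%}")
```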
But he then points out that the application of the CAPM to project valuation is more difficult than I indicate. It is true that the CAPM is a difficult tool for project valuation in general. But the application of the CAPM to mining projects is one of the easiest I can think of. The biggest problem with using the CAPM for project valuation is coming up with an expected project beta. I suggest a project beta for gold projects of 0.45. The "true" value might be 0.35, 0.55, or whatever. Pindred correctly notes that the selection of the appropriate project beta is based
Jan 1, 1996
Luncheon Speech By Lowell T. Harmison
I appreciate very much the invitation to speak with you and the opportunity of bringing you messages from both the Secretary of the Department of Health and Human Services and the Assistant Secretary for Health/Acting Surgeon General of the U.S. Public Health Service. I would like to take this opportunity to congratulate you (the organizers of this Conference) on identifying the critical issues in the field and assembling such a broad array of experts to address them. I would like to present a brief view of the emerging framework for health that puts into perspective some of the aspirations of the Administration and to highlight several points with regard to prevention and occupational health. The goals are: 1. To improve the overall health status of our people. (This has been and will remain the National policy regarding health.); 2. To engage the Nation in the important effort of enhancing public health. (This is not reserved exclusively for the activity of the Federal Government or for State Governments. Public health has to be a cooperative effort that brings together all of the people engaged in the process of serving the people.); and 3. To pledge that health care will not be priced out of anyone's reach because of inflation. (It is clear that there are major tasks of bringing about economic recovery in our country. One aspect of this effort is to guard against the cost of health care being allowed to rise beyond the reach of persons who need that care.) "How will these goals be achieved, and what must change in the delivery of health and medical care in our society?" There are a number of real issues as well as perceptions that adversely affect the attainment of these goals: First, the cost of medical care is soaring, and the public, industry, unions, and other elements of our society are becoming concerned. (They recognize the problem and are demanding a solution.); Second, there is a growing concern about the priorities that have been set.
(For example, the evidence that preventive interventions are the most effective approach is overwhelming, yet medicine has not yet given that a high priority.); and Third, there is the perception that physicians do too much to too many people at too great a cost and that too many and too costly technologies are used. In view of these perceptions, we all must accept some changes and the challenges that needed changes will bring. A month before the new budget went to Congress, President Reagan went on nationwide television and told the American people that "It is time to recognize that we have come to a turning point. We are threatened with an economic calamity of tremendous proportion, and the old business-as-usual treatment can't save us. Together we must chart a new course." Now, eight months down the road from this, after a long spring and summer of discussion both within the Executive Branch and in the Congress, many plans and programs and concepts have emerged. The new course has been charted and the turning point has been made. Business as usual has been put aside, and the Administration's leadership has been stretched and tested in putting forth a better approach, with the reality that money is tight and that old habits of delivering care are difficult to change. The Congress has now given us a look at a new health budget that takes into account some of the harsh economic realities and that does make allowances for the persistence of familiar behavior. Against this background, it is now possible to begin addressing ways to provide health services to people at a price the Nation can afford to pay. There are without question difficult decisions involved, but the Administration is committed to supporting and improving health care in America. It has been the President's contention that one of the principal causes of the inflationary spiral in the country was the steady and indefensible growth of the Federal budget.
The problem stems from the fact that we have been living well, but beyond our means for nearly 30 years. Now we are discovering that there is a bottom to the barrel after all. It is possible for our society to run out of things like energy (oil), water or money. The health bills must be paid -- by Government, by insurance, by parents or by someone. Each year with a bigger shopping list and more money to spend the Federal Government went into the marketplace to buy. This action altered the
Jan 1, 1981
Ventilation Systems As An Effective Tool For Control Of Radon Daughter Concentrations In Mines By Aladar B. Dory
INTRODUCTION Practical experience in mines with a known presence of radon daughters in the mine atmosphere, in Canada and elsewhere, shows that very high concentrations build up in an unventilated dead-end heading. As Holaday et al.1 observed, even a minimal air movement results in a drastic reduction in radon daughter concentration. It is therefore obvious that the main objective of radon daughter control in the working environment is to design a ventilation system that provides an optimized flow of fresh air into the workplace, producing acceptable climatic conditions and radon daughter concentrations that keep exposures as low as reasonably achievable. BASIC OBJECTIVES Large mining companies, having extensive material resources and professional expertise, utilized elaborate electrical modelling in the design of mine ventilation systems as early as 1950 (the coal mining industry in Europe), and with the advance of computer modelling techniques, their utilization in ventilation system design is on the increase. Unfortunately, these methods are usually not available to small mining companies, and even the large companies might not achieve the fullest benefit from utilizing them if proper limiting factors are not considered in the modelling. When an evaluation of a mine ventilation system is undertaken in the literature, the amount of air supplied underground per ton of ore mined is used as an indicator of the efficiency of the ventilation system. Yet even the greatest amount of air forced into the mine might not result in an acceptable working environment if a proper distribution of this air into individual working places is not achieved. The volume and the age of the air are probably the two most important factors in achieving acceptable radon daughter concentrations in the workplace, but other factors also have to be considered. DIRECTOR MINE - ALCAN, NEWFOUNDLAND FLUORSPAR WORKS, ST. LAWRENCE, NEWFOUNDLAND, CANADA Ventilation To illustrate the effects of the design of the ventilation system on the control of radon daughter concentration, let us review the gradual development of the ventilation system of this mine from the earlier years of its development up until its final years of operation. This mine, located near the community of St. Lawrence on the south coast of the Burin Peninsula, was developed in the late thirties and reached full production by 1942. Unfortunately, as was customary at that time, the only source of ventilation was natural draft. The mine was extremely wet, and no significant attention was initially given to the possible health effects of dust. It was not until the mid-fifties, when a number of cases of silicosis had surfaced, that de Villiers and Windish2 observed a significant increase in lung cancer incidence among the miners in comparison to its incidence among the general population of Newfoundland. Suspicions regarding radiation as a cause of the lung cancer were expressed, but it was only in surveys taken in late 1959 and early 1960 that Windish3 and Little4 established the presence of radon daughters in the mine atmosphere in very high concentrations. Windish, de Villiers and Hurley suggested that the most likely source of the radon in the mine was the mine water, which dissolved radon during its passage through the granitic country rock in the surrounding geological area. This conclusion was confirmed by analyses of water from various areas of the mine by the Atomic Energy of Canada Limited laboratories. The radon values in the samples varied from 4,240 to 12,850 pCi/L.5 Following the discovery of the presence of radon daughters in the mine, the company took speedy action to install mechanical ventilation for the mine. The system was not designed as a total unit; rather, fans were installed on a trial-and-error basis. The basic system installation began in March 1960 and was completed by 1962.
It remained basically unchanged, with only minor modifications, until August 1973, when a wholly new, redesigned ventilation system was implemented. A schematic section of the mine and its ventilation system for the period prior to March 1960 is given in Figure "A", for the period 1960-1973 in Figure "B", and for the period after August 1973 in Figure "C". Little is known of the ventilation system prior to 1960. All workings of the mine were ventilated only by natural ventilation. If any measurements of airflows at any time of the year ever existed, no records have been preserved. The very minimal natural ventilation was augmented by "blowing" air from compressed air supply lines and exhaust air from drills. It is known that the compressor capacities of the mine were limited, and therefore probably no significant air movement was created by the "blowing".
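The dilution principle underlying the article's introduction, that even minimal air movement drastically reduces concentrations, can be sketched with a well-mixed steady-state model. All the numbers below are invented for illustration; real radon daughter levels also depend on the age of the air (decay-product ingrowth), which this simple ratio ignores.

```python
# Hypothetical steady-state dilution sketch for a well-mixed heading:
# concentration C = S / Q, where S is the radon emanation rate into the
# heading and Q is the fresh-air flow. All figures are assumed.

def steady_state_concentration(source_rate, airflow):
    """Steady-state concentration (pCi/L) for emanation source_rate (pCi/s)
    diluted by airflow (L/s), ignoring decay-product ingrowth."""
    return source_rate / airflow

# Compressed-air "blowing" only (assumed 50 L/s) versus a mechanically
# ventilated heading (assumed 5000 L/s), same assumed source of 2e4 pCi/s:
for q in (50.0, 5000.0):
    print(f"{q:>6.0f} L/s -> {steady_state_concentration(2.0e4, q)} pCi/L")
```

The hundredfold increase in airflow gives a hundredfold reduction in the steady-state level, which is the intuition behind the mine's move from natural draft to a designed mechanical system.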
Jan 1, 1981