Mineral Science and the Future of Metals – 1973 Jackling Lecture
By Lyman H. Hart
Some of the significant facts that will affect the supply of and demand for metals during the next few decades are given in this presentation. This is important because intelligent guidance of the national destiny depends upon presenting a consistent and accurate picture of basic problems. The metals problems now brewing are only slightly less important than those that precipitated the energy crisis. Admittedly, it is difficult to generate serious thinking on this subject when the world is enjoying easy access to all metals at modest prices. This is possible because of the current world capacity to over-produce, and because the United States credit standing, at least up to now, has permitted buying freely on world markets. Although both these vital conditions are subject to change, there is little doubt that if the U.S. financial house is kept in order, we can muddle along for a while on a generally expanding, metal-based economy. But should our credit standing be permitted to collapse, or should there be a tightening of world supplies of metals, the nation would immediately find itself in trouble. Unfortunately, clear signals indicate that both of these conditions are threatening. The metal age started many centuries ago, and with the exception of the noble metals, barely a dent was made in world reserves until the present century. Then, as a result of the pioneering ideas of D. C. Jackling and Paul Gemmel, and the willingness of the McNeals, Penroses, and Guggenheims to back ideas with money, a virtual explosion of metal production was propagated throughout the world. It is important to examine the causes of this expansion to determine whether it is a trend that may be sustained or whether it is anomalous and therefore attributable to circumstances not likely to be repeated or continued. This period of plenty is a once-only phenomenon. Vast metal resources have lain here for eons while the world was unaware of or unconcerned with their existence.
But when population began to grow exponentially, and ways were discovered to use metals to build machines that do the work of men and animals, the exploitation of raw materials increased at a staggering rate. And now the realization is dawning that minerals are a wasting asset and are beginning to present supply problems that should be of concern. Two main factors have caused this condition: one is population growth, and the other is the expanding demand for irreplaceable raw materials as nations become more developed. A group of systems analysts at the Massachusetts Institute of Technology, in the book The Limits to Growth, has warned that at present rates of population growth and resource consumption, together with environmental constraints, a point will be reached within 100 years when the world population will become totally incapable of supporting itself. The authors conclude that this end result could be avoided if a planned equilibrium could be established at a lower rate of population growth and resource consumption. Fortunately, there is evidence that the population problem is moderating. The growth rate in the U.S. has decreased from 2.3% per year in 1900 to 1.1% in 1970. But even at the lower rate, population will double in about 70 years. The world rate is greater, and if it continues, world population will rise from 3.5 billion to nearly 10 billion in 50 years. All of these projections are questionable because the dangers of growing population are now being recognized. A marked decrease in growth rate has been noted in the more developed countries; in fact, it was zero in the U.S. in 1972. The other factor affecting consumption of raw materials is the increase in per capita use of most items as individuals become more affluent. Many economists believe that developing countries such as China and India represent a vast future potential market; but there are others, with whom the author agrees, who believe over-population tends to stagnate development.
Nevertheless, it is a fact that vigorously developing societies increase consumption at a rate far above their population growth rate.
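The compound-growth arithmetic behind these doubling-time and world-population figures can be checked with a short sketch. The rates and populations are the lecture's; the functions themselves are illustrative. Note that exact compounding at 1.1% per year doubles a population in about 63 years, close to the lecture's round figure of 70, and growing from 3.5 to 10 billion in 50 years implies a world rate of roughly 2.1% per year.

```python
import math

def doubling_time(rate):
    """Years required to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

def implied_rate(p0, p1, years):
    """Constant annual rate that grows population p0 to p1 over the span."""
    return (p1 / p0) ** (1 / years) - 1

# U.S. rate cited for 1970: 1.1% per year.
print(round(doubling_time(0.011)))                     # about 63 years
# World: 3.5 billion to ~10 billion in 50 years.
print(round(implied_rate(3.5, 10.0, 50) * 100, 1))     # about 2.1% per year
```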
Jan 1, 1974
-
Institute of Metals Division - Precipitation from Martensitic Solid Solutions of Ti-Cu Alloys
By R. Taggart, D. H. Polonis, W. C. Gallaugher
In the Ti-Cu system, the α' phase can be produced over a wide range of alloy composition without the retention of measurable amounts of the β or ω phases. This paper reports on the decomposition of this hexagonal martensite phase from the standpoint of solute concentration, tempering temperature, and coherency strain condition. The mechanism of precipitation during tempering comprises localized precipitation, which is followed by discontinuous precipitation if sufficient coherency strains are present. The localized precipitation process in hypoeutectoid alloys is described by the generalized Johnson-Mehl equation and an activation energy of 46,000 cal per mole. In hypereutectoid alloys the corresponding activation energy is 51,000 cal per mole. The rate-controlling process is proposed to be the diffusion of copper along α' platelet interfaces. SEVERAL investigations have been reported concerning the kinetics of decomposition of the martensitic α' phase in titanium binary alloys.1 In the Ti-Mo system, two modes of α' decomposition have been observed in the presence of β phase; one of these involves the precipitation of fine α from α', and the other involves diffusion across the α'-β interface. In studies of the Ti-Cr system, Rostoker2 did not detect the formation of TiCr2 during tempering. In Ti-Ni alloys,3 the intermediate phase Ti2Ni has been found to precipitate directly from α' along the interfaces between plates. With the exception of the study of Ti-Ni alloys, the previous investigations of tempering phenomena in substitutional martensites are mainly qualitative and do not present a detailed description of the precipitation processes. The following limitations restrict the correlation of the microstructural processes and reaction kinetics during tempering in most binary-alloy systems. 1) A mixture of at least two phases characterizes the constitution of quenched alloys.
2) Difficulties have been encountered in obtaining uniform structures throughout quenched samples. 3) The reaction products, and in particular their morphology, have not been clearly resolved. 4) The martensitic α' phase forms over only a limited composition range for most titanium-base binary alloys. Gomez and Polonis4 showed that the Ti-Cu system provides an excellent basis for investigating the tempering of α' over a range of composition without the complication of retained β or the transition ω phase. In the present work the effects of solute concentration, tempering temperature, and coherency strain conditions are considered with reference to the over-all precipitation process in quenched alloys of the Ti-Cu system. Both microstructural observations and kinetic data are correlated to define the rate-controlling processes which govern the observed localized and discontinuous precipitation reactions. An earlier paper5 was devoted to a discussion of the modes of heterogeneous nucleation of Ti2Cu from the martensitic α' structure. The present paper will therefore emphasize the progress of the tempering reaction beyond the initial stages. EXPERIMENTAL METHODS The Ti-Cu alloys for this study were prepared by conventional arc-melting procedures. Chemical analysis revealed that the alloy buttons contained approximately 0.02 wt pct O, 0.04 wt pct N, and within 0.1 wt pct of the intended copper concentration. The progress of the tempering reaction was followed by means of electrical-resistance measurements utilizing a Kelvin double bridge, microhardness readings with a 400-g load, and X-ray diffractometry patterns to reveal line-broadening effects. Metallographic specimens examined at magnifications greater than X1000 were shadowed with germanium to reveal fine structural details. Direct carbon replicas were prepared for the electron-microscopy studies. EXPERIMENTAL RESULTS Property Changes.
The changes of microhardness that accompany the precipitation of Ti2Cu from α' are shown in Figs. 1 and 2. As the solute concentration increases, the peak hardness for a given tempering temperature increases. In alloys of given composition the time to reach the peak hardness agreed with the time to attain maximum X-ray diffraction peak breadth, as shown in Fig. 3. The maximum hardness and the maximum line breadth increased with lower tempering temperatures, and the time to reach the maximum also increased. The α' peaks broadened during the initial stages of tempering
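The kinetics quoted in the abstract (generalized Johnson-Mehl behavior with activation energies of 46,000-51,000 cal per mole) can be sketched numerically. The prefactor, Avrami exponent, and temperatures below are illustrative assumptions, not the paper's fitted values:

```python
import math

R_CAL = 1.987  # gas constant, cal/(mol K)

def arrhenius_ratio(q_cal, t1_k, t2_k):
    """Rate-constant ratio k(t2)/k(t1) for activation energy q_cal."""
    return math.exp(-q_cal / R_CAL * (1 / t2_k - 1 / t1_k))

def jmak_fraction(k, t, n):
    """Johnson-Mehl (Avrami) fraction transformed: 1 - exp(-(k t)^n)."""
    return 1.0 - math.exp(-((k * t) ** n))

# With Q = 46,000 cal/mol, raising the tempering temperature from 673 K
# to 723 K speeds the localized reaction by roughly a factor of 11.
print(round(arrhenius_ratio(46_000, 673, 723), 1))
# Fraction transformed at k*t = 1 (illustrative k and t, n = 1.5).
print(round(jmak_fraction(0.01, 100, 1.5), 3))
```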
Jan 1, 1965
-
Comparative Cavability Studies at Three Mines
By Louis A. Panek
INTRODUCTION AND SUMMARY With respect to the geomechanics aspects, the primary technical objectives in mining by an undercut-cave method are to achieve a controlled, sustained caving of the mineral body and to remove the fragmented ore with a minimum of dilution from surrounding unmineralized rock, while maintaining stability or control of the rock structure around the access openings. The structurally ideal ore body probably does not exist. If the rock mass is so weak as to cave on a short span, it may tend to be too sticky for easy drawing and handling, or create special problems in regard to support of the access openings. If the ore is strong, caving may be difficult and the caved fragments may be of such a large size as to require special equipment for transferring the ore from the caved zone. Given enough time and money to generate data and conduct trials, engineering and ingenuity can devise an appropriate combination of mine layout, sequence of extraction, and mechanical equipment to achieve a technically successful caving extraction operation to meet the foregoing requirements in many types of deposits. The large capital investment involved, however, reduces the freedom to make major changes once the mine development is well underway, and the penalties for failure to accurately anticipate operating conditions militate against selecting any but the most obvious candidates for mining by an undercut-cave method. The demonstrated capability to extract large, deep, economically marginal deposits by this low-cost, high-volume method of mining provides an incentive to develop a rationale for predicting the cavability and stability characteristics of a deposit prior to mining, so that the undercut-cave method may be extended to a much wider range of mineral deposit characteristics.
The ultimate goal is to establish as explicitly as possible the quantitative interrelationships between the measured rock-mass characteristics, the caving span, the size distribution of caved ore fragments, and the sizes and locations of stable access openings. Lacking an understanding of these relationships, a designer may readily change some factor in the wrong direction (e.g., excessively reduce the distance between the extraction level and the undercut to increase the convenience of operations for the undercutting crew, increasing the frequency of repairs to the extraction-level support system) or create unnecessary problems elsewhere in the system by introducing a design change that can achieve only minimal improvement in the factor of direct interest (e.g., unnecessarily complicate the ore-transfer system by changing the orientations of the openings, without succeeding in the objective of improving the ground support conditions). Although successful predesign is the prime objective, subsequent modifications in mine layout and sequence of extraction operations are inevitable. In developing the modified solution, systematic experimentation based on an understanding of the underlying structural relationships, coupled with monitoring measurements of selected diagnostic structural-behavior parameters, can achieve an acceptable solution in a minimum number of steps, which is far superior to the typical operational trial-and-error approach, in view of the cost of implementing each successive change. Since a drill-core sample of ore rock from a successful undercut-cave operation may exhibit a uniaxial crushing strength in excess of 100 MPa, the caving of such a rock mass is now commonly believed to be ascribable to the presence of discontinuities such as joints or fractures throughout the ore body.
An essential part of the present investigation was therefore to characterize the natural discontinuities at each of the test sites by measuring their attitudes and spacings. The term "fracture" is used herein in a general sense to include any planar discontinuity without implication as to its suggested mode of origin. Most, but not all, of the fractures are properly termed joints. As a point of departure we may consider the possibility that the rock mass is transected by three families of joints, each family possessing a distinct orientation, such that parallelepipeds of intact rock are delineated by the jointing. Even if cementation is absent between adjoining parallelepipeds, the undercutting of the rock mass will not necessarily initiate sustained caving; owing to the all-around confinement, an arch may tend to stabilize over the undercut unless prevented from doing so by the failures of key blocks of intact rock. Thus, although the jointing can be assumed to weaken the rock mass, creating preferred directions of
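The parallelepiped picture lends itself to a quick calculation: given the normal spacing and orientation of each of the three joint families, the volume of the delineated block follows from the scalar triple product of the families' unit normals. The function below is an illustrative sketch, not part of the original study:

```python
import math

def block_volume(spacings, normals):
    """
    Volume of the parallelepiped block delineated by three joint families,
    given each family's normal spacing and the (not necessarily unit)
    normal vector of its planes.  For three mutually orthogonal families
    this reduces to the product of the spacings.
    """
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    a, b, c = (unit(v) for v in normals)
    # scalar triple product of the unit normals
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    s1, s2, s3 = spacings
    return s1 * s2 * s3 / abs(det)

# Orthogonal joint sets spaced 0.5, 1.0, and 2.0 m apart give 1.0 m^3 blocks.
print(block_volume((0.5, 1.0, 2.0), [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
```

Oblique joint sets enlarge the block for the same spacings, since the denominator shrinks as the normals become less independent.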
Jan 1, 1981
-
Regional Geochemical Patterns in Wyoming and Northern Colorado Defined by Stream Sediment Analyses
By Richard G. Warren, Michael M. Minor, Gayle J. Thomas
INTRODUCTION Los Alamos Scientific Laboratory (LASL) initiated its effort in the Hydrogeochemical and Stream Sediment Reconnaissance (HSSR), a part of the Department of Energy (DOE) National Uranium Resource Evaluation (NURE) program, in late 1975. Since that time, LASL has completed sampling of the Rocky Mountain states of New Mexico, Colorado, Wyoming, and Montana at a density of about one water or waterborne sediment sample per 10 km2 and has sampled about 85% of Alaska at a density of at least one per 25 km2. Analytical results for these samples are reported by National Topographic Map Series (NTMS) quadrangle. All collection and analytical procedures have been standardized from the outset (Sharp, 1977; Sharp and Aamodt, 1978). Until early 1979, only uranium results were reported for these samples; thereafter, results include analyses for at least 42 additional elements in the sediment samples. Each LASL HSSR report also includes a discussion of the relationship of these analytical data to known or possible uranium resources; more recently, these data have been re-examined for their relationship to resources for other metals (Beyth et al., 1980, 1980a). Analytical results for LASL HSSR sediment samples are both comprehensive, including 43 elements, and precise, allowing close inter-comparison of data between quadrangles. A recent HSSR study shows that elemental concentrations in sediment samples from the Dixon Entrance quadrangle in Alaska are insensitive to the choice of sieve size fraction and do not vary significantly between stream locations a few meters apart, except for extremely high elemental concentrations (Warren et al., 1980). Elements that are particularly reproducible include thorium, hafnium, and the rare earth elements. Fortunately, sensitivities for these elements, which are often associated with uranium, are excellent by the nondestructive analytical technique of neutron activation analysis that LASL employs.
Wet chemical techniques may not always give reliable results for these elements in sediment samples due to the difficulty of dissolving the resistate mineral phases that normally contain these elements. LASL has recently open-filed results for a large number of NTMS quadrangles; to date, analyses have been reported for about 60% of the quadrangles within the states of the Rocky Mountain region. As a result, uranium analyses are now complete for all sediment samples collected from the state of Wyoming. We have chosen to summarize the analytical results for Wyoming and portions of adjoining states, hereafter termed "the Wyoming region" (Fig. 1); nearly 24 000 uranium analyses and nearly 17 000 analyses for 42 additional elements are available for sediment samples collected from this area. The DOE open-file report numbers are shown in Fig. 1 for reports open-filed before September 1, 1980. The Wyoming region provides an ideal area for an examination of regional geochemical patterns exhibited by sediment samples. It is endowed with a variety of exposed geologic units and with widely distributed uranium districts associated with host units of several ages. However, discussion will focus on rock units of Eocene and Precambrian ages because they are widely exposed and host the majority of the uranium resources within the region (Fig. 2). Miocene units are also widespread and host mineralization in the Brown's Park Formation (Fig. 2). Epigenetic uranium mineralization occurs in sandstones of lower Eocene age such as the Wasatch, Battle Springs, and Wind River Formations, whereas vein type mineralization occurs in Precambrian crystalline rocks. Precambrian rocks within the region consist of a variety of granitic and metamorphic rocks with 1.4-2.8 billion year ages, except within the Uinta Arch, where they consist of a sequence of very low grade metamorphosed or unmetamorphosed 1.0 billion year old sedimentary rocks.
Precambrian rocks have served as sources for much of the clastic material comprising the lower Eocene units, and may also have provided the source for the uranium in mineralized Eocene or Miocene sandstones (Stuckless, 1979). The remainder of this paper describes the relationship of elemental concentrations in HSSR sediment samples to exposures of Eocene and Precambrian rocks. The results are used primarily to determine where these Precambrian rocks might provide the most suitable uranium source areas and to infer the direction of transport toward adjacent depositional basins during the Eocene.
Jan 1, 1980
-
Institute of Metals Division - Stored Energy and Release Kinetics in Lead, Aluminum, Silver, Nickel, Iron, and Zirconium after Deformation
By Robin O. Williams
The increase in internal energy as the result of deformation has been measured for lead, aluminum, silver, nickel, iron, and zirconium by using rapid, adiabatic compression. The stored energy increase is roughly proportional to the strain; the proportionality constant increases rapidly with increasing melting point. The fraction of the mechanical energy which is stored increases more slowly, since the strength of the metals also increases with melting point. The values of the stored energy are considered accurate to about 10 pct. The present values appear about 50 pct larger than the more reliable published results where comparisons are possible. It is possible that this difference is due to the high strain rate used in this investigation. Immediately after deformation all these metals release energy at a rate roughly proportional to (time)-1. This release is considered to be associated with dislocation motion, but in aluminum (and copper) some additional process seems to be present. This release can represent 20 pct or more of the stored energy. WHEN pure metals are plastically deformed, most of the mechanical energy is converted into heat. The energy remaining within the metal is significant in that it is the energy of the disorder produced, and thus detailed knowledge of this energy is a powerful tool in understanding the nature of deformational disorder. While much effort has been expended on this problem, the amount of information available is limited. The situation as of 1958 has been carefully reviewed by Titchener and Bever.1 The present results have been obtained by a new experimental approach to this problem. The method necessitates high strain rates, which make comparisons with published results less certain, but a high strain rate is an advantage in that the energy release immediately after deformation can be followed.
EXPERIMENTAL PROCEDURE In the experimental method used, the internal (stored) energy is given as the difference between the mechanical work used in deforming the sample and the heat which is released (the first law of thermodynamics). The work is supplied by two identical hammers swinging freely from a fixed height, the available energy being the product of the mass, the gravitational constant, and the distance through which the center of gravity moves. The initial temperature rise of the sample represents the heat produced by the deformation. The sample temperature is determined by a small thermocouple embedded in and supporting the sample. This process can be repeated over and over to produce increased strains. Most samples are run through about five cycles to a total strain of around 0.7. Further details are covered elsewhere. The determination of the heat is dependent upon the sample weight, its specific heat, the rise in temperature, and any gains or losses to the surroundings. One is dependent on published results for specific heats (the values were taken from a collection).3 The values must be accurate to about 0.1 pct in order that they not affect the results. The best values thus do not contribute to the uncertainty, but this is probably not always the case. The accuracy of the temperature rise is limited primarily by knowledge of the thermocouple characteristics over short temperature intervals, but careful calibration eliminates this as an important factor. One can calculate readily the heat flow between the sample and hammers if the time of contact and the sample temperature are known and if there is no oil at the interface. Assuming the maximum temperature difference, upper values for this interchange have been calculated (the time of contact is determined from the decrease in sample length), and it may or may not be significant.
However, no correction is considered necessary because of the presence of a thin oil film, which has a thermal conductivity much less than the metals. Except for the possible uncertainty in specific heat, the heat is considered to be adequately known. The mechanical losses which are recognized as important are: 1) friction between the sample and the hammers, 2) the kinetic energy in the hammer suspension system, which may not be entirely usable, 3) the rebound energy of the hammers, and 4) the vibration of the hammer heads. All these have been covered in detail elsewhere2 and only the more significant points are made here. The friction turns out to be small except for soft materials (lead)
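The first-law bookkeeping described above reduces to a few lines: stored energy equals the work delivered by the two hammers, less recognized losses, minus the heat evolved in the sample. All numbers in the example are hypothetical stand-ins, not the paper's measurements:

```python
G = 9.806  # gravitational acceleration, m/s^2

def stored_energy(hammer_mass_kg, drop_height_m, losses_j,
                  sample_mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    """
    Stored energy for one compression cycle: mechanical work in (two
    hammers falling from a fixed height, minus recognized losses) minus
    the heat evolved (sample mass x specific heat x temperature rise).
    """
    work = 2 * hammer_mass_kg * G * drop_height_m - losses_j
    heat = sample_mass_kg * specific_heat_j_per_kg_k * delta_t_k
    return work - heat

# Illustrative numbers only: two 5-kg hammers falling 0.4 m, 2 J of
# recognized losses, a 10-g aluminum sample (c_p ~ 900 J/kg K) warming 3.9 K.
print(round(stored_energy(5.0, 0.4, 2.0, 0.010, 900.0, 3.9), 2))
```

Because the stored energy is the small difference of two large numbers, the 0.1 pct accuracy demanded of the specific heats in the text follows directly from this subtraction.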
Jan 1, 1962
-
Coal - The Petrographic Composition of Two Alabama Whole Coals Compared to the Composition of Their Size and Density Fractions
By Reynold Q. Shotts
CHEMICAL methods, based on the relative rates of oxidation of fusain, bright coal, and dull coal by nitric acid, have been devised to determine these coal components.1-4 Results obtained by oxidation methods for fusain have been checked against results obtained from microscopic methods5,6 on duplicate samples of the same coals, but to the author's knowledge this has not been done for bright and dull coal components. For this reason it is not certain that the two methods of analysis identify essentially the same chemical or physical units. It would be highly desirable to see results of the application of both methods to duplicate samples, but in the absence of any such data the author has attacked the problem indirectly. Samples of three Alabama coals, which the USBM had analyzed optically and reported on over the past 30 years, were obtained from the U.S. Bureau of Mines. These samples were subjected to analyses by oxidation rate methods. Results of this work, and comparisons with the USBM analyses, have been published. This was the first indirect approach to the problem. The present report attempts a second indirect approach by way of internal validation. By nitric oxidation, samples of two whole coals were carefully analyzed for fusain, bright coal, and dull coal. One coal was analyzed in duplicate. Duplicate portions of each of the coals were divided into three density fractions by means of heavy liquids. A duplicate portion of one of the coals was divided by sieving into three size fractions. Each fraction was analyzed by oxidation and its percentage composition calculated in terms of fusain, bright coal, and dull coal. Because the weight percent of each fraction was known, a material balance calculation for the whole coal was also made. The resulting reconstituted analysis of the whole coal could be compared to that determined by direct analysis.
In addition, specific reaction rate constants were determined for each component and for each whole coal or fraction. Arbitrary reactivity indexes were calculated by dividing by 100 the sum of the products of the percent of each component in the coal and its specific reaction rate constant. The resulting figure was an average reactivity index for each coal or fraction. Weighting the reactivity indexes for each fraction by the percent of the fraction in the coal gave a reactivity index for each whole coal, which could be compared to that calculated directly from the whole coal analysis. If oxidation analyses really delineate definite physical entities within the coal, or even definite groups of similar entities, reconstituted analyses calculated from fraction analyses should check closely with those made of the whole coal. It probably is true that optical methods identify and describe a greater variety of components than do chemical methods, and that variations are wide in the appearance and quantity of those components identified optically. Chemical methods based upon differences in oxidation reaction rates would of necessity be less discriminating between similar components than would optical methods. The procedures followed for oxidizing the samples, analyzing the residues, plotting and calculating the percentage of each component, and calculating its specific reaction rate constant have been fully described. In brief, the method originally proposed by Fuchs et al.1 for the determination of fusain consists of the oxidation of small samples of coal in boiling 8N nitric acid in a condenser-fitted flask. After boiling for periods of 1/2 to 4 hr, the unoxidized residue is filtered and washed. The washed residue is treated with normal sodium hydroxide, diluted, and allowed to stand for several hours. The resulting brown liquor is removed, and the filtered residue dried, weighed, ignited, and weighed again.
The ash-free residue is expressed as a percent of original dry, ash-free coal. The percent residue is plotted against time, and the extrapolation of this line to zero time gives the percentage of dry ash-free fusain present in the original sample. The shape of the resulting time plots has been explained by the assumption that they are the result of two different types of reaction, the first part representing a first order reaction with rate a function
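The material-balance reconstitution and reactivity-index calculations described above amount to simple weighted sums. The sketch below uses invented fraction data purely for illustration; the component percentages and rate constants are not the paper's:

```python
def reactivity_index(component_pcts, rate_constants):
    """Average reactivity index: sum(pct_i * k_i) / 100."""
    return sum(p * k for p, k in zip(component_pcts, rate_constants)) / 100.0

def reconstitute(fraction_weights_pct, fraction_analyses):
    """
    Material-balance recombination: weight each fraction's component
    analysis (fusain, bright, dull percents) by the fraction's share
    of the whole coal.
    """
    n = len(fraction_analyses[0])
    whole = [0.0] * n
    for w, analysis in zip(fraction_weights_pct, fraction_analyses):
        for i, pct in enumerate(analysis):
            whole[i] += w * pct / 100.0
    return whole

# Hypothetical three-density-fraction data, as (fusain, bright, dull) percents.
fractions = [(5.0, 80.0, 15.0), (10.0, 55.0, 35.0), (30.0, 20.0, 50.0)]
weights = (50.0, 30.0, 20.0)  # wt pct of each fraction in the whole coal
print(reconstitute(weights, fractions))  # compare against direct whole-coal analysis
```

If the oxidation method really delineates definite entities, the printed reconstituted analysis should agree closely with the directly determined whole-coal analysis, which is the internal-validation test the report describes.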
Jan 1, 1956
-
Minerals Beneficiation - Depolarizing Magnetite Pulps
By M. F. Williams, L. G. Hendrickson
IN classification of pulps bearing magnetized ferromagnetic particles, depolarizing is of great importance. If size separation is to be effective, particles must be individual rather than in flocs. Depolarizing is also practiced in heavy medium separations in which ferrosilicon or magnetite is the medium. When particles of ferromagnetic material have been removed from a magnetic field, residual magnetism causes agglomeration. The term depolarizing refers to the operation of reducing or eliminating this residual magnetism and may thus be considered magnetic deflocculation. The terms demagnetizing and randomizing are also used. At the Research Laboratory of Oliver Iron Mining Div. in Duluth a method was developed for measuring depolarization of the pulp of ferromagnetic material. Experiments were made with Mesabi taconite, a natural magnetite of low coercive force. Ferromagnetic materials of higher coercive force, such as lodestone or the artificial magnetite produced by reduction roasting of hematite, present a more difficult problem, which was not within the scope of this investigation. It is possible, however, that some of the techniques evolved for measuring and calculating electrical characteristics of alternating current coils would be of use in depolarizing high coercive force material, particularly in conjunction with high-frequency alternating current, as proposed by Hartig and others.2,3 Properties of Ferromagnetic Materials: Experimental work, described below, has shown that if a sample in the magnetized state is heated above the Curie point and cooled, much of the preferred orientation is destroyed and the sample is substantially depolarized. It has been thought that when the sample cools below the Curie point the domains cancel each other, leaving a zero net moment. However, such particles still exhibit a tendency to cohere, and undoubtedly this is caused by the forces of residual magnetism.
As measured by the percent depolarization, this tendency is reproducible for any sample upon repeated heating above the Curie point and subsequent cooling, and is independent of the initial state of magnetization. It is postulated, therefore, that as the material is cooled below the Curie point the domains in any particle do not completely cancel each other, but rather are preferentially oriented to some extent. Mechanism of Depolarizing with Alternating Current Magnetic Fields: It is believed that when ferromagnetic material is passed through an alternating magnetic field, depolarizing occurs in the decaying portion of the field. As the particles pass through the portion of highest intensity they become magnetized. If the particles are not free to move, the polarities of the particles will be reversed (by a mechanism similar to that described above for magnetism) at a frequency equal to that of the applied field. As the material moves through the decaying field, intensity levels become such that a domain does not completely reverse, but stops on an axis of easy magnetization. By the time the material reaches the point of zero field intensity, a state of fairly random orientation of domains is achieved. If conditions are such as to give a completely random orientation, the particle will have little or no external magnetic field, and a pulp of such particles will be depolarized. Previous Work In 1918 E. W. Davis was granted a patent for demagnetization of magnetite pulps. His method consisted of passing the pulp through a tapered coil, activated by alternating current of normal frequency (60 cycle). This method, with minor modifications, has been used almost universally in all pilot plants and commercial installations in which depolarization of low coercive force materials has been required.
Hartig, Onstad, and Foot2,3 made a detailed study of the factors involved in depolarizing both low (below 100 oersteds) and high (above 100 oersteds) coercive force material. They developed a method for evaluating the relative degree of depolarization of any pulp based on the settling characteristics of the pulp. Their standard of comparison was a sample heated to above the Curie point and cooled in a zero field, all in a neutral atmosphere (this procedure is subsequently called, in this report, the Curie treatment). For low coercive force material they found that results equivalent to Curie treatment could be
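A toy model suggests why a decaying alternating field randomizes the domains, as the mechanism section describes. Each domain is represented by a single coercive force and flips to the field's sign whenever a half-cycle peak exceeds it; the coercivity spread, amplitudes, and flipping rule are illustrative assumptions, not data from the paper:

```python
import random

def demagnetize(coercivities, h0, decay=0.9, cycles=60):
    """
    Toy AC-demagnetization model: a domain flips to the field's sign
    whenever the half-cycle peak exceeds its coercive force.  As the
    alternating amplitude decays, each domain is 'frozen' by a different
    last effective half-cycle, so the signs end up nearly balanced.
    Returns the net (normalized) magnetization.
    """
    moments = [1.0] * len(coercivities)   # start fully magnetized
    h, sign = h0, 1.0
    for _ in range(2 * cycles):           # successive half-cycles
        for i, hc in enumerate(coercivities):
            if h > hc:
                moments[i] = sign
        sign, h = -sign, h * decay        # field reverses and decays
    return sum(moments) / len(moments)

random.seed(1)
# A spread of low coercive forces (oersteds), as for natural magnetite.
hcs = [random.uniform(10.0, 100.0) for _ in range(10_000)]
print(abs(demagnetize(hcs, h0=200.0)))    # near zero after treatment
```

If the peak field never exceeds a particle's coercive force, the model leaves it fully magnetized, which mirrors the harder problem the text notes for high coercive force material.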
Jan 1, 1957
-
Electric Logging - Resistivity Logging in Thin Beds
By Leendert de Witte
Conventional resistivity logs consisting of a short normal, a long normal, and one or more long lateral curves do not give data that allow a complete quantitative interpretation in beds thinner than 20 ft. Reservoir rocks usually exhibit zones of continuous homogeneity of quite limited thickness, where the long lateral curves become useless because of adjacent bed effects and boundary phenomena. If the beds are 12 ft or thicker, the short and long normals may be used for qualitative interpretation, which can be streamlined by the application of simplified departure curves. For beds of a thickness less than twice the long normal spacing, this procedure breaks down. The combination of the limestone curve, the laterolog or guard electrode log, and the microlaterolog permits quantitative interpretation for beds that are at least 10 ft thick, provided the mud resistivity and the hole diameter are known with sufficient accuracy. For beds thinner than 10 ft, combinations of the microlaterolog with short spaced laterologs and pseudo laterologs appear to be promising. Interpretation of these curves again requires the application of simplified departure curves. Resolution of various possible combinations was analyzed using departure curve data calculated on the Whirlwind I computer at the Massachusetts Institute of Technology. A field example is shown using the microlaterolog-microlog combination, and the combination of a 6-in. modified laterolog plus a 6-in. pseudo laterolog. INTRODUCTION For the purpose of quantitative interpretation of resistivity logs in porous formations, we want to obtain two essential quantities from the logs, namely, the true resistivity of the undisturbed formation, Rt, and the resistivity of the part of the formation invaded by mud filtrate, Ri. The apparent resistivities of all conventional logging devices are functions of these two parameters and are also influenced by a third unknown parameter, the diameter of the invaded zone, di.
It has been shown that from the normal curves alone it is impossible to arrive at a unique solution for the three unknowns, Rt, Ri, and di. In very thick homogeneous beds, if invasion is not too deep, we can obtain a fair approximation to Rt from the long lateral curves and then use the two normal curves to find Ri and di. Even under the most favorable conditions, the resolution of this system is not very good. The short normal does not give a reasonable approximation to Ri unless invasion is very deep (di > 16 hole diameters). For very deep invasion, however, the long laterals no longer approximate Rt. For bed thicknesses between 20 and 40 ft, the long laterals are affected appreciably by the adjacent beds, and the curves are distorted by boundary anomalies to the extent that they lose their quantitative usefulness in most cases. For the same bed thicknesses, the normal curves still function reasonably well. Although it is impossible to find unique solutions for Ri and Rt using the normal curves alone, we can obtain a reasonable approximation for the ratio Ri/Rt through the use of simplified departure curves. This fact was brought to our attention by A. J. de Witte, geologist with Continental Oil Co. As the magnitude of Ri/Rt is a major clue to the presence of oil in formations, this method can be used to good advantage for qualitative analysis and will be discussed in somewhat greater detail. With the aid of suitable bed thickness corrections, the analysis of the normal curves may be used for bed thicknesses larger than 12 ft. For thinner beds, the method rapidly loses its resolution, and we have to resort to different types of resistivity logs if we want to attempt to analyze the curves quantitatively. The inadequacy of conventional resistivity curves in thin beds is far more serious than generally realized. Fig. 1 shows a conventional E.S. with a 16-in. and 64-in.
normal and a 16-ft lateral through a section of Lansing-Kansas City lime, in comparison to a guard electrode survey through the same section in a neighboring well. The porous zones, which show up as low resistivity breaks on the guard electrode log, are completely masked by adjacent bed effects and boundary anomalies on the conventional curves. Even the short normal shows most of the porous zones only as vague deflections and in many cases fails to register their
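The simplified departure curves mentioned above are, in practice, charts read off by interpolation. As a hedged sketch of that lookup (the table values below are hypothetical placeholders, not published departure-curve data), the procedure might be coded like this:

```python
# Sketch of reading a simplified departure curve by linear interpolation.
# ASSUMPTION: the (apparent ratio, true ratio) pairs below are invented
# placeholders; real values would be digitized from a published chart
# for a specific tool spacing and bed thickness.

def departure_correction(ra_over_rm, table):
    """Map an apparent-resistivity ratio Ra/Rm to a corrected Rt/Rm.

    table: list of (Ra/Rm, Rt/Rm) pairs sorted by the first element.
    Values outside the table are clamped to its end points.
    """
    xs = [x for x, _ in table]
    ys = [y for _, y in table]
    if ra_over_rm <= xs[0]:
        return ys[0]
    if ra_over_rm >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if ra_over_rm <= xs[i]:
            frac = (ra_over_rm - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Hypothetical curve: in a thin resistive bed the apparent ratio reads
# low, so the corrected ratio lies above the one-to-one line.
curve = [(1.0, 1.0), (5.0, 8.0), (10.0, 20.0), (20.0, 50.0)]
print(departure_correction(7.5, curve))
```

The same structure serves for bed-thickness corrections: one digitized table per bed thickness and tool spacing, interpolated in the same way.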
Jan 1, 1955
-
Solid Surface Energy And Calorimetric Determinations Of Surface-Energy Relationship For Some Common MineralsBy A. Kenneth Schellinger
THE terms surface tension and surface energy are well known when applied to liquids and are generally described by referring to the excess energy of the air:liquid interface as a result of unsaturated molecular forces surrounding the surface molecules of the liquid due to the presence of the air phase on one side. Such unbalanced forces produce the familiar water droplet of spherical form and are generally summed up as a surface tension measured in dynes per centimeter, which can be shown mathematically to be numerically equal to a corresponding surface energy expressed in ergs per square centimeter. A specific surface energy, however, is best thought of as the energy necessary to produce one unit of new surface on a substance. Hence, in producing a bubble in a flotation cell the impeller must supply surface energy corresponding to the air:liquid interfacial area on the interior of the bubble. Inasmuch as it is relatively easy to extend or contract the surface of a liquid, there are a number of successful methods for liquid surface tension, or energy, measurement based upon surface deformation. This happy state of affairs does not, however, extend to solids, which are considered to possess surface energies for the same reasons as do liquids, i.e., because of unsaturated ionic bonds at the solid:gas interface. As in the case of the flotation cell producing surface on liquids as new bubbles, it takes energy to produce new surface on solids as new particles. As every mill man knows, this surface is produced on mineral solids in a grinding mill by the action of a tumbling mass of iron balls. But here so much energy usually is wasted by the inefficient action of these balls that a large amount of heat is generated, and the surface energy production may be easily confused with the energy necessary to produce this ineffective heat. The tumbling balls and fracturing minerals ultimately take their energy from a rather large electric motor.
It has been variously estimated that only 10 to 20 pct of this energy from the motor does not appear as heat and may be presumed to appear as surface energy on the minerals present. Such a production of new surface on the mineral phases is accompanied, of course, by a size reduction that is inevitable as more and more mineral interior molecules become surface molecules by the fracture exposure. This size reduction of mineral particles, although the most obvious feature and perhaps the sole object of the milling operation, is from this energy viewpoint only the outward manifestation of the production of surface energy. Measurement of the characteristic surface energies of pure minerals and their various mixtures in ores would be a step toward understanding of the energetics of the commercial grinding operation. In addition, the characteristic surface energy of a mineral is probably a physical property specific for that mineral and therefore, from a scientific standpoint, should be measured. It is interesting to note that, in contrast to the large body of work on the surface tensions of liquid systems and biological systems, the field of solid surface energies has been neglected. Prior to 1920 it is difficult to find more than one or two references to work on solid surface energies in Chemical Abstracts. Since 1920 such references number somewhat less than 100, while those on liquid systems are numbered in the thousands. Much of this apparent neglect of the field of solid surface energies (the term is intended to be somewhat inclusive at this point and refers both to the solid:gas and the solid:liquid interface) is because of the lack of a reliable method of measurement rather than any lack of scientific curiosity. It was, and still is, difficult to produce new surface on a solid without the simultaneous production of interior changes in the same solid which may consume part of the energy used.
The extension in the surface area can be measured, but the interior crystal-
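The energy bookkeeping described above can be illustrated numerically. A minimal sketch, assuming ideal spherical particles and using an illustrative specific surface energy of 1 J/m^2 and a density of 2650 kg/m^3 (both are placeholder values, not measurements for any particular mineral):

```python
# Sketch of the grinding energy balance described in the text.
# ASSUMPTIONS: particles are uniform spheres; the specific surface
# energy (1.0 J/m^2) and density (2650 kg/m^3, roughly quartz) are
# illustrative placeholders, not measured values.

def specific_surface_area(diameter_m, density):
    """Surface area per kg for uniform spheres: A = 6 / (rho * d)."""
    return 6.0 / (density * diameter_m)

density = 2650.0                  # kg/m^3
gamma = 1.0                       # J/m^2, hypothetical surface energy
d_feed, d_product = 1e-3, 1e-4    # grind from 1 mm down to 100 um

new_area = (specific_surface_area(d_product, density)
            - specific_surface_area(d_feed, density))   # m^2 per kg
surface_energy = gamma * new_area                        # J per kg

# If only ~15 pct of motor input becomes surface energy (the 10 to
# 20 pct estimate quoted in the text), the implied motor input is:
motor_input = surface_energy / 0.15

print(new_area, surface_energy, motor_input)
```

The point of the sketch is only the bookkeeping: new surface area scales with 1/d, so each decade of size reduction multiplies the surface-energy term tenfold.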
Jan 1, 1952
-
Metal Mining - Some Applications of Millisecond Delay Electric Blasting CapsBy D. M. McFarland
A FEW years ago a novel electric detonator known as the split-second or millisecond delay electric blasting cap was introduced for use in quarry blasting. Regular electric blasting caps fired in series may be depended upon to fire within a millisecond or so from the first to the last in a series. Regular delay electric blasting caps are provided that fire one period after the other in intervals of 1/2 to possibly 1-1/2 sec. Most split-second or millisecond delays are designed to fire one period after the other in intervals of possibly 25 to 50 milliseconds. The ear is not capable of detecting time intervals of this magnitude. The primary thought at the time millisecond delays were introduced was to investigate the results on rock breakage by firing a line of holes in a quarry face so that charges in adjacent holes would not be detonated simultaneously. This could not be accomplished satisfactorily with regular delays. The time interval between successive periods of 1/2 to 1 sec was sufficient to permit considerable movement of the burden. If the burden of one hole was reduced to a great extent by the firing of an adjacent hole, the firing of the hole with the reduced burden would likely reveal this lack of confinement by a terrific report and wild throw of rock. In the early blasts with millisecond delays it was observed that instead of the usual sharp report, the blast had a muffled sound and vibration was not as perceptible as when simultaneous firing was used. Because many quarry operators were being threatened with injunctions or suits for damages by neighbors who claimed structural damage to their buildings, millisecond delays were tried extensively in quarries. In the majority of these trials, the results were very satisfactory. The seismologists recorded the ground movement created by many blasts and verified the initial observations that millisecond delays could be used to reduce vibrations appreciably.
In the past few years the advantages of this principle of nonsimultaneous firing of the charges in blasts have become generally accepted. Today the quarry operator who has vibration troubles, inadequate breakage, and excessive backbreak and has not investigated the possibilities of millisecond delay blasting is ignoring a remedy that has proved satisfactory for many. His complacency may be costing him money. Because of the results attained in quarry blasting, it was logical that millisecond delays should be tried in construction work such as in road cuts. As formations in this type of work are likely to change rapidly with advance of the cut, it is more difficult to evaluate results than in quarry blasting. However, this improved control over timing has been beneficial in limiting throw, promoting fragmentation, and reducing overbreak. In blasting near buildings the reduction in vibration and in throw has been especially helpful. As blasters employed in construction work learn what may be accomplished by closer control over the time of firing of explosives charges, more and more millisecond delays are being used to supplant instantaneous electric blasting caps. Improved Fragmentation Underground With this background of promising results, it was not surprising that millisecond delays should go underground. In limestone mining use of millisecond delays as compared with use of cap and fuse or electric blasting caps showed improved fragmentation in stopes and in slabbing operations. Then an opportunity developed to use millisecond delays in some tunnels being driven in a limestone mine (fig. 1). Using the normal charge employed and merely substituting three millisecond delay periods for three regular delay periods, there was a noticeable difference in the appearance and the position of the pile of rock after a blast.
A greater portion of the face was exposed, the crest of the pile was farther from the face, and the pile was heaped high along the center line of the tunnel leaving room to walk along the ribs to the face. Fragmentation was appreciably increased. It gave the impression that the slabs had been thrown against each other with tremendous force, promoting the movement of the broken rock along the center line of the tunnel away from the face. Because the drilling and the charge weights were unchanged, the evidence was convincing that the difference in timing was responsible for the difference in results. Probably a greater portion of the energy from the explosives had been expended in doing useful work on the rock. Zeros followed by two periods of millisecond delays were used in the V cut and in two slabs to either side of the cut in this simple round. When millisecond delays, substituted period for period for regular delays, are first tried in a drift round in a mine, and the usual charge of explosives
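The timing difference the author describes is easy to quantify. A small sketch comparing an eight-period regular-delay schedule (1/2-sec intervals) with a millisecond-delay schedule (25-ms intervals, the low end of the 25 to 50 ms range given above):

```python
# Nominal firing-time schedules for successive delay periods.
# Interval values come from the text: roughly 0.5 s for regular
# delays, 25 to 50 ms for millisecond delays (25 ms used here).

def firing_times(num_periods, interval_s):
    """Nominal firing time of each period, with period 0 at t = 0."""
    return [round(p * interval_s, 3) for p in range(num_periods)]

regular = firing_times(8, 0.5)      # first-to-last span: 3.5 s
millisec = firing_times(8, 0.025)   # first-to-last span: 0.175 s

print(regular[-1], millisec[-1])
```

With only 25 ms between adjacent holes, the burden of one hole has a small fraction of the time to move before its neighbor fires, which is the confinement argument made in the passage above.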
Jan 1, 1951
-
Uranium Ore Body Analysis Using The DFN TechniqueBy James K. Hallenburg
INTRODUCTION The delayed fission neutron, or DFN, technique for uranium ore body analysis is the first down-hole method for detecting uranium in place quantitatively. This technique detects the presence of and measures the amount of uranium in the formation. DFN TECHNIQUE DESCRIPTION The DFN technique depends upon inducing a fission reaction in the formation uranium with neutrons, resulting in an anomalous and quantitative return of neutrons from the uranium. Since there are no free, natural neutrons in the formation, a good, low noise assessment may be made. There are several methods available for determining uranium quantity in situ. The method used by Century uses an electrical source of neutrons: a linear accelerator which bombards a tritium target with high velocity deuterium ions. The resulting reaction emits high energy neutrons which diffuse into the surrounding formation. They lose most of their energy until they come to thermal equilibrium with the formation. Upon encountering a fissile material, such as uranium, these thermal neutrons will react with the material. These reactions produce additional neutrons, the number of which is a function of the number of original neutrons and the amount of fissile material exposed. The particular source used, the linear accelerator, has several distinct advantages over other types of sources: 1. It can be turned off; thus, it does not constitute a radioactive hazard when it is not in use. 2. It can be gated on in short bursts (6 to 8 microseconds), which results in measurements free of a high background of primary neutrons. 3. The output can be controlled, so the neutron output can be made the same in a number of tools, easily and automatically. There are several interesting reactions which take place during the lifetime of the neutrons around the source. During the slowing down or moderating process the neutron can react with several elements. One of these is oxygen 17.
This results in a background level of neutrons in any of the measurements which must be accounted for in any interpretation technique. These elements are usually uninteresting economically. The high energy neutrons will also react with uranium 238. However, the proportions of uranium 235 and 238 are nearly constant. Therefore, this reaction aids detection of uranium mineral and need not be separated out. Upon reaching thermal energy the neutrons will react with any fissile material: uranium 235, uranium 234, and thorium 232. At present, we do not have good techniques for separating out the reaction products of uranium 234 and thorium 232. However, uranium 234 is a small (0.0055%) percentage of the uranium mineral, and thorium 232 is usually not present in sedimentary deposits. When the uranium 235 reacts with thermal neutrons it breaks into two or more fragments and some neutrons. This occurs within a few microseconds after the primary neutrons have moderated and is the prompt reaction. One system, the PFN or prompt fission neutron technique, uses this reaction. We do not use this method because the neutron population is low and, therefore, the signal is small and difficult to work with accurately. Within a few microseconds to several seconds the fission fragments also decay with the emission of additional neutrons. Now, with a long time period available and a large neutron population, we gate off the generator and measure the delayed fission neutrons after a waiting period. These neutrons can be a measure of the amount of uranium present around the probe. Thermal neutrons are detected with the DFN technique instead of capture gamma rays to avoid some of the returns from elements other than uranium. LOGGING TECHNIQUE The exact logging technique will depend, to some extent, upon the purpose of the measurement. However, the general technique is to first run the standard logs. These will include: 1.
The gamma ray log for initial evaluation of the mineral body and for determining the position of the borehole within the mineral body, 2. The resistance or resistivity log for determining the formation quality, lithology, and porosity, 3. The S. P. curve for estimating the redox state and shale content, and measuring formation water salinity, 4. The hole deviation for locating the position, depth, and thickness of the mineral (and other formations), and 5. The neutron porosity curve. The neutron porosity curve is most important to the interpretation of the DFN readings. The neutrons from this tool are affected in the same way by bore hole and formation fluids as the DFN neutrons are. Therefore, we can use this curve to determine the effect of the oxygen 17 in the water. Of course, this curve can be used to determine formation porosity. It can also be used to calculate formation density.
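The counting logic behind a DFN measurement reduces to simple gating arithmetic. The following is a hedged illustration, not Century's actual processing: the calibration constants are invented placeholders, and the oxygen-17 background is modeled as proportional to the neutron porosity reading, as the text suggests:

```python
# Sketch of DFN count processing: gate the generator off, wait,
# count delayed neutrons, subtract background, scale to grade.
# ASSUMPTIONS: k_cal and k_o17 are hypothetical calibration
# constants; a real tool would be calibrated against test pits
# of known ore grade.

def uranium_grade(dfn_counts, neutron_porosity, k_cal=0.001, k_o17=2.0):
    """Estimate grade (fraction U3O8) from gated delayed-neutron counts.

    dfn_counts: delayed neutrons counted after the waiting period.
    neutron_porosity: reading used to estimate the oxygen-17
    background, since both neutron populations are affected the same
    way by borehole and formation fluids.
    """
    background = k_o17 * neutron_porosity   # counts attributed to O-17
    net = max(dfn_counts - background, 0.0)
    return k_cal * net

# Example: 250 gated counts against a porosity reading of 25 units.
print(uranium_grade(250.0, 25.0))
```

The structure shows why the neutron porosity log is run alongside the DFN log: without an independent handle on the oxygen-17 term, the background cannot be subtracted.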
Jan 1, 1979
-
"What Happened To The Uranium Boom?"By M. J. Reaves
The title of my talk, "What Happened to the Uranium Boom?" is old news. Certainly it is for this group. All of us that make our living in uranium know that the boom of the last half of the 1970's is over. U.S. production has been exceeding consumption by more than two to one. Mines and mills are closing and yellowcake prices have been dropping for over 20 months. The gloomy outlook for the industry in the near term has been well documented by soothsayers of various descriptions, your daily newspapers, and in the Nuexco Monthly Reports. I'd like to attempt to describe the next upturn in the market (speculate, really) based upon the clues we're seeing now. In order to do that, I'd first like to go over briefly some of the market factors that contributed to the recent price drop and resultant production cutbacks, and then hypothesize on the way these factors are changing and will change. Market prices are greatly affected (maybe even entirely determined) by buyer perceptions. This is particularly true with uranium, because of the long lead times associated with nuclear plant construction and also with conventional mine/mill development. Before the price rise (say, 1975) utility uranium buyers believed that: 1) U.S. producers would have difficulty expanding to meet U.S. demand. 2) Australian and Canadian production was essential to avoid shortages in the early 1980's. 3) Uranium prices would continue to rise as demand exceeded supply. 4) Enrichment capacity would become inadequate. It was thought necessary, therefore, to build enriched inventory in the early 1980's for use in the late 1980's. Artificially accelerated expansion of the uranium producer industry was necessary to accommodate anticipated enrichment demand. Current perceptions are largely the opposite. These are the beliefs that were held most of this year and late last year as prices dropped. 1) U.S. production is far in excess of domestic need. Contraction of the U.S. production industry is necessary.
2) Canadian and Australian supply is optional and not essential. Producers in those countries are expanding mainly by displacing higher cost production and not because they fill a void. 3) Prices may be essentially stable for some time. 4) Enriched uranium is in excess supply. That is 1981. 1982 is shaping up to look like this: 1) Prices will have bottomed out. (That is not Nuexco's opinion necessarily, by the way, but it is my opinion.) 2) There will still be substantial utility inventories, but fewer spot sales. 3) Canadian and perhaps Australian sellers will have made substantial sales in the U.S. and will be aggressively seeking more. 4) U.S. production will have been dramatically curtailed. U.S. utilities that wish to contract long term will have difficulty in finding domestic sellers. Concern will develop about the availability of U.S. production capability. Virtually all long term contracts signed will be with non-U.S. sellers. 5) An awareness will begin to develop among U.S. buyers that we are approaching a period of dependence upon foreign uranium (which will be true). The history of the uranium market has been one of dramatic changes and overreaction to those changes. The rapid price rise of a few years ago generated excess U.S. production capacity and the rapid price drop of the last two years will almost certainly result in too little capacity. It will soon be difficult for U.S. buyers to buy domestic material except on the spot market. The question is, "will they care?" The lack of demand, of course, is the underlying reason for the current poor health of the uranium industry. In 1972, 1973 and 1974 collectively, there were 105 nuclear reactors ordered in the U.S. That ordering rate was expected to continue and accelerate throughout this century. In 1975, 1976, 1977, 1978, 1979, and 1980 altogether, there were 56 more reactors cancelled than ordered. The net growth of our only customer since 1974 has been a negative 56.
To put this in perspective, if these 56 reactors were operating now it would more than double present U.S. uranium consumption. Underlying lack of demand is something that is simply not going to change in this decade. Time is going to be required. The NRC indicates that the maximum feasible number of new reactors that can be licensed each year is six. That would increase uranium consumption by only 10% per year. New reactors, if ordered tomorrow, would not generate new uranium demand until after 1990. Even so, United States' consumption of uranium will rise from the 1980 level of 18 million pounds per year, to
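The demand arithmetic in this talk can be checked directly from the figures quoted (18 million lb/yr consumption in 1980; six licensable reactors per year adding about 10% per year):

```python
# Rough demand arithmetic using only figures quoted in the talk.
# 18 million lb/yr U.S. consumption (1980); six new reactors per
# year said to raise consumption by about 10 pct per year.

consumption_1980 = 18.0e6        # lb U3O8 per year
reactors_per_year = 6
annual_increase = 0.10 * consumption_1980

# Implied incremental demand per reactor:
per_reactor = annual_increase / reactors_per_year   # lb/yr

# The 56 net-cancelled reactors at that rate:
lost_demand = 56 * per_reactor

print(per_reactor, lost_demand)
```

At roughly 0.3 million lb per reactor-year, the 56 cancellations correspond to some 16.8 million lb/yr of forgone demand, which is in rough agreement with the author's remark that those reactors would about double present consumption (the quoted figures are round numbers).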
Jan 1, 1982
-
Geophysics - The Gravity Meter in Underground ProspectingBy W. Allen
FOR the past six years gravity surveys have been used for underground prospecting in the copper mines at Bisbee, Ariz. The primary purpose of the surveys has been to reduce the diamond drilling and crosscutting necessary for exploration. Since many of the orebodies are small, and geologic control is not always apparent, any information that will direct the drilling and crosscutting is highly desirable. Because of extensive development and exploration work in the copper mines at Bisbee, it has been possible to cover more than 630,000 ft of crosscuts on 30 levels with the gravity surveys. In the process the gravity procedures have been refined to a high degree. Density Contrast: For a gravity survey to be successful, a sufficient density contrast must exist between the geologic feature sought and surrounding host rocks. Most mineralized areas will provide this contrast if fairly massive bodies are present. In the Bisbee area the entire sequence of formations, except for alluvium, appears to have specific gravities ranging from 2.65 to 2.70. These values have been determined by means of a large number of cut samples and diamond drill cores. As a further check, vertical gravity differences have been used where nonmineralized sections are known to occur. The only known major gravity disturbances result from mineralization that has increased the density and the voids that have decreased density. The voids are caused by mining operations and by underground water movement that has developed several areas of caverns. Equipment: While not absolutely essential, a small rugged gravity meter, such as the Worden meter, is highly desirable. A tall tripod, about the height of a transit tripod, permits instrument set-ups in deep water and in locations where fallen timber and muck piles make it impossible to use a short tripod. An additional advantage of a tall tripod is that it places the meter in the center of the crosscut, reducing the error caused by the crosscut void.
Size and weight are important, since the only satisfactory means of operating the meter underground is to carry it by hand. A backpack can be used in rare instances but is usually a hindrance because of the close station spacing. The operator's ability to move through tight clearances will improve survey coverage, as it is then possible to move through raises and caved areas and to pass mine cars and machinery with a minimum of trouble. Station Control: Gravity stations are normally located every 100 ft along the crosscuts, at each intersection, and in the face of all stub crosscuts. In areas of high gravity relief, or where small anomalies might be expected, stations may be located at 25 or 50-ft intervals. When possible, the stations should be offset to avoid effects of raises or other voids. The gravity stations on a level are tied to one or more base stations, which are usually located at the shaft or near the portal of an adit. The base stations may be part of a gravity control net that extends to each level in the mine as well as to the surface. Such a net extending throughout the potential area of the surveys is highly desirable, as it is then possible to compare all gravity stations on a uniform basis. The stations that are part of the base net should be carefully established by multiple readings and, if necessary, by a least squares adjustment of the loops. In some instances where levels do not have a shaft station, or where access may be blocked by caving, it may be necessary to establish secondary bases at the top and bottom of the raises that are between levels. Under fair conditions 70 to 90 gravity stations can be located and run in 6 hr by a two-man crew. The best field procedures depend on conditions. Reduction of Field Data: Most of the time required to produce a final gravity map is consumed in processing the data. 
Each meter reading must be corrected for a minimum of five factors that affect the gravity value in addition to the density contrast being sought. These factors are 1) instrumental drift, 2) station elevation, 3) topography, 4) latitude, and 5) regional gravity gradient. Mine openings, such as stopes and raises, will affect the value. However, it is seldom practical to make corrections for these voids. Usually a notation is made in the field notes for the station, and any
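The data-reduction step enumerated above lends itself to a simple per-station pipeline. A hedged sketch covering two of the five corrections explicitly (linear instrument drift between base-station ties, and a free-air elevation term at the standard gradient of 0.3086 mGal/m); latitude, terrain, and regional-gradient terms would be further additive corrections of the same form:

```python
# Sketch of gravity-station reduction: each reading receives additive
# corrections before the residual anomaly is mapped.
# ASSUMPTIONS: drift is linear between base-station ties; only the
# drift and free-air terms are shown; signs follow the convention of
# correcting readings back to the datum.

FREE_AIR_GRADIENT = 0.3086  # mGal per metre (standard value)

def reduce_station(reading_mgal, hours_since_base,
                   drift_rate_mgal_per_hr, elev_m, datum_elev_m):
    """Return a drift- and elevation-corrected gravity value in mGal."""
    drift_corr = -drift_rate_mgal_per_hr * hours_since_base
    free_air_corr = FREE_AIR_GRADIENT * (elev_m - datum_elev_m)
    return reading_mgal + drift_corr + free_air_corr

# Station read 3 h after the base tie, 12 m above datum,
# with 0.02 mGal/hr of observed drift:
print(reduce_station(978.450, 3.0, 0.02, 12.0, 0.0))
```

Underground, the rock mass above the station complicates the elevation term beyond the simple free-air gradient, which is part of what makes underground reduction laborious, as the text notes.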
Jan 1, 1957
-
Iron and Steel Division - Equilibrium Between Blast-Furnace Metal and Slag as Determined by RemeltingBy E. W. Filer, L. S. Darken
ONE of the primary purposes of this investigation was to determine how far blast-furnace metal and slag depart from equilibrium, particularly with respect to sulphur distribution. In studying the equilibrium between blast-furnace metal and slag, there are two approaches that can be used. One method is to use synthetic slags, as was done by Hatch and Chipman; the other is to equilibrate the metal and slag from the blast furnace by remelting in the laboratory. In the set of experiments here reported, metal and slag tapped simultaneously from the same blast furnace were used for all the runs. The experiments were divided into two groups: 1—a time series at each of three different temperatures to determine the time required for metal and slag to equilibrate in various respects under the experimental conditions of remelting, and 2—an addition series to determine the effect of additions to the slag on the equilibrium between the metal and slag. An atmosphere of carbon monoxide was used to simulate blast-furnace conditions. The furnace used for this investigation was a vertically mounted tubular Globar type with two concentric porcelain tubes inside the heating element. The control couple was located between the two porcelain tubes. The carbon monoxide atmosphere was introduced through a mercury seal at the bottom of the inner tube. On top, a glass head (with ground joint) provided access for samples, and a long outlet tube prevented air from sucking back into the furnace. The charge used was iron 6 g, slag 5 g for the time series, or iron 9 g, slag 7-1/2 g for the addition series. This slag-to-metal ratio of 0.83 approximates the average for blast-furnace practice, which commonly ranges from about 0.6 to 1.1. A crucible of AUC graphite containing the above charge was suspended by a molybdenum wire in the head and, after flushing, was lowered to the center of the furnace as shown in Fig. 1. The cylindrical crucible was 2 in. long x % in. OD.
The furnace was held within ±3°C of the desired temperature for all the runs. The temperature was checked after the end of each run by flushing the inner tube with air and placing a platinum-platinum-10 pct rhodium thermocouple in the position previously occupied by the crucible; the temperature of the majority of the runs was much closer than the deviation specified above. The couple was checked against a standard couple which had been calibrated at the gold and palladium points, and against a Bureau of Standards couple. The carbon monoxide atmosphere was prepared by passing CO2 over granular graphite at about 1200°C. It was purified by bubbling through a 30 pct aqueous solution of potassium hydroxide and passing through ascarite and phosphorus pentoxide. The train and connections were all glass except for a few butt joints where rubber tubing was used for flexibility. The rate of gas flow was 25 to 40 cc per min. As atmospheric pressure prevailed in the furnace, the pressure of carbon monoxide was only slightly higher than the partial pressure thereof in the bosh and hearth zones of a blast furnace—by virtue of the elevated total pressure therein. Simultaneous samples of blast-furnace metal and slag were taken for these remelting experiments. The composition of each is given in the first line of Table I. There is considerable uncertainty as to the significant temperature in a blast furnace at which to compare experimental results. This uncertainty arises not only from lack of temperature measurements in the furnace, but also from lack of knowledge of the zone where the slag-metal reactions occur. (Do they occur principally at the slag-metal interface in the crucible, or as the metal is descending through the slag, or even higher as slag and metal are splashing over the coke?) The known temperatures are those of the metal at cast, which averages about 2600°F, and of the cast or flush slag, which is usually about 100°F hotter.
To bridge this uncertainty, remelting temperatures were chosen as 1400°, 1500° (2732°F), and 1600°C. For the time series the duration of remelt was 1, 2, 4, 8, 17, or 66 hr; crucible and contents were quenched in brine. The addition series were quenched by rapidly transferring the crucible and contents from the furnace to a close-fitting copper "mold." Of incidental interest here is the fact that the slag wet the crucible
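The tap temperatures quoted above in °F can be compared directly with the laboratory remelting temperatures by unit conversion; a minimal sketch (the numerical values are the ones quoted in the text, not additional measurements):

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# Tap temperatures quoted in the text.
metal_at_cast = f_to_c(2600)        # about 1427 C
slag_at_cast = f_to_c(2600 + 100)   # flush slag runs ~100 F hotter, about 1482 C

# Both fall inside the 1400-1600 C range spanned by the remelting runs,
# so the laboratory temperatures bracket blast-furnace conditions.
assert 1400 < metal_at_cast < 1600
assert 1400 < slag_at_cast < 1600
```

The assertions simply confirm that the chosen remelting range brackets the quoted tap temperatures.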
Jan 1, 1953
-
Iron and Steel Division - Evaluation of pH Measurements with Regard to the Basicity of Metallurgical Slag - By C. W. Sherman, N. J. Grant
The correlation of the high temperature chemical properties of slag-metal systems with some easily measured property of either slag or metal at room temperature has been the goal of both process metallurgists and melting operators for many years. There are several rapid methods for estimating various constituents in steel in addition to the conventional chemical methods which are quite fast, but these do not reveal the nature of the slag as a refining agent, which is of primary interest to the steelmaker. Furthermore, there are several methods for examining slag, the three principal ones being slag pancake, petrographic examination, and the previously mentioned chemical analysis. The main objection to the last two is the time required to make a satisfactory estimate of the mineralogical or chemical components. The objection to the first is the inadequacy of the information obtained. A new technique has been developed by Philbrook, Jolly and Henry1 whereby the properties of slags are evaluated from an aqueous solution leached from a finely divided sample of slag. It is known that the pH, or hydrogen ion concentration, of saturated solutions that have dissolved certain basic oxides, notably calcium oxide, will indicate a pronounced basicity. Philbrook, Jolly and Henry devised the pH measurement technique in order to supply open hearth operators with a fast, reasonably accurate method of estimating slag basicity. They offered the method as an empirical observation and made no claims as to its theoretical justification. The results were presented as an experimentally observed relationship which applied over an important range of basic open hearth slags. They found that, in plotting the measured pH against the basicity, the best relationship existed between the pH and the log of the simple V ratio, CaO/SiO2.
Extensive investigation also showed that there were several variables in the experimental technique that influenced the results and necessitated following a standard procedure to obtain reproducible pH readings. These variables were: 1. Particle size of the slag powder used. 2. Weight of sample used per given volume of water. 3. Time of shaking and standing allowed before the pH was measured. 4. Exclusion of free access of atmospheric carbon dioxide to the suspension. 5. Temperature of the extract at the time the pH was measured. In subsequent investigations of the pH method by Tenenbaum and Brown2 and by Smith, Monaghan and Hay3 the general conclusions of Philbrook's work were reaffirmed. It was the object of the present investigation to extend the technique to a point where it could be used to evaluate slags of all types. Experimental Results PARTICLE SIZE OF SLAG POWDER A large sample of commercial blast furnace slag of intermediate basicity (V-ratio 1.15) was selected for the study. The slag had been put through a jaw crusher until all of it passed through a 20 mesh screen. Five fractions of this crushed material were separated: -20 to +40, -40 to +60, -60 to +100, -100 to +200, and -200 mesh. A representative sample of 0.5 g was removed from each fraction and the pH determined using the method of Philbrook. Check pH analyses on the sample fractions varied due to the different amounts of shaking. To eliminate this variable, a mechanical shaker was employed. In order to know the exact time of contact between the slag and water, it was found necessary to filter the extract at the end of the shaking period. Using the mechanical shaker and a filtering apparatus, similar runs were made on the five fractions for contact times of 5, 10, 20, and 40 min. Random checks gave reproducible results within 0.02 pH. The data are plotted in Fig. 1.
It can be seen from the plot that each slag fraction is hydrolyzed to an extent that is roughly proportional to the surface area exposed to the water. The (-100 to +200) mesh material changed very little in pH after 10 min. shaking time. The curves are symmetrical and lie in proper relation to one another. The -200 mesh curve appears to be somewhat flatter than the others, but this can be attributed to the portion of very fine material that is not present in the other fractions. The closeness of the (-100 to +200) mesh curve to the -200 mesh curve and the fact that a -100 mesh sample would contain amounts of slag down to 1 or 2 microns in diam were considered sufficient reasons for selecting a -100 mesh sample as representative of the whole sample of slag for the purposes of this investigation.
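Philbrook's correlation described above relates the measured pH linearly to the logarithm of the simple V ratio, CaO/SiO2. A minimal sketch of that form, assuming a hypothetical slag analysis and placeholder fit coefficients (neither is taken from the paper):

```python
import math

def v_ratio(pct_cao, pct_sio2):
    """Simple V ratio, CaO/SiO2, from a slag analysis in wt pct."""
    return pct_cao / pct_sio2

# Hypothetical slag analysis (wt pct), chosen so that V matches the
# intermediate-basicity slag studied above (V = 1.15).
slag = {"CaO": 41.4, "SiO2": 36.0}
v = v_ratio(slag["CaO"], slag["SiO2"])

# Philbrook, Jolly and Henry reported a linear relation of the form
# pH = a + b * log10(V).  The coefficients a and b below are NOT from
# the paper; they are placeholders showing the shape of the correlation.
a, b = 11.0, 2.0
estimated_ph = a + b * math.log10(v)
```

Any real use of this form would require fitting a and b to measured pH data taken under the standardized procedure (fixed mesh size, sample weight, and contact time) that the paper shows is necessary.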
Jan 1, 1950
-
Technical Notes - Matrix Phase in Lower Bainite and Tempered Martensite - By F. E. Werner, B. L. Averbach, Morris Cohen
THAT bainite formed near the Ms temperature bears a striking resemblance to martensite tempered at the same temperature has been shown by the electron microscope. By means of electron diffraction, it has been established that ε carbide and cementite are present in bainite formed at 500°F (260°C); these carbides are also found in martensite tempered at 500°F (260°C). The investigation reported here is concerned with an X-ray study of the matrix phases in lower bainite and tempered martensite. These phases have turned out to be dissimilar in structure; the matrix of bainite is body-centered-cubic while that of tempered martensite is body-centered-tetragonal. A vacuum-melted Fe-C alloy containing 1.43 pct C was studied. Specimens of 1/16 in. diam were sealed in evacuated silica tubing and austenitized at 2300°F (1260°C) for 24 hr. One specimen was quenched into a salt bath at 410°±7°F (210°±4°C), held for 16 hr, and cooled to room temperature. The structure consisted of about 90 to 95 pct bainite, the remainder being martensite and retained austenite. A second specimen was quenched from the austenitizing temperature into iced brine and then into liquid nitrogen. It consisted of about 90 pct martensite and 10 pct retained austenite. The latter specimen was tempered for 10 hr at 410°±2°F (210°±1°C). The specimens were then fractured along prior austenite grain boundaries (grain size about 2 mm diam) by light tapping with a hammer. Single austenite grains, mostly transformed, were etched to about 0.5 mm diam and mounted in a Unicam single crystal goniometer, which allowed both rotation and oscillation of the sample. Lattice parameters were measured by the technique of Kurdjumov and Lyssak.
This method takes advantage of the fact that martensite and lower bainite are related to austenite by the Kurdjumov-Sachs orientation relationships. Thus, the (002) and the (200)(020) reflections can be recorded separately, permitting the c and a parameters to be determined without interference from overlapping reflections. According to these findings, the matrix phase in bainite is body-centered-cubic and, within experimental error, has the same lattice parameter as ferrite (2.866Å). On the other hand, martensite, tempered as above, retains some tetragonality, with a c/a ratio of 1.005±0.002. Most workers in the past have assumed that bainite is generated from austenite as a supersaturated phase, but the nature of this product has not been established. The question arises as to whether bainite initially has a tetragonal structure and then tempers to cubic, or whether it forms directly as a cubic structure. If it forms with a tetragonal lattice, it might well be expected to temper to the cubic phase at about the same rate as tetragonal martensite. The martensitic specimen used here was given approximately the same tempering exposure, 10 hr at 410°F, as suffered by the greater part of the bainite during the isothermal transformation. About 50 pct bainite was formed in 6 hr at 410°F. On tempering at this temperature, martensite reduces its tetragonality within a few minutes to a value corresponding to 0.30 pct C. Further decomposition proceeds slowly, and after 10 hr the c/a ratio is still appreciable, i.e., 1.005. Thus, even if the bainite were to form as a tetragonal phase with a tetragonality corresponding to only 0.30 pct C, which might be assumed to coexist with ε carbide, it would not be expected to become cubic in this time. It seems very likely, therefore, that bainite forms from austenite as a body-centered-cubic phase and does not pass through a tetragonal transition.
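The tetragonality discussed above is simply the axial ratio c/a of the measured lattice parameters. As a rough cross-check, the commonly quoted empirical relation c/a ≈ 1 + 0.046 × (wt pct C) links residual tetragonality to carbon remaining in solution; the coefficient 0.046 and the sample c parameter below are illustrative assumptions, not values from this note (only a = 2.866 Å and c/a = 1.005 are quoted above):

```python
def axial_ratio(c, a):
    """Tetragonality c/a from measured lattice parameters (angstroms)."""
    return c / a

def carbon_from_tetragonality(c_over_a, k=0.046):
    """Invert the empirical relation c/a = 1 + k * (wt pct C).

    k = 0.046 per wt pct C is the commonly quoted coefficient for
    martensite; it is an assumption here, not a value from the note.
    """
    return (c_over_a - 1.0) / k

a = 2.866          # ferrite lattice parameter quoted in the note, angstroms
c = a * 1.005      # c chosen to reproduce the reported c/a of 1.005
ratio = axial_ratio(c, a)
carbon = carbon_from_tetragonality(ratio)   # roughly 0.11 wt pct C
```

With this coefficient, a residual c/a of 1.005 corresponds to about 0.1 wt pct carbon in solution, which is consistent with the note's remark that the cubic bainite matrix could hold carbon of that order within experimental uncertainty.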
The carbon content of the cubic phase has not been determined, but it could easily be as high as 0.1 pct, within the experimental uncertainty of the lattice-parameter measurements. It has been postulated that retained austenite decomposes on tempering into the same product as martensite tempered at the same temperature. There is now considerable doubt on this point. The isothermal transformation product of both primary and retained austenite at the temperature in question here is bainite, and the present findings show that bainite and tempered martensite do not have the same matrix. Acknowledgments The authors would like to acknowledge the financial support of the Instrumentation Laboratory, Massachusetts Institute of Technology, and the United States Air Force.
Jan 1, 1957
-
Discussion - Impacts Of Land Use Planning On Mineral Resources - Technical Papers, Mining Engineering, Vol. 36, No. 4, April, 1984, pp. 362-369 – Ramani, R. V., Sweigard, R. J. - By G. F. Leaming
The paper by R.V. Ramani and R.J. Sweigard is a wonderful description of the labyrinthine web that has been spun about the mining industry by energetic bureaucrats and politicians over the past 50 years. The remedy for the problem, however, is not more of the same, but less. That may be difficult for the industry to achieve, for it is not a technical solution but a political one. And the current fervor for more detailed planning at all levels of government and private enterprise has become deeply ingrained. The authors recommend the provision of more information about mining and mineral resources to "macro" (i.e., government) land use planners. They apparently overlook, however, the already strong tendency on the part of most government land use planners to consider themselves omniscient. Thus, giving them more information about the technical problems of mining will only make them want to get more and more involved in the "micro" (private, site specific) mine development and production plans of the individual mining firm. In fact, this has already happened at all levels of jurisdiction from municipal to federal government. Examples are legion. 
The most effective way to ameliorate the adverse impacts of government land use planning on existing and potential mining operations is to: (1) introduce greater flexibility in the definition of land use zones by local and state governments; (2) adopt realistic and relevant ambient environmental performance standards in governing relationships between mineral land uses and concurrent or subsequent nonmining land uses; (3) allow greater leeway for economic considerations in land use decisions, in contrast to the explicit legalistic approach now in vogue; (4) recognize that all minerals are not the same and that sand and gravel mining should not be treated the same as underground metal mining, coal stripping, oil field production, or in situ leaching; and (5) eliminate the notion that mining operators should be responsible for determining in detail the use of land by subsequent owners of mined land. This last bit of conventional ethic really makes no more sense than requiring the builders of every shopping center or government office complex to provide detailed plans for the use of that land when its use for shopping or government is ended. Did the builder of Ebbets Field plan for Brooklyn after the Dodgers went to Los Angeles? Should the developer of the Bingham Pit plan for suburban Salt Lake City after the copper mining goes to Chile? The nation's mining industry must address these questions before further bankrupting itself to provide more data to planners and spending thousands of dollars per acre to create land that, when reclaimed, is worth only a few hundred dollars per acre. Reply by R.V. Ramani and R.J. Sweigard We thank Mr. Leaming for his valuable contribution. His views on the problems of land use planning and mineral resources are most welcome additions to our paper. As the title indicates, our paper was more concerned with the impacts of land use planning on mineral resource conservation than with the details of the planning process.
On the whole, his five recommendations would be helpful for mineral resource conservation. However, we would suggest that the argument he presents for his final recommendation does not address the differences between mining as a land use and commercial or institutional uses. We believe that this difference is the crux of the issue. We share Mr. Leaming's desire to ameliorate the adverse impacts of land use planning. Possibly the most detrimental impact is the loss of mineral resources. Any development, whether mineral or community, that does not give proper consideration to other resources can result in permanent loss or sterilization of resources. With proper planning, some of these losses can be avoided. As our paper indicated, one factor that limits the consideration of mineral resources, and ultimately leads to their sterilization, is the generally inadequate level of resource characterization and understanding of the unique nature of mineral resources and mining operations. The last point raised by Mr. Leaming is also important. In terms of reclamation and land use planning in mining districts, we certainly do not advocate spending more than what the results are worth. The main thrust of the paper was to explore the avenues for conserving mineral resources so that, at some appropriate time, the issue of mining and reclamation can still be addressed.
Jan 1, 1986
-
Part VI – June 1968 - Papers - Internal Oxidation of Iron-Manganese Alloys - By J. H. Swisher
When an Fe-Mn alloy is internally oxidized, the inclusions formed are MnO which contains some dissolved FeO. In the internal oxidation reaction, not all of the manganese is oxidized; some remains in solid solution as a result of the high Mn-O solubility product in iron. Taking these factors into consideration, the rate of internal oxidation of an Fe-1.0 pct Mn alloy is computed as a function of temperature, using available thermodynamic data and recently published data for the solubility and diffusivity of oxygen in iron. The predicted and experimentally determined rates for the temperature range from 950° to 1350°C are in good agreement. The rates of internal oxidation of austenitic Fe-Al and Fe-Si alloys have been studied extensively.1-4 Schenck et al. report the results of a few experiments with Fe-Mn alloys at 854° and 956°C, and Bradford5 has studied the rate of internal oxidation of commercial alloys containing manganese in the temperature range from 677° to 899°C. When Fe-Mn alloys are internally oxidized, the inclusions formed are solutions of FeO in MnO, the composition depending on the experimental conditions. Since the thermodynamics of the Fe-Mn and FeO-MnO systems have been investigated,6-9 and since the solubility and diffusion coefficient of oxygen in γ iron have been determined recently, it is possible to predict the rate of internal oxidation from known data. The calculations used in predicting the rate of internal oxidation will first be outlined; then the results of the prediction will be compared with the experimental results of this investigation. PREDICTION OF PERMEABILITY FROM THERMODYNAMIC AND DIFFUSIVITY DATA Oxygen is provided for internal oxidation in these experiments by the dissociation of water vapor on the surface of the alloy. The dissociation reaction is: H2O(g) = H2(g) + [O] [1] where [O] denotes oxygen in solution.
The equilibrium constant for this reaction is known as a function of temperature. As oxygen diffuses into the alloy, oxide inclusions are formed which are MnO with some FeO in solid solution. The reactions occurring are: [Mn] + [O] = (MnO) [3] and [Fe] + [O] = (FeO) [4] where [Mn] is manganese dissolved in iron and (FeO) is iron oxide dissolved in MnO. The overall reactions may be written as follows: [Mn] + H2O(g) = (MnO) + H2(g) [5] and [Fe] + H2O(g) = (FeO) + H2(g) [6] The standard free-energy changes and equilibrium constants for Reactions [5] and [6] are known.6 Therefore the equilibrium constants for Reactions [3] and [4] may be obtained by combining known thermodynamic data for Reactions [1], [5], and [6]. For Reactions [3] and [4]: K3 = (aMnO)/[aMn][aO] [7] and K4 = (aFeO)/[aFe][aO] [8] For the present purpose, both the Fe-Mn7,8 and FeO-MnO9 systems can be considered to be ideal, i.e., [aMn] = [NMn] and (aFeO) = (NFeO), (aMnO) = (NMnO) = 1 - (NFeO), where the Ns are mole fractions. These relations, together with Eqs. [7] and [8], permit us to compute both the oxide and metal compositions as a function of temperature and oxygen potential at any point in the specimen. For cases where the oxygen concentration gradient between the surface and the subscale-base metal interface is linear, the kinetics of internal oxidation is an application of Fick's first law: dn/dt = Do ρ Δ[%O]/(1600 δ) [9] where dn/dt is the instantaneous flux of oxygen into the specimen, g-atom per sq cm sec; δ is the instantaneous thickness of the subscale, cm; Do is the diffusion coefficient of oxygen in iron, sq cm per sec; ρ is the density of iron, g per cu cm; and Δ[%O] is the oxygen concentration difference between the surface and subscale-base metal interface, wt pct. Böhm and Kahlweit derived an exact solution to the diffusion equation for systems in which there is a stoichiometric oxide formed. They showed that the oxygen concentration gradient is given by a rather complex error function relation.
For the Fe-Mn-O system and for most other systems that have been studied, however, variations in oxide compositions are small and rates of internal oxidation are sufficiently slow that the deviation from linearity in the concentration gradient of oxygen is negligible. The mass of oxygen transported across a unit area of the specimen for the total time of the experiment is given by the mass balance equation:
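Under the linear-gradient assumption just described, the Fick's-first-law flux can be evaluated numerically. A minimal sketch: the factor 1600 converts Δ[%O] in wt pct to g-atoms of oxygen per gram of iron (100 for wt pct, times 16 g per g-atom of O), and all numerical inputs below are illustrative assumptions, not measurements from the paper:

```python
def oxygen_flux(D_o, rho, delta_pct_o, delta):
    """Quasi-steady oxygen flux into the subscale, g-atom O/(cm^2 s).

    dn/dt = D_o * rho * delta_pct_o / (1600 * delta)
    The factor 1600 = 100 (wt pct -> mass fraction) * 16 (g per g-atom O).
    """
    return D_o * rho * delta_pct_o / (1600.0 * delta)

# Illustrative inputs only (not values from the paper):
D_o = 1e-6     # cm^2/s, order of magnitude for O diffusivity in gamma iron
rho = 7.9      # g/cm^3, density of iron (room-temperature value)
d_pct = 0.02   # wt pct oxygen difference, surface to subscale front
delta = 0.05   # cm, instantaneous subscale thickness

flux = oxygen_flux(D_o, rho, d_pct, delta)   # g-atom O per cm^2 per s
```

Because the flux falls off as 1/δ, integrating this expression over time gives the familiar parabolic growth of the subscale, which is what the mass balance referred to in the text exploits.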
Jan 1, 1969
-
Logging and Log Interpretation - An Electrodeless System for Measuring Electric Logging Parameters on Core and Mud Samples - By I. Fatt
A recently developed system for measuring electrical resistivity of liquids without use of electrodes offers some interesting possibilities in electric logging technology. The equipment as supplied by the manufacturer is satisfactory for continuous mud logging on a drilling rig or for measuring mud or filtrate resistivity in the laboratory. A simple modification of the commercially available instrument makes it suitable for measuring resistivity of core samples in the laboratory. The continuous measurement of mud resistivity on a drilling rig is a convenient means for detecting mixing of formation water with the drilling mud. Such information is useful to the geologist, the mud engineer and the logging engineer. However, continuous mud resistivity logging by conventional electrode-type resistivity cells is beset with difficulties. The mud, sand and rock chips abrade the electrodes, thereby changing the cell constant and eventually destroying the cell. Also, additives and crude oil in the mud may poison the electrodes by coating them with a nonconductive material. An electrode-type resistivity cell, therefore, may give erroneous readings under certain conditions. Electric logging companies circumvent the electrode poisoning problem by using a four-electrode resistivity cell for measurement of mud resistivity. In this cell, change in electrode area does not change the cell constant. However, the four-electrode cell is difficult to adapt for continuous reading and does not solve completely the problem of electrode abrasion by the sand and cuttings in the mud. The measurement of electric logging parameters on core samples in the laboratory encounters some of the same problems discussed in connection with mud logging. Ideally, the electrical resistivity of a core sample should be measured by placing platinum black electrodes in direct contact with the plane ends of a cylindrical or rectangular sample.
Platinum black electrodes, however, are much too fragile and easily abraded to be brought in contact with a rock sample. Also, oil or other constituents in the fluid contained in the sample will poison platinum black. In practice, gold-plated brass electrodes, in an AC bridge circuit operating at about 1,000 cps, are used for routine core analysis. For more precise work in research studies, a four-electrode scheme is used. Preparation of the samples for the four-electrode method is much too involved for routine core analysis. An apparatus for measuring resistivity of liquids without use of electrodes was described by Guthrie and Boys3 in 1879. They suspended a beaker, containing the electrolyte, by a torsion wire and rotated a set of permanent bar magnets around the vessel. The eddy currents induced in the electrolyte reacted against the rotating magnetic field to develop a torque, which was measured as a deflection of the torsion wire. In 1879 this method could not be made precise or convenient because of the lack of strong permanent magnets. The writer described a very greatly improved apparatus similar to that of Guthrie and Boys, but it was not suitable for continuous measurements or for core samples. Many electrodeless resistivity devices using radio frequency current are described in the literature.5,6 These generally are suitable only for noting the end-point in a chemical titration. They do not measure resistivity, instead measuring a complex quantity which includes the dielectric constant and the magnetic permeability. The first description of the apparatus to be discussed in this paper was given by Relis.7 Improvements and modifications are described by Fielden,8 Gupta and Hills,9 and Eichholz and Bettens.10 DESCRIPTION OF APPARATUS The apparatus used in this study is based on the principle that the solution under measurement can form a loop coupling two transformer coils, as shown in Fig. 1.
For a fixed AC voltage applied across Coil A, the voltage appearing across Coil B is a function of the resistance of the liquid-filled loop. The details of the voltage generating and measuring circuits are given in Refs. 7, 8, 9 and 10. A block diagram of the equipment is given in Fig. 7. Special features worth mentioning are the operating frequency of 18,000 cps and the automatic temperature compensation, which results in the given resistivity readings being automatically corrected to 25°C. The liquid loop supplied by the manufacturer, shown in Figs. 1 and 2, was modified for use in core analysis (Fig. 3). The core sample under test is substituted for a section of the original loop. As shown in Fig. 3, the unit accepts only plastic-mounted cylindrical core specimens. A Hassler-type sleeve easily can be designed for the unit if unmounted cores are to be measured. EXPERIMENTAL PROCEDURE MUD LOGGING A simulated mud line was set up in the laboratory.
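The measurement described above reduces to the resistance of a conducting loop: each uniform section obeys R = ρL/A, and sections add in series when a core plug is substituted for part of the liquid path. A minimal sketch under assumed (not published) dimensions and resistivities:

```python
def section_resistance(resistivity, length, area):
    """Resistance of a uniform conducting section: R = rho * L / A.

    resistivity in ohm-cm, length in cm, area in cm^2 -> ohms.
    """
    return resistivity * length / area

# A loop of two series sections: the liquid path and a plastic-mounted
# core plug substituted for part of it.  All values are illustrative.
r_liquid = section_resistance(resistivity=50.0, length=10.0, area=1.0)
r_core = section_resistance(resistivity=500.0, length=5.0, area=2.0)
loop_resistance = r_liquid + r_core   # series combination seen by Coil B

# With the liquid section calibrated, the core resistivity follows from
# the measured loop resistance: rho = R * A / L.
r_core_measured = loop_resistance - r_liquid
core_resistivity = r_core_measured * 2.0 / 5.0
assert abs(core_resistivity - 500.0) < 1e-9
```

In practice the instrument reports resistivity directly via its calibrated cell constant and temperature compensation; the sketch only shows why substituting the core for a loop section leaves its resistivity recoverable from the total loop resistance.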