-
Geophysics - Seismic-Refraction Method in Ground-Water Exploration
By W. E. Bonini, E. A. Hickok
IN the course of an investigation directed toward expanding ground-water facilities in Essex and Morris counties, New Jersey, the Board of Water Commissioners of the city of East Orange authorized a seismic-refraction survey for the purpose of delineating bedrock topography below unconsolidated overburden. Results of the survey were highly satisfactory and led to the preparation of a comparatively detailed bedrock contour map. Knowledge of the bedrock depth and configuration was an important aid in the selection of sites for test drilling. The portion of the East Orange Water Reserve under consideration is in the flood plain of the Passaic River about 10 miles west of Newark, N. J. The flood plain is about 175 ft above mean sea level and is bordered by low hills rising to elevations of approximately 250 ft. The bedrock underlying the Water Reserve consists of sandstone and shale of the Triassic Brunswick formation and is covered everywhere by deposits of unconsolidated glacial outwash sand and gravel, lacustrine clay, and recent river silt as much as 150 ft thick. Yield of wells in the sandstone and shale averages 100 to 200 gpm. Since production wells constructed in the sand and gravel aquifer in the buried river valley shown on the contour map (Fig. 1) yield 300 to 1400 gpm, it was proposed to locate additional production wells in this buried valley, where the yields per well would be maximum. In 1939 and 1946 the East Orange Water Dept. had electrical-resistivity surveys made to determine depths to bedrock. From the resistivity data the exploration company prepared a bedrock contour map. A well-field expansion program begun in 1955 utilized this information to locate sites for test wells along a predicted northward extension of the buried valley in which existing production wells are located. After several test wells (wells 201-205) had been drilled, it became apparent that the resistivity information was unreliable.
For example, test well 201 recorded bedrock at a depth of 72 ft, whereas the resistivity depth determination was 130 ft. As a consequence, the test drilling program was temporarily suspended and a seismic survey was undertaken to determine the topography and extent of the buried valley known from well records to underlie the existing well field. In the first phase of this study, several seismic shot-point locations were placed at sites where well logs had been obtained previously. This procedure is necessary in a new area to determine whether the seismic method is applicable and what degree of accuracy is to be expected. At the East Orange Water Reserve, depths obtained from the shot points near test wells 202, 203, and 204 were within 8 to 11 pct of the depths logged (Table I). With this assurance that accurate results could be obtained, additional seismic spreads were located on the Water Reserve. Using a portable refraction seismograph, in the fall of 1955 a crew of four men shot a total of 29 reversed seismic spreads in a period equivalent to six field days. Charges as heavy as 3 lb of 40 pct dynamite were necessary at a few places to overcome ground vibrations caused by traffic on nearby highways. At most other sites, a 1-lb charge was sufficient. Travel-time plots were made for all spreads, and depths and true velocities were calculated according to formulas for multiple sloping layers by Ewing, Woollard, and Vine. The plot of spread 7 (Fig. 2) is typical of the short spreads in which bedrock was shallow—about 50 ft in this case. Where there were not enough arrivals through the bedrock to define the high-velocity bedrock line, the spreads were lengthened. This was done by placing shots on line several hundred feet away from each end of the line of geophones. It was then possible to construct complete reverse plots for both short and extended shot points (see spread 27, Fig. 3). Four individual depths were calculated from each extended spread.
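The depth calculations described above used the multilayer sloping-interface formulas of Ewing, Woollard, and Vine, which are beyond this excerpt. As an illustration of the principle, the standard two-layer flat-interface crossover-distance formula can be sketched as follows; the velocities and crossover distance below are illustrative assumptions, not values taken from the survey:

```python
import math

def refraction_depth(v1, v2, x_cross):
    """Depth to a refractor from a two-layer travel-time plot.

    v1, v2  : P-wave velocities of overburden and bedrock (ft/s), v2 > v1
    x_cross : crossover distance where the direct and refracted
              arrivals have equal travel time (ft)
    """
    return (x_cross / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))

# Assumed values in the range reported in the excerpt
# (intermediate layer ~5500 fps, bedrock ~13,000 fps)
depth = refraction_depth(5500.0, 13000.0, 150.0)
```

For these assumed inputs the formula gives a depth near 48 ft, comparable to the ~50-ft shallow-bedrock spreads mentioned in the text.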
Three and in some cases four seismic layers were observed. The surficial layer had a velocity range of 900 to 1200 fps, the lowest velocity recorded. This seismic layer is above the water table and is interpreted as recent river silt. The bedrock had the highest velocities, which ranged from 10,600 to 16,400 fps. Intermediate velocities ranged from 4500 to 6800 fps. In every case the intermediate layer was within
Jan 1, 1959
-
Institute of Metals Division - 475°C (885°F) Embrittlement in Stainless Steels
By A. J. Lena, M. F. Hawkes
Changes in hardness, tensile properties, microstructure, electrical resistance, and X-ray diffraction effects indicate that lattice strains are necessary for the embrittlement of ferritic stainless steels when heated for relatively short times at 475°C (885°F). It is suggested that 475°C (885°F) embrittlement is due to the accelerated formation of an intermediate stage in the formation of σ under the influence of these strains. FERRITIC stainless steels (low-carbon alloys of iron with more than 15 pct Cr) are subject to two forms of embrittlement when heated in the temperature range of 375° to 750°C. The embrittlement which occurs after long-time heating between 565° and 750°C is well understood; it is caused by the precipitation of the hard, brittle σ phase. Sigma is an intermetallic compound of approximately equi-atomic composition with an extended range of formation in Fe-Cr alloys. The maximum temperature at which this form of embrittlement can occur depends upon chromium content; it is approximately 620°C for a 17 pct Cr steel and 730°C for a 27 pct Cr steel. The other form of embrittlement occurs after relatively short heating periods in the range of 375° to 565°C; in the higher-chromium steels, hours may be sufficient, as compared with months for σ embrittlement. This phenomenon is not at all well understood, and several controversial theories have been proposed. The rate and intensity of embrittlement increase with increasing chromium content, but the maximum rate occurs at 475°C regardless of chromium content. As a result, the phenomenon has been termed 475°C (885°F) embrittlement. The effect of 475°C embrittlement on the properties of ferritic stainless steels has been thoroughly reviewed by Heger.1 The embrittlement causes a pronounced decrease in room-temperature impact strength and ductility, a large increase in hardness and tensile strength, and a decrease in electrical resistivity and corrosion resistance.
Microstructural changes accompanying embrittlement are minor and difficult to interpret; a general grain darkening, the appearance of a lamellar precipitate, grain-boundary widening, and precipitation along ferrite veins have been reported at various times. With the exception of reported line broadening, X-ray diffraction studies by conventional Debye analysis of solid samples have been of little value. By making use of electron diffraction methods, Fisher, Dulis, and Carroll have recently shown the existence of a chromium-rich, body-centered cubic phase in 27 pct Cr steels which had been aged at 482°C (900°F) for as long as four years. Two types of theories have been advanced to account for the embrittlement. The first of these requires the precipitation of a phase not inherent in the Fe-Cr system, with various investigators suggesting a carbide,3 nitride,3 phosphide,4 or oxide. Theories of this type have difficulty accounting for the influence of alloying elements on the embrittlement and for the facts that a minimum chromium content is necessary for embrittlement and that the intensity of embrittlement increases with increasing chromium content. The second type of theory relates 475°C embrittlement to σ-phase formation, which is inherent in the Fe-Cr system. An assumption of this kind can adequately explain the influence of alloying elements, for they exert an effect on 475°C embrittlement similar to that on σ-phase formation, as can be seen in Table I. The minimum chromium content is essentially the same for both phenomena, and it has been shown12,13 that σ is a stable phase in the embrittling temperature range. In addition, it has been reported14,15 that pure alloys embrittle to the same extent as commercial-type alloys. There are, however, several factors which have prevented complete acceptance of a σ-phase theory.
Foremost of these is that the embrittlement can be removed by reheating for short time periods above 600°C, which in the higher-chromium steels is within the stable σ region. No σ has ever been observed after one of these curing treatments, nor has any σ been found as a result of embrittlement at 475°C. In addition, the simple precipitation of σ cannot explain the time-temperature relationships for reactions between 350° and 750°C. This behavior is shown schematically in Fig. 1. Newell16 and Riedrich and Loib4 have shown that 475°C embrittlement follows a C-type curve as illustrated, while Short-
Jan 1, 1955
-
Institute of Metals Division - Distribution of Lead between Phases in the Silver-Antimony-Tellurium System
By Voyle R. McFarland, Robert A. Burmeister, David A. Stevenson
The distribution of lead between phases in the Ag-Sb-Te system was studied using microautoradiography. Two compositions were investigated, both containing an intermediate phase known as silver antimony telluride as the major phase, one containing Ag2Te and the other Sb2Te3 as the minor phase. For both compositions, two thermal treatments were used: nonequilibrium solidification from the melt and long equilibration anneals of the as-solidified structure. For each composition, lead was segregated in the minor phase of the as-solidified structure but was distributed in the matrix after anneal. The electrical resistivity and carrier type were insensitive to the distribution of lead in the two-phase structure. THERE has been considerable interest in the Ag-Sb-Te system because of its thermoelectric properties. The major interest has been in compositions on the vertical section between Ag2Te and Sb2Te3, particularly the 50 mole pct Sb2Te3 composition AgSbTe2 (compositions are conveniently expressed as mole percent Sb2Te3 along the Ag2Te-Sb2Te3 section). One of the major problems in the proper evaluation and utilization of this material is the inability to control the electrical properties through impurity additions: all alloys prepared to date have been p-type, even with the addition of large amounts of impurities. It has been shown that all the compositions previously studied contain an intermediate phase of the NaCl structure as a major phase (denoted by δ) and a second phase, either Ag2Te or Sb2Te3, as a minor phase.1-3 One explanation for the unusual electrical behavior of this material is that the impurity additions have a higher solubility in the second phase than in the matrix; the impurity would segregate to the second phase, leaving the bulk matrix essentially free of impurity.4 In order to investigate this mechanism with a specific impurity element, the distribution of lead between the two phases was determined using autoradiography.
Lead-210 was chosen because of the suitability of its 0.029-Mev β particle for autoradiography and also because of the interest in lead as an impurity in this system.5,6 EXPERIMENTAL PROCEDURE Two compositions were taken from the vertical section between Ag2Te and Sb2Te3, 50 mole pct Sb2Te3 (viz., AgSbTe2) and 75 mole pct Sb2Te3, in which Ag2Te and Sb2Te3 appear, respectively, as the minor phase. Lead containing radioactive lead (Pb210) was added to the above compositions to provide a concentration of 0.1 wt pct Pb. The material was placed in a graphite crucible in a quartz tube which was then evacuated and sealed. The samples were melted and solidified by cooling at a rate of 8°C per min and then removed and prepared for microautoradiography. After autoradiographic examination of these samples, they were again encapsulated, annealed in an isothermal bath at 300°C for a number of days, and prepared for examination. An alternate method of preparation employed a zone-melting furnace; the molten zone traversed the sample at a rate of 1.2 cm per hr, and the solid was maintained at a temperature of 500°C both before and after solidification. This treatment had the same effect as solidification at a slow rate followed by an anneal for several hours at 500°C. In order to obtain the best resolution, thin sections of the alloy were prepared by hand lapping to a thickness of approximately 20 μ. Other samples were prepared for examination by lapping a flat surface on the bulk sample. The resolution, although somewhat better with the former procedure, was adequate in both instances, and the majority of the samples were treated in the latter fashion. A piece of autoradiographic film (Kodak Experimental SP 764 Autoradiographic Permeable Base Safety Stripping Film) was stripped from its backing, care being taken to avoid fogging due to static-electrical discharge.
A small amount of water was placed on the sample, the film applied emulsion side down on the surface of the sample, and the sample and the film dipped into water in order to assure smooth contact. After drying, the film was exposed for 2 to 5 days, the period of time selected to give the best resolution. The film was developed on the specimen and fixed and washed in place. Two major factors must be considered in establishing the reliability of an autoradiograph: the in-
Jan 1, 1964
-
Part VI – June 1969 - Papers - Surface Self-Diffusion of Nickel
By P. Douglas, G. M. Leak, B. Mills
The sinusoidal surface relaxation technique has been used to measure the surface self-diffusion coefficient of spectroscopically pure nickel over a wide temperature range under a hydrogen atmosphere. A kink in the Arrhenius plot has been observed. In the temperature range T/Tm = 0.98 to 0.80 (T in °K, Tm the melting temperature) the average self-diffusion coefficient follows a single Arrhenius expression. Below the temperature T/Tm ≈ 0.80 a decrease in the slope of the log Ds vs 1/T plot is observed. This is associated with a diffusion process characterized by a lower activation energy (~20,000 cal mole⁻¹) and a smaller preexponential term. A series of experiments was carried out at T/Tm = 0.61 under a hydrogen atmosphere of higher oxygen partial pressure than for the rest of the experiments. It was found that Ds was significantly depressed due to oxygen adsorption. This evidence supports the opinion that the low-temperature process (activation energy ~20,000 cal mole⁻¹) is unlikely to be due to oxygen adsorption. An interesting feature of the present data is that the transition temperature (T/Tm ≈ 0.80) is a function of orientation. For a small number of crystals of measured orientation the transition temperature was observed to be higher towards the low-index (100) pole. Theories of surface diffusion are briefly reviewed, and it is concluded that the present results are best explained by invoking a surface roughening process. GJOSTEIN has recently analyzed available surface-diffusion data for a wide range of metals. He suggested that two mechanisms were operative for fcc metals, an adatom process at high temperatures and a vacancy process at low temperatures. Results for nickel can be summarized as follows. At low temperatures (T/Tm ≈ 0.3 to 0.44) under ultrahigh-vacuum conditions, Melmed2 measured an activation energy Q of 21 kcal mole⁻¹ using field electron emission microscopy.
At higher temperatures (T/Tm = 0.7 to 0.9) under vacuum, Maiya and Blakely measured Q as 39 kcal mole⁻¹ using the multiple-scratch smoothing technique. The present work was undertaken to try to find out if two distinct processes could be observed. High-temperature results give Q of about 47 kcal mole⁻¹; there is evidence also for a low-temperature value of about 20 kcal mole⁻¹. These measurements were all made under a hydrogen atmosphere, in the temperature range 860° to 1412°C. Concurrent with the present study, Bonzel and Gjostein have also observed a break in the Arrhenius plot for the (110) surface of nickel. These measurements, under ultrahigh-vacuum conditions using the laser diffraction technique, are in excellent agreement with the work reported here under hydrogen annealing conditions. THEORY The available surface relaxation techniques include single and multiple scratch smoothing and grain-boundary grooving. The processes have been compared in detail by Gjostein for conditions where surface diffusion dominates6 and by Mills et al. where volume diffusion dominates. In summary, the relevant points are as follows. Grain-boundary grooving gives an average Ds for the two surfaces adjacent to the boundary, and this can, to some extent, be simplified by using symmetrical bicrystals. This technique has been used to study the effect of environment on Ds for silver and copper. Scratch techniques yield Ds values for the small orientation range exposed by the scratches (~2 deg). The multiple-scratch process is preferable because the profile rapidly becomes sinusoidal and can then be interpreted theoretically in a relatively simple way. Also, corrections for mass-transport processes other than surface diffusion can be introduced easily. Mullins considered a sinusoidal profile y = a(t) sin ωx, where ω = 2π/λ and λ is the wavelength of the profile.
After time t the profile can be described by the equation a(t) = a(0) exp[−(Aω + A′ω² + Cω³ + Bω⁴)t]. The terms A, A′, C, and B, which account, respectively, for contributions due to evaporation-condensation, diffusion through the gas phase, volume diffusion through the lattice, and surface diffusion, are defined in terms of the following quantities:
Ds = the surface self-diffusion coefficient
γs = the surface energy per unit area
p = the equilibrium vapor pressure over a flat surface
ρ0 = the equilibrium vapor density over a flat surface
DG = the diffusion coefficient of vapor molecules in the inert gas
DM = the mass-transfer diffusion coefficient, which for a pure cubic metal is Dv/f, where Dv is the radiotracer diffusion coefficient and f is the correlation factor
Ω = the molecular volume
ν = the surface density of atoms, Ω^(-2/3)
M = mass of an evaporating molecule
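When surface diffusion dominates, only the B term survives and the amplitude decays as a(t)/a(0) = exp(−Bω⁴t) with, in Mullins' treatment, B = Ds γs Ω² ν / kT. A minimal numerical sketch follows; all input values are assumed, order-of-magnitude cgs numbers for a nickel-like metal, not the authors' data:

```python
import math

K_BOLTZMANN = 1.380649e-16  # erg per K (cgs units)

def amplitude_ratio(Ds, gamma_s, omega_v, nu, T, wavelength, t):
    """a(t)/a(0) = exp(-B * w**4 * t) for surface-diffusion-dominated
    smoothing of a sinusoidal profile (Mullins), with
    B = Ds * gamma_s * omega_v**2 * nu / (k * T)."""
    w = 2.0 * math.pi / wavelength          # wavenumber, 1/cm
    B = Ds * gamma_s * omega_v**2 * nu / (K_BOLTZMANN * T)
    return math.exp(-B * w**4 * t)

# Assumed inputs: Ds = 1e-5 sq cm/s, surface energy 2000 erg/sq cm,
# atomic volume 1.09e-23 cu cm, nu = omega_v**(-2/3), T = 1400 K,
# 10-micron wavelength, one hour of annealing
omega_v = 1.09e-23
ratio = amplitude_ratio(1e-5, 2000.0, omega_v, omega_v ** (-2.0 / 3.0),
                        1400.0, 1e-3, 3600.0)
```

The strong ω⁴ dependence is why short-wavelength scratch profiles smooth out much faster than long ones.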
Jan 1, 1970
-
Part VI – June 1968 - Papers - Internal Oxidation of Iron-Manganese Alloys
By J. H. Swisher
When an Fe-Mn alloy is internally oxidized, the inclusions formed are MnO which contains some dissolved FeO. In the internal oxidation reaction, not all of the manganese is oxidized; some remains in solid solution as a result of the high Mn-O solubility product in iron. Taking these factors into consideration, the rate of internal oxidation of an Fe-1.0 pct Mn alloy is computed as a function of temperature, using available thermodynamic data and recently published data for the solubility and diffusivity of oxygen in iron. The predicted and experimentally determined rates for the temperature range from 950° to 1350°C are in good agreement. THE rates of internal oxidation of austenitic Fe-Al and Fe-Si alloys have been studied extensively.1-4 Schenck et al. report the results of a few experiments with Fe-Mn alloys at 854° and 956°C, and Bradford5 has studied the rate of internal oxidation of commercial alloys containing manganese in the temperature range from 677° to 899°C. When Fe-Mn alloys are internally oxidized, the inclusions formed are solutions of FeO in MnO, the composition depending on the experimental conditions. Since the thermodynamics of the Fe-Mn and FeO-MnO systems have been investigated,6-9 and since the solubility and diffusion coefficient of oxygen in γ iron have been determined recently, it is possible to predict the rate of internal oxidation from known data. The calculations used in predicting the rate of internal oxidation will first be outlined; then the results of the prediction will be compared with the experimental results of this investigation. PREDICTION OF PERMEABILITY FROM THERMODYNAMIC AND DIFFUSIVITY DATA Oxygen is provided for internal oxidation in these experiments by the dissociation of water vapor on the surface of the alloy. The dissociation reaction is: H2O(g) = H2(g) + [O]  [1] where [O] denotes oxygen in solution.
The equilibrium constant for this reaction is known as a function of temperature. As oxygen diffuses into the alloy, oxide inclusions are formed which are MnO with some FeO in solid solution. The reactions occurring are: [Mn] + [O] = (MnO)  [3] and [Fe] + [O] = (FeO)  [4] where [Mn] is manganese dissolved in iron and (FeO) is iron oxide dissolved in MnO. The overall reactions may be written as follows: [Mn] + H2O(g) = (MnO) + H2(g)  [5] and [Fe] + H2O(g) = (FeO) + H2(g)  [6] The standard free-energy changes and equilibrium constants for Reactions [5] and [6] are known.6 Therefore the equilibrium constants for Reactions [3] and [4] may be obtained by combining known thermodynamic data for Reactions [1], [5], and [6]. For the present purpose, both the Fe-Mn7,8 and FeO-MnO systems can be considered to be ideal, i.e., [aMn] = [NMn] and (aMnO) = (NMnO) = 1 − (NFeO), where the N's are mole fractions. These relations, together with the equilibrium constants above, permit us to compute both the oxide and metal compositions as a function of temperature and oxygen potential at any point in the specimen. For cases where the oxygen concentration gradient between the surface and the subscale-base metal interface is linear, the kinetics of internal oxidation is an application of Fick's first law, where dn/dt is the instantaneous flux of oxygen into the specimen, g-atom per sq cm sec; δ is the instantaneous thickness of the subscale, cm; Do is the diffusion coefficient of oxygen in iron, sq cm per sec; ρ is the density of iron, g per cu cm; Δ[%O] is the oxygen concentration difference between the surface and subscale-base metal interface, wt pct. Böhm and Kahlweit derived an exact solution to the diffusion equation for systems in which there is a stoichiometric oxide formed. They showed that the oxygen concentration gradient is given by a rather complex error-function relation.
For the Fe-Mn-O system and for most other systems that have been studied, however, variations in oxide compositions are small and rates of internal oxidation are sufficiently slow that the deviation from linearity in the concentration gradient of oxygen is negligible. The mass of oxygen transported across a unit area of the specimen for the total time of the experiment is given by the mass-balance equation:
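The excerpt breaks off before the mass balance, but balancing the linear-gradient flux against the oxygen locked up as oxide in the subscale leads to the familiar parabolic growth law. The sketch below illustrates that parabolic behavior under assumed concentrations; it is an illustration of the kinetics, not Swisher's exact working equation:

```python
import math

def subscale_thickness(Do, pct_O_surf, pct_Mn, t,
                       rho=7.6, M_O=16.0, M_Mn=54.94):
    """Subscale depth (cm) from a linear oxygen gradient balanced
    against the oxygen consumed as MnO: delta = sqrt(2*Do*cO*t/cMn).

    Do in sq cm per sec, concentrations in wt pct, t in seconds;
    rho is an assumed density of iron, g per cu cm.
    """
    c_O = rho * pct_O_surf / (100.0 * M_O)   # g-atom O per cu cm
    c_Mn = rho * pct_Mn / (100.0 * M_Mn)     # g-atom Mn per cu cm
    return math.sqrt(2.0 * Do * c_O * t / c_Mn)

# Parabolic check with assumed values: quadrupling the time
# should double the subscale depth
d1 = subscale_thickness(2e-6, 0.01, 1.0, 3600.0)
d2 = subscale_thickness(2e-6, 0.01, 1.0, 4 * 3600.0)
```

The square-root time dependence is the signature of diffusion-controlled internal oxidation that the paper's predicted rates embody.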
Jan 1, 1969
-
Technical Notes - Some Fundamental Properties of Rock Noises
By Wilbur I. Duvall, Wilson Blake
The microseismic method of detecting instability in underground mines was developed by the U.S. Bureau of Mines (USBM) in the early 1940's. The method relies on the fact that as rock is stressed, strain energy is stored in the rock. Accompanying the buildup of strain energy are small-scale displacement adjustments that release small amounts of seismic and acoustic energy. These small-scale disturbances, which can be detected with the aid of special geophysical equipment, are called microseisms or self-generated rock noises. It was further determined that as failure of rock is approached, the rate at which rock noises are generated increases. Thus, by monitoring a rock structure at intervals and plotting rock-noise rates vs. time, a semiquantitative estimate of the behavior and stability of the structure can be made. Since considerable use of the microseismic method is still being made by various mining and construction companies, USBM undertook a comprehensive review of the method and a study of the fundamental properties of rock noises. Because all prior work on rock noises had been done with resonant-type geophones, which prevented any analysis of their vibration records, it was necessary to develop instrumentation and field techniques that would permit investigation of properties such as the frequency spectrum and absorption characteristics of rock noises and determination of whether both P- and S-waves are generated by a rock noise. The aim of this program is the design of microseismic instrumentation that can be better utilized as an engineering tool than the presently available microseismic equipment. This new design, based on the basic properties of rock noises, should allow better utilization of these phenomena in the study and location of zones of incipient instability in both underground and open-pit mines. EXPERIMENTAL PROCEDURE To study the waveform of rock noises, it was necessary to develop a microseismic system with a broad bandwidth.
To achieve high sensitivity and broad frequency response, commercial ceramic accelerometers were used. The present broad-band microseismic system consists of accelerometers as geophones, low-noise preamplifiers, high-gain amplifiers, and an FM magnetic-tape recorder. This seven-channel system has a flat frequency response from 20 to 10,000 Hz, a noise level of less than 2.0 μv, and a dynamic range (including manual set attenuation) of greater than 100 db; it can detect signals with acceleration levels as low as 2 μg. The entire system is solid state and hence battery operated and portable (Fig. 1). Analysis procedures consist of playing back the 30-in.-per-sec (ips) magnetic-tape recordings at 1 7/8 ips to expand the time scale of a recorded rock-noise event and then recording this on a high-speed direct-writing oscillograph. The oscillographic records are then digitized and run through Fourier-integral-analysis computer programs to determine the frequency spectrum of a rock-noise event. The oscillographic records are also examined visually to determine if both P- and S-waves can be recognized in a rock-noise waveform. Broad-band microseismic recordings have been made at field sites in a wide variety of rock types and in both underground and open-pit mines. Sites include the Kimbley Pit, Ruth, Nev.; the Galena Mine, Wallace, Idaho; the Colony Development Mine, Grand Valley, Colo.; the Cliff Shaft Mine, Ishpeming, Mich.; and the White Pine Mine, White Pine, Mich. DATA AND DISCUSSION Analyses of the recorded data have shown that rock-noise frequencies span a very broad band. Figs. 2 and 3 show typical rock-noise events and their frequency spectra. In addition, it is evident from these figures that the waveform of a rock noise is very complex. The wide frequency variation, 50 to 7500 Hz, is due to many variables; the effect of travel distance is the only one examined in this study.
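The digitize-and-Fourier-analyze step described above can be sketched with a modern FFT. The synthetic event below, a decaying 1500-Hz burst, is an assumed stand-in for a digitized oscillograph record; the sample rate and burst parameters are illustrative, not values from the study:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) at the peak of the amplitude spectrum
    of a digitized rock-noise record."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC term

# Synthetic event: a decaying 1500-Hz burst sampled at 20 kHz for 50 ms
fs = 20000
t = np.arange(0, 0.05, 1.0 / fs)
event = np.exp(-80.0 * t) * np.sin(2.0 * np.pi * 1500.0 * t)
peak = dominant_frequency(event, fs)
```

A sample rate well above twice the highest frequency of interest (7500 Hz here) is required, which is why the broad-band recordings were played back slowed down before digitizing.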
The higher-frequency components of the wave are rapidly absorbed with distance or increasing travel time. Fig. 4 shows the change in waveform resulting from an additional travel distance of 195 ft. From these data, it is apparent that a resonant-type microseismic geophone cannot respond to all frequencies generated by a rock noise, and although the tuned geophone is more sensitive at resonance, a geophone with less sensitivity but broader bandwidth is much more effective in detecting rock noises. In addition, a study of broad-band microseismic records shows that both P- and S-wave arrivals are easily detected, as shown in Fig. 5. All records analyzed to date show that most of the energy is in the S portion of the wave; hence, microseismic geophones should be well
Jan 1, 1970
-
Reservoir Engineering - General - Effect of Gas Saturation on Static Pressure Calculations from T...
By J. R. Elenbaas, J. A. Vary, D. L. Katz
The development of gas fields, oil fields, and aquifers for storing natural gas is treated from two main viewpoints: (1) the volumetric storage capacity for gas in a given situation and (2) the prediction of the number of wells required for the delivery of gas. Other experiences in the design and operation of storage fields are included. INTRODUCTION Storage of natural gas in underground reservoirs near the terminus of long-distance pipelines has been the prime factor in opening the space-heating market to the natural gas industry. Storage has permitted a major increase in both the load and the load factor of pipelines; some are now operating at steady load throughout the year. Thus, underground storage has been responsible for the rapid increase in demand for natural gas in recent years. Three types of reservoirs have been used for gas storage: natural gas reservoirs, oil reservoirs, and water-bearing sands or aquifers. This paper presents the factors to be considered when developing gas storage reservoirs of these three categories. There are two prime considerations for any storage reservoir: (1) the volume of gas which a given reservoir will store advantageously and (2) the number of wells needed to provide the required peak deliverability. These two problems will be considered for the three types of reservoirs just noted. STORAGE IN PARTIALLY DEPLETED GAS FIELDS Early storage operations consisted of replenishing the natural gas in a depleted gas field situated adjacent to the market. Today, newly discovered fields near the market may be considered for storage, and this discussion applies equally to both types of reservoirs. For reservoirs originally containing gas or oil, the question of the impermeability of the cap rock normally does not arise. However, such fields are likely to have many wells drilled either to or through the reservoir under consideration. Positive assurance must be obtained that such wells are or can be made mechanically tight.
Corroded casings may need to be lined or permanently plugged. Abandoned wells should be reopened and properly cemented. The volumetric capacity for gas storage depends upon the space available in the porous rock as well as the pressure and temperature of the gas in the reservoir. The production-pressure decline data on partially depleted gas reservoirs without water drive permit calculation of the reservoir space for gas. Isopachous maps of sand volume and porosity data for the reservoir rock provide an alternate method of calculating the pore volume for water-drive reservoirs. The pressure range selected for the storage cycle depends upon (1) the safe upper limit of pressure, (2) the flow capacity of wells, and (3) compression requirements when injecting gas into the reservoir or delivering to market. Normally, gas and oil fields have pressures at discovery in the range of 0.43 to 0.52 psi/ft of depth. Pressures of around 1.0 to 1.2 psi/ft of depth appear to lift the overburden1-3 and invite uncontrolled movement of fluids in the porous rock. Some top pressure is normally selected for a storage reservoir, ranging from below discovery pressure for deeper reservoirs to 0.65 psi/ft of depth for shallower reservoirs. Pressures to 0.66 psi/ft have been experienced without difficulty. The lower pressure limit is set by water intrusion accompanying low pressures, reduced flow capacity for wells at lower pressures, and compression requirements. Depletion-type gas reservoirs often encounter water problems in the later stages of gas production. Such water intrusion may be due to movement from the surrounding aquifer. Accordingly, displacement of this water back into the aquifer by gas pressure, and subsequent surges of water corresponding to the gas-storage pressure cycle, must be considered. Storage fields often produce in four months a volume of gas equal to their initial content. Rapid decreases in reservoir pressure occur, such as 20 psi/day.
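The gradient rules of thumb quoted above translate into a simple pressure window per unit depth. A sketch, in which the 2,000-ft depth and the 0.45 psi/ft discovery gradient are illustrative assumptions chosen from the quoted ranges:

```python
def storage_pressure_window(depth_ft, discovery_grad=0.45, top_grad=0.65):
    """Approximate discovery pressure and maximum storage pressure (psi).

    Gradients follow the text: discovery pressures fall around
    0.43-0.52 psi/ft of depth; the top storage pressure for shallower
    reservoirs runs to about 0.65 psi/ft.
    """
    return discovery_grad * depth_ft, top_grad * depth_ft

# A hypothetical 2,000-ft storage reservoir
p_discovery, p_top = storage_pressure_window(2000.0)
```

For the assumed 2,000-ft depth this gives roughly 900 psi at discovery and a 1,300-psi top storage pressure, comfortably below the ~1.0 psi/ft overburden-lifting gradient the text warns against.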
Accordingly, closed-in pressure observation wells which reflect the pressure in the bulk of the reservoir are required for following the operation of the reservoir. It has been found that a plot of observation wellhead pressures against gas content, Fig. 1, is very useful in observing operation of the field, checking the inventory, and predicting future behavior. The plot is based on a given quantity of base or cushion gas in place. The injection and withdrawal curves may spread depending upon the homogeneity of the reservoir rock, permeability of the rock, well spacing, and flow rates.
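For a volumetric (no-water-drive) reservoir, the pressure-versus-content relation underlying such a plot is the straight p/z-vs-gas-content line of material balance, and it can be inverted to check inventory from an observed shut-in pressure. A sketch; the two anchor points below are hypothetical, not taken from Fig. 1:

```python
def pz_line(content1, pz1, content2, pz2):
    """Fit the straight p/z-vs-gas-content line assumed for a
    volumetric (no-water-drive) storage reservoir."""
    slope = (pz2 - pz1) / (content2 - content1)
    intercept = pz1 - slope * content1
    return slope, intercept

def inventory_at(pz, slope, intercept):
    """Invert the line: gas content implied by an observed shut-in
    p/z, useful for checking inventory against metered volumes."""
    return (pz - intercept) / slope

# Hypothetical anchor points: (10 Bcf, p/z = 400 psia) and
# (30 Bcf, p/z = 1200 psia)
slope, intercept = pz_line(10.0, 400.0, 30.0, 1200.0)
est = inventory_at(800.0, slope, intercept)
```

Departure of observed points from the fitted line is one way the spread between injection and withdrawal curves noted above shows up in practice.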
-
Instrumentation For Mine Safety: Fire And Smoke Problems And SolutionsBy Ralph B. Stevens
INTRODUCTION Underground fires continue to be one of the most serious hazards to life and property in the mining industry. Although underground mines are analogous to high-rise buildings, where persons are isolated from immediate escape or rescue, application of technology to locate and control fire hazards while they are still controllable has been slow in underground mines. Even in large surface structures such as hotels, often only fire-protection systems that meet minimal legal requirements are implemented, owing to the high cost of adding extensive extinguishing systems, isolation barriers, alternate ventilation, escape routes and alarm systems. Incomplete and ineffective protection occasionally is evidenced where cost would not seem to be a factor, such as the $211 million MGM Grand Hotel fire of November 21, 1980.1 Paramount in increasing fire safety and decreasing the threat of serious fire is early warning, followed by proper decision analysis to perform the correct action. However, very complex fire situations can arise in structures such as high-rise buildings and underground mines simply because of the distances between the numerous fire-potential locations and fire-safe areas. Other complexities arise when normal activities emit products of combustion that signal a fire condition to a sensitive fire/smoke sensor. For example, the operation of diesel equipment or regular blasting can produce combustion products that reach the sensitive alarm points of many sensors.2 Smoke detectors in surface installations provide fire warning when occupants are at a distant location or sleeping, thus greatly reducing injuries and property damage. However, when installed in the harsh environments of underground mines, fire and smoke detection equipment soon becomes inoperative or unreliable, or requires excessive maintenance. The U.S.
Bureau of Mines has performed many studies and tests to improve fire and smoke protection for underground mine workers.3 This paper describes several USBM safety programs which included in-mine testing with mine fire and smoke sensors, telemetry and instrumentation to develop recommendations for improving mine fire safety. It is hoped that the technology developed during these programs can be added to other programs to provide the mining industry with the necessary fire safety facts. By recognizing fire potentials and being provided with cost-effective, proven components that will perform reliably under the poor environmental conditions of mining, mine operators can provide protection for their working lives and property equal to that which they provide for themselves and their families at home. The basis of this report is two USBM programs for fire protection in metal and nonmetal mines4,5 and one coal program.6 The data were collected beginning in May 1974 and continuing through the present, with underground tests of a South African fire system installed at Magma Mine in Superior, Arizona, and a computer-assisted, experimental system at Peabody Coal Mine in Pawnee, Illinois. The conduct of each program was as follows:
• Define the problem and its magnitude in the industry
• Develop concepts to solve or diminish the problem
• Review available hardware or systems approaches to fit the concepts
• Install and demonstrate the performance of a prototype system through fire tests in an operating mine.
MINE FIRE FACTS Whether in coal or metal and nonmetal mines, the potential severity of fire hazard is directly related to location. As shown in Figure 1, fire in intake air at zones A, B, C or D can cause contaminated air to spread throughout the mine quickly if not detected, isolated or rerouted. Causes and locations of former metal and nonmetal fires are represented in Table 1; the causes and locations of fatalities and injuries are shown in Table 2.
Coal-related fires and their impact on deaths and injuries are graphed in Figure 2; their locations are described in Table 3.7 Significantly, the table shows that the hazard to personnel was three times greater for fires occurring in shaft or slope areas, and the percentage of deaths and injuries was four times that of other areas. Number of Persons Affected A 129-mine sample indicated that from 8 to 479 employees per shift work in underground metal and nonmetal mines, and that deeper mines have larger populations, as shown in Figure 3. Coal mining shows similar employment, and a 16-state sample of 670 mines employing at least 25 persons shows the distribution in Figure 4. Drift mines accounted for 58 percent of the sample but employed only 45 percent of the underground workers.
Jan 1, 1982
-
Institute of Metals Division - The Cadmium-Uranium Phase DiagramBy Allan E. Martin, Harold M. Feder, Irving Johnson
The cadmium-uranium system was studied by thermal, metallographic, X-ray and sampling techniques; special emphasis was placed on the establishment of the liquidus lines. The single intermetallic phase, identified as the compound UCd11, melts peritectically at 473°C to form α-uranium and a melt containing 2.5 wt pct uranium. The cadmium-rich eutectic (0.07 wt pct uranium) freezes at 320.6°C. Solid solubilities in uranium and cadmium appear to be negligible. Between 473°C and 600°C the liquidus line is retrograde. NO publication relating to the cadmium-uranium phase diagram was found in the literature. The establishment of this diagram was of considerable interest to us because of a possible application of the system to the pyrometallurgical reprocessing of nuclear fuels. Analysis of liquid samples, metallographic examination, thermal analysis, and X-ray diffraction analysis were used to establish the phase diagram from about 300° to 670°C. Particular emphasis was placed on the establishment of the liquidus lines. The same system was concurrently studied in this laboratory by the galvanic cell method.1 Both studies benefited from a continual interchange of information. MATERIALS AND EXPERIMENTAL PROCEDURES Stick cadmium (99.95 pct Cd, American Smelting and Refining Co.) contained 140 ppm lead as the major impurity. Reactor-grade uranium (99.9 pct U, National Lead Co.) was most often used in the form of 20-mesh spheres. This form was particularly suitable because it does not oxidize as readily as finer powder. The liquidus lines were determined by chemical analysis of filtered samples of the saturated melts. The liquid sampling technique is described elsewhere.2 Alumina crucibles (Morganite Triangle RR), tantalum stirring rods, tantalum thermocouple protection tubes, Vycor or Pyrex sampling tubes, and grade 60 or 80 porous graphite filters were used. Uranium dissolves in liquid cadmium rather slowly.
In order to achieve saturation of the melts it was necessary to modify the procedure of Ref. 2 by the use of more vigorous stirring and longer holding periods (at least 3 hr) at each sampling temperature. The samples were analyzed for uranium by spectrophotometry (dibenzoyl methane method) or by polarography. The analyses are estimated to be accurate to 2 pct. Thermal analysis was performed on alloys contained in Morganite alumina crucibles in helium atmospheres. Standard techniques were employed; heating and cooling rates were about 1°C per min. For the determination of the peritectic temperature, Cd-10 pct U charges were first held for at least 50 hr at temperatures in the range 435° to 460°C to form substantial amounts of the intermediate phase. For the determination of the effect of cadmium on the α-β transformation temperature of uranium, charges of Cd-25 pct U (-140+100 mesh uranium spheres) were first held near the transformation temperature, with stirring, to promote solution of cadmium in the solid uranium. The holding times and temperatures for these treatments were 18 hr at 680°C for the cooling run and 28 hr at 630°C for the heating run. Alloy specimens for X-ray diffraction and metallographic examination of the intermediate phase were prepared in sealed, helium-filled Vycor or Pyrex tubes. Ingots from solubility runs and thermal analysis experiments also were examined metallographically. Crystals of the intermediate phase were recovered from certain cadmium-rich alloys by selective dissolution of the matrix in 20 pct ammonium nitrate solution at room temperature. Temperatures were measured with calibrated Pt/Pt-10 pct Rh thermocouples to an estimated accuracy of 0.3°C. However, the depression of the freezing point of cadmium at the eutectic is estimated to be accurate to 0.05°C because a special calibration of the thermocouple was made in place in the equipment with pure cadmium just prior to the measurement.
EXPERIMENTAL RESULTS The results of this study were used to construct the cadmium-uranium phase diagram shown in Fig. 1. This diagram is relatively simple; it is characterized by a single intermediate phase, UCd11, which decomposes peritectically, and which forms a eutectic system with cadmium. The solid solubilities in the terminal phases appear to be negligible. An unusual feature of the diagram is the retrograde slope of the liquidus line above the peritectic temperature. The Liquidus Lines. The liquidus lines above and below the peritectic temperature are based on three separate solubility experiments. The data are shown in Fig. 1 and are given in Table I. It is apparent from the figure that the solubility data obtained by the approach to saturation from higher temperatures fall on substantially the same lines as those obtained
Jan 1, 1962
-
Institute of Metals Division - Electron Microscope Study of the Effect of Cold Work on the Subgrain Structure of CopperBy L. Delisle
This work represents the first step of an attempt to test the applicability of the electron microscope to the study of subgrain structures in copper. Observations on annealed and deformed single crystals and polycrystalline samples of copper are described. IN the course of study of the structure of fine tungsten wires and tungsten rods with the electron microscope, well-defined subgrain structures were observed. The size, size distribution, and orientation uniformity of the etch figures varied widely in different samples. Figs. 1 and 2, electron micrographs of a tungsten wire and of a tungsten rod, respectively, are illustrations of the difference in size and size distribution of the etch figures in different samples of the same metal. The observed differences, as pointed out in a previous paper, appeared to be related to the heat and mechanical treatments of the samples. They were also consistent with the results reported in the literature on the mosaic structure of metals. For that reason a program of research was initiated in an effort to obtain more systematic evidence of the possible relation of heat and mechanical treatments to the subgrain structure of metals as observed in the electron microscope. The purpose of this paper is to present observations made on the effect of cold work on the subgrain structure of copper. Procedure Starting Materials: Copper was the metal studied because it can be obtained in a high degree of purity, much information is available in the literature on its properties and its response to cold work and heat treatment, it shows no allotropic change, and it is sufficiently hard to be handled without great difficulty. Two groups of specimens were used: 1—single crystals cast from spectroscopically pure copper and 2—polycrystalline samples of oxygen-free high-conductivity copper.
Single crystals were studied because it was hoped that the elimination of a number of variables, such as grain boundaries, orientation differences, and degree of purity, would simplify the problem and perhaps permit a better understanding of the phenomena that would be observed. The polycrystalline samples were designed to give a general picture of the changes considered. The single crystals were made of copper which analyzed spectroscopically to better than 99.999 pct Cu. They were cast in vacuum, by the Bridgman method, in crucibles made of graphite with a maximum ash content of 0.06 pct. The mold design is shown in Fig. 3. It permitted casting crystals of the size and shape required for the experiments, so that the danger of introducing cold work in the original samples by cutting or other machining would be eliminated. The polycrystalline samples were pieces, 3/4 in. long, cut from a rod of oxygen-free high-conductivity copper, % in. in diameter. A flat surface, 1/4 in. wide, was milled along the rods, polished, and etched. The samples were then annealed in vacuum at 850°C for 1 hr. Polishing and Etching: Work previously done on tungsten, polished mechanically and etched chemically, had shown that: 1—the general appearance of the etch figures of a given sample was not altered by repeated polishings and etchings under similar conditions; 2—variations in the time of etching and the concentration of the etchant changed the definition of the etch figures, but did not alter their general size or orientation distribution within the limits of observation. Further work confirmed the reproducibility of the subgrain structures observed in: 1—single crystals and polycrystalline samples of copper when polishing and etching were repeated under similar conditions, and 2—specimens of tungsten and polycrystalline copper when electrolytic polishing and etching were substituted for mechanical polishing and chemical etching, respectively.
On the strength of these observations, it was felt that, if conditions of polishing and etching were kept constant, changes observed in the subgrain structure of a sample upon deformation and annealing would be attributable to such treatments. For that reason the conditions of polishing and etching were kept as constant as possible. The single crystals were polished electrolytically in a bath of orthophosphoric acid in water, in the ratio of 1000 g of acid of density 1.75 g per cc to 1000 cc of solution, under a potential drop of 1.6 to 1.8 V. Electrolytic polishing was selected to prevent the formation of distorted metal in polishing. The same samples were etched by immersion in a 10 pct aque-
Jan 1, 1954
-
Drilling-Equipment, Methods and Materials - Rheological Measurements on Clay Suspensions and Drilling Fluids at High Temperatures and PressuresBy K. H. Hiller
A rotational viscometer has been designed which permits the measurement of the rheological properties of drilling muds and other non-Newtonian fluids under conditions equivalent to those in a deep borehole (350F, 10,000 psi). The important mechanical features of this instrument are described, and its design criteria are discussed. The flow equations for the novel configuration of the viscometer are derived and the calibration procedures are described. The data, and their interpretation, resulting from measurement of the flow properties and static gel strengths of homoionic montmorillonite suspensions at high temperatures and pressures are presented. Data are also presented for the flow behavior of typical drilling fluids at high temperatures and pressures. The pressure losses in the drill pipe and the annulus depend critically upon the flow parameters of the drilling fluid. This work demonstrates the need to measure these parameters under bottom-hole conditions in order to obtain a reliable estimate of the pressure losses in the mud system. INTRODUCTION The rheological properties of drilling fluids are affected by temperature and pressure, but the extent of these effects on the dynamic flow properties is not well known. Measurements of changes of the flow properties of clay-water drilling muds with temperature have been reported by Srini-Vasan and Gatlin.1 The temperatures reported did not exceed 200F, a limitation imposed by the apparatus used by these authors. The rheological properties of clay suspensions were measured at temperatures up to 100C by Gurdzhinian.2 Neither the nature of the exchange ions in the clay suspensions nor the degree of purity were defined in his work, nor were the measurements extended to currently used drilling fluids.
The lack of systematic measurements of dynamic flow properties at high temperatures and pressures seems the more surprising since during the last decade the importance of the control of the hydraulic properties of drilling fluids has come to be widely recognized. Very good mathematical treatments of the friction losses in drill pipe and annulus have been developed.3,4 These treatments are based on the assumption that drilling fluids behave as Bingham plastic fluids. Quite often this assumption is justified, while in other cases a power-law equation produces a better fit than the Bingham model does. For convenience in applying viscometer data to pressure-drop calculations, the Bingham plastic flow equation is preferable and, therefore, has been applied to the data reported in this paper, although other equations may fit these data more accurately. In a Bingham plastic fluid the relationship between the shearing stress τ and the rate of shear D is given by the following equation: τ = ηD + τ0, where η is the plastic viscosity and τ0 the yield point. If τ0 = 0, the equation for simple Newtonian flow, τ = µD, is obtained. Two empirical constants are required for the description of laminar flow of a Bingham plastic fluid, and calculations of the flow behavior at high temperatures and pressures cannot be better than is permitted by the accuracy with which these constants are known. For this reason a high-pressure, high-temperature rheometer has been designed to measure the plastic viscosity η, the yield point τ0, and the static gel strength S at pressures up to 10,000 psi and temperatures up to 350F. The important features of its design will be described. The results of measurements on homoionic clay slurries will be discussed insofar as they are relevant to an understanding of the general flow behavior of clay-water drilling fluids. The results of measurements on some typical drilling fluids will be presented also, and their practical implications will be briefly discussed.
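The two-constant Bingham fit the excerpt describes (plastic viscosity from the slope, yield point from the intercept of two viscometer readings) can be sketched as follows; the function names and the dial readings are hypothetical illustrations, not data from the paper.

```python
# Minimal sketch of the Bingham plastic model: tau = eta*D + tau0,
# with tau = shear stress, D = shear rate, eta = plastic viscosity,
# tau0 = yield point.  All numbers below are hypothetical.

def bingham_stress(D, eta, tau0):
    """Shear stress of a Bingham plastic in laminar flow.
    With tau0 = 0 this reduces to the Newtonian law tau = eta*D."""
    return eta * D + tau0

def fit_bingham(d1, t1, d2, t2):
    """Recover the two empirical constants from two (shear rate,
    shear stress) viscometer readings."""
    eta = (t2 - t1) / (d2 - d1)     # slope -> plastic viscosity
    tau0 = t1 - eta * d1            # intercept -> yield point
    return eta, tau0

# Hypothetical readings at two shear rates (units left generic):
eta, tau0 = fit_bingham(511.0, 30.0, 1022.0, 45.0)
print(eta, tau0)
```

Because the fitted constants themselves shift with temperature and pressure, the fit must be repeated on data taken at bottom-hole conditions, which is the point of the instrument described next.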
DESCRIPTION OF EQUIPMENT MECHANICAL FEATURES A viscometer designed to measure the plastic viscosity, yield point and gel strength of non-Newtonian fluids must permit the measurement of the shearing stress τ at any given rate of shear D. This is possible only if τ and D are approximately uniform throughout the entire sheared sample. A Couette apparatus is the most convenient method of realizing this condition, as has been pointed out by Grodde. The "high-pressure, high-temperature rheometer" described in this paper is basically a rotational Couette viscometer that is immersed in a cell in which pressure and temperature can be controlled over the range of interest. Fig. 1 shows schematically the important features of the pressure cell and associated equipment. The heart of the instrument is the rotating cup. It is shown more clearly in Fig. 2, which represents the lower one-third of the pressure cell (below the input drive shaft shown in Fig. 1), and it is shown in detail in Fig. 3. For measurements of dynamic flow properties, the rotating cup is driven by a 1/2-hp electric motor, which operates through a Vickers
-
Part VII – July 1968 - Papers - Structures and Migration Kinetics of Alpha:Theta Prime Boundaries in Al-4 Pct Cu: Part I-Interfacial StructuresBy H. I. Aaronson, C. Laird
Although the past results of X-ray experiments indicate that the broad faces of θ' plates are coherent with their matrix, dislocations lying in arrays have frequently been observed at these boundaries by transmission electron microscopy. Critical experiments employing the latter technique have been carried out in order to determine the origin of these dislocations. It is concluded that θ' plates are essentially coherent with the matrix at their broad faces throughout the aging temperature/time envelope studied. Virtually all of the dislocation arrays observed are deduced to have been formed by plastic deformation accompanying transformation. The proportion of dislocations arising from convexity of the plates is shown to be negligible by comparison with that from plastic deformation. At the higher aging temperatures, a[001] dislocations appeared in moderate numbers. These dislocations were traced directly, however, to the ledgewise dissolution of θ' occasioned by the formation nearby of θ crystals. On the other hand, since there is a parametric difference normal to the broad faces of the θ' plates, mismatch dislocations do form at their edges. A previous conclusion that these dislocations have Burgers vectors of type a[001] was confirmed directly. The edges of θ' plates were observed to develop octagonal shapes when growing, but circular shapes during dissolution. THIS paper presents the results of an investigation of the interfacial structures of plates of the transitional phase, θ', formed in an Al-4 pct Cu alloy. In a companion paper, Part II, the effects of these structures upon the migration kinetics of α:θ' boundaries are reported. This work is part of a general program designed to establish the basis of precipitate morphology.
The present authors in Al-Ag,1 and Whitton2 previously in U-C alloys, have used transmission electron microscopy to examine directly the van der Merwe3-6 networks of dislocations anticipated7 to compensate the small amount of lattice misfit normally found at the broad faces of Widmanstatten plates. Since the broad faces of θ' plates are considered to be perfectly coherent with the corresponding habit planes in the α matrix, no dislocations should be present at these faces. Many reports have been published, however, giving evidence to the contrary.10-18 The primary objective of this investigation was therefore to ascertain the nature of these dislocation structures. An attempt to do this is described in the first three sections of this paper. Inspection of the matching of the α and θ' lattices at the orientations of the α:θ' boundary corresponding to the edges of θ' plates raises the possibility that these edges may be made up of rather closely spaced edge-type misfit dislocations oriented so as to be sessile with respect to the lengthening or shortening of the plates. Since this structure should severely inhibit migration of the plate edges (Ref. 7, Part II), a situation not originally anticipated, an experimental determination of the interfacial structure of the edges of θ' plates was clearly in order, and is reported in Section III. Those aspects of the experimental procedure applicable to both Parts I and II are presented in the next section. Specific procedures applicable to individual aspects of each investigation, and also the relevant surveys of the literature, are then individually reported in the appropriate sections. I) GENERAL EXPERIMENTAL PROCEDURE The material used in both parts of these studies was the same as that of a previous investigation: strips of Al-3.93 pct Cu, 0.009 in. thick, prepared as before, solution-annealed at 548°C for 6 hr, and quenched.
Details of subsequent aging, and in some cases deformation treatments, are given in the Experimental Procedure sections of the individual parts of both papers. Specimens of the heat-treated strips were electro-thinned as before and examined in a Philips EM 200 microscope equipped with a goniometer stage. A commercial hot stage, of the grid-heater type and capable of ±30-deg tilt about one axis in the plane of the specimen, was also used for kinetic studies. The usual precaution of calibrating for the additional heat supplied by the electron beam was taken.19 A 16-mm cine camera mounted outside the viewing window was frequently used to record the transformations. Conventional selected-area diffraction and dark-field viewing techniques were used to identify the precipitates in the foils. Normal bright-field images corresponding to two-beam diffracting conditions or dark-field images were employed to characterize the dislocations observed at the interfaces of the precipitates. The application of these techniques to the study of an interphase boundary, and the interpretation of the images,20,21 has been fully described in a previous paper.
Jan 1, 1969
-
Reservoir Engineering - General - Fluid Migration Across Fixed Boundaries in Reservoirs Producing...By B. L. Landrum, J. Simmons, J. M. Pinson, P. B. Crawford
Potentiometric model data have been obtained to estimate the effect of vertical fractures on the areas swept after breakthrough in water flooding and miscible displacement programs such as gas cycling, where the mobility ratio is near one. The data are presented for the case of the five-spot pattern in which the center well is fractured at various lengths and orientations. The data indicate that for 10-acre spacing, fractures extending over 1300 ft in either direction from the fractured well may result in reductions in sweep efficiencies from 72 to approximately 34 per cent. However, the area swept after breakthrough may be quite large and only 10 or 12 per cent less than would be obtained if the reservoir were not fractured. For the specific case when the volume of fluid injected is equivalent to 100 per cent of the pattern volume, the swept area may vary from 80 to 88 per cent, depending on the length of the fracture. The former value is that which occurs when the breakthrough or sweep efficiency was only 34 per cent, and the latter figure of 88 per cent is that which is obtained if the reservoir were unfractured. It is pointed out that although the sweep efficiency may be very low in vertically fractured five-spot patterns, the area swept at low water-oil ratios may be only 5 to 10 per cent less than that achieved if the reservoir were unfractured. INTRODUCTION Since the initiation of commercial reservoir fracturing techniques it has been desirable to determine the effect of fractures on the areas swept after breakthrough. Most water flooding or gas cycling projects are continued for substantial periods after the breakthrough of the injected fluid. Although the sweep efficiency serves as one criterion for rating various flooding patterns, the area swept after breakthrough for various water-oil ratios, or percentage wet gas if cycling, is of perhaps more importance than the sweep efficiency alone.
Sweep efficiency data on the vertically fractured five-spot have been presented.3 Previous work on the line-drive pattern has shown the effect of vertical fractures on the area swept after breakthrough for the case in which the distance between injection and producing wells divided by the distance between adjacent input wells was equivalent to 1.5 (see Ref. 2). The data indicated that for the line-drive pattern it may be desirable to flood or cycle substantially perpendicular to the fractures in order to achieve the greatest recovery for the smallest volume of fluid injected. For this study the center well of a five-spot is assumed to be the fractured well. All fractures were assumed to originate at this well and extend into the reservoir for various distances and orientations. All the fractures are straight and are of large permeability compared to the matrix proper. These data are presented to aid the engineer in estimating fractured five-spot pattern performance. ANALOGY The potentiometric model was used in making this study. The model used was 20 x 20 in. by approximately 1-in. deep. For certain portions of the study one corner of this model was considered to be an injection well and the opposite corner a production well. To simulate vertical fractures a copper sheet was soldered to the wire well and made to conform to the desired length and orientation. In other studies the same model was used except that the four corners of the model might be considered as the corner wells of a five-spot pattern and a fifth well was placed in the center of the model. The well placed in the center of the model was fractured. The total fracture length is L and the well spacing d. The complementary fracture angles will be obvious from Figs. 3 and 4. The data obtained on the potentiometric model assume the pay to be uniform and homogeneous, the mobility ratio to be one, steady-state conditions to exist, and gravity effects to be neglected.
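The potentiometric model is an electrical analog of the steady-state, unit-mobility, homogeneous flow assumed in the excerpt, which is governed by Laplace's equation. A crude numerical stand-in (my sketch under those stated assumptions, not the authors' apparatus) for the unfractured corner-to-corner configuration:

```python
# Steady-state potential in a homogeneous, unit-mobility quadrant,
# obeying Laplace's equation, solved by Jacobi iteration.  One corner
# node is the injector, the opposite corner the producer; the outer
# edges are no-flow boundaries.  Grid size and sweep count are arbitrary.

N = 17                            # grid nodes per side
p = [[0.0] * N for _ in range(N)]

for _ in range(8000):             # Jacobi sweeps
    new = [row[:] for row in p]
    for i in range(N):
        for j in range(N):
            if (i, j) == (0, 0):
                new[i][j] = 1.0   # injection well, fixed potential
            elif (i, j) == (N - 1, N - 1):
                new[i][j] = 0.0   # production well, fixed potential
            else:
                # clamped indices give zero-flux (no-flow) outer edges
                new[i][j] = (p[max(i - 1, 0)][j] + p[min(i + 1, N - 1)][j]
                             + p[i][max(j - 1, 0)] + p[i][min(j + 1, N - 1)]) / 4.0
    p = new

print(round(p[N // 2][N // 2], 3))  # mid-pattern potential; near 0.5 by symmetry
```

A high-conductivity fracture would be imposed, as with the soldered copper sheet, by holding a line of nodes at the injector's potential; streamlines and swept areas then follow from the potential field.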
The permeability of the fractures is very great compared to that of the matrix proper. The potentiometric model has been used widely both in water flooding and gas cycling projects, and may be used for miscible displacement; however, it is believed that the potentiometric model data are more properly applicable to gas cycling than water flooding because the model as-
-
Geology-Its Application and Limitation in the Selection and Evaluation of Placer DepositsBy William H. Breeding
The remarks that follow are based substantially on experience covering 45 years, 80% of which has been in placer work, rather than on a review of available literature. Most commercial placers have been deposited by the action of water. The richer and more-difficult-to-mine placers are those in the headwater areas where gradients are steepest. The most lucrative placers are generally in intermediate areas where volumes are greater, fewer boulders are present, and gradients are from 3% to 1-1/2%. The higher-volume, lower-grade placers are in the lower reaches of river systems where gradients are lower. Where gold-bearing rivers have discharged into the sea, wave action can concentrate values on beaches, past and present. Most of the rich, readily accessible placers were mined by our forefathers. Current opportunities exist: (1) in remote areas where infrastructure has been absent in the past, or development has been prohibited by adverse ownership - political or commercial; (2) in deposits that could not be mined by equipment available to our forefathers; (3) in deposits unidentified by our forefathers; (4) where the price-of-product/cost ratio is substantially better than in earlier years; or (5) a combination of those factors. When I entered the placer business in the late 1930s, and subsequently, the prevailing opinion was that glacial deposits should be avoided as irregular in mineral content and composition, and unrewarding to explore and develop; yet an operator has been mining a fluvio-glacial deposit profitably for the past 17 years. Rich buried placer channels, often called paleo-channels, were worked in the last century, generally by hand methods, and under conditions that would be unacceptable today. Exploration and mining equipment now available make some of these channels attractive targets. Well-known examples are in California and Australia. The formation of a commercial placer requires a source of valuable minerals.
Above primary deposits, there may be eluvial deposits formed by the erosion of gangue minerals and the concentration "in situ" of valuable minerals. Down slope from these deposits are the hillside or colluvial deposits, and below them are the alluvial deposits of redeposited material. Most of the great placer fields of the world are the result of several generations of erosion and deposition. Well-known examples are in California and Colombia. Gold is a very resistant and malleable material, and gold placers may extend for 64 or 80 km (40 or 50 miles) along a river system. Platinum is less malleable, but is very resistant to disintegration. Diamonds are extremely hard, and (especially gem diamonds) may be found over great lengths of a river system. Cassiterite is less resistant to disintegration, and tin placers seldom extend over two miles without resupply from an additional source or sources of mineralization. Tungsten minerals are generally more friable, and within a few hundred yards of the source disintegrate to the point that they are uneconomical to recover. Rutile, ilmenite and zircon placers generally result from the weathering of massive deposits, and may be encountered over extensive areas; most are fine grained and durable. What does a geologist or mining engineer look for in placer exploration? The old adage to look for a mine near an existing mine is still valid. You need a source of valuable mineral. Then you require conditions for concentration, which means a satisfactory gradient and/or other conditions that will permit heavy minerals to settle. Nicely riffled gravel, often called a shingling of the bars, is conducive to placer formation. Coarser gravel is logically associated with coarser gold. Excessive clay and/or high stream velocities in narrow channels can carry gold far downstream and distribute it uncommercially over a large area. When material is extremely fine, in situ weathering and concentration become more important.
Placers frequently occur distant from lode mines, and one must remember that in a larger watershed the exceptional floods that occur once in a hundred or a thousand years can move great quantities of material long distances. The carrying power of water is said to vary with the fifth or sixth power of its velocity. I am not ready to disagree with Waldemar Lindgren and accept that many commercial placers are substantially enriched by the chemical deposition of gold from solutions; however, I have seen crystalline gold in clayey material quite distant from known sources of primary gold that is dif-
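The fifth-or-sixth-power carrying-power rule quoted above has striking numerical consequences for how far exceptional floods can move material; a minimal sketch (the exponent choice and velocity ratio below are illustrative assumptions, not data from the text):

```python
# Carrying power of a stream is said to vary with the 5th or 6th power
# of its velocity. This sketch only illustrates the scaling; the
# velocities used are hypothetical.
def carrying_power_ratio(v_flood, v_normal, exponent=6):
    """Relative transport capacity of flood flow vs. normal flow."""
    return (v_flood / v_normal) ** exponent

# A flood running at 3x normal velocity:
print(carrying_power_ratio(3.0, 1.0))      # 729.0  (6th power)
print(carrying_power_ratio(3.0, 1.0, 5))   # 243.0  (5th power)
```

Even a modest increase in flood velocity thus multiplies transport capacity by orders of magnitude, which is why placers can occur far from their lode sources.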
Jan 1, 1985
-
Institute of Metals Division - Effect of Aluminum on the Low Temperature Properties of Relatively High Purity Ferrite - By H. T. Green, R. M. Brick
True stress-strain data on alloys of pure iron with up to 2.4 pct Al were obtained in the temperature range +100° to -185°C. Aluminum was found to reduce yield and flow stresses of iron at low temperatures but to have little or no effect on ductility. The effects of temperature and composition on strain hardening are discussed. SEVERAL independent studies of the behavior of high purity iron binary alloys at low temperatures are now in progress in attempts to evaluate systematically the variables affecting the low temperature brittleness of ferritic steels. This paper reports the results of one such investigation in which the tensile properties of aluminum and aluminum-plus-silicon ferrites were measured from +100° to -192°C. True stress-natural strain data have been obtained in order to evaluate as many as possible of the parameters which describe the behavior of the materials involved. In comparable studies at the National Physical Laboratory in England, iron and iron alloys of high purity have been produced1 and tested at subatmospheric temperatures.2 True stress-natural strain curves were obtained there also. The purest iron contained 0.0025 pct C and 0.001 pct O and N. Even this, as normalized at 950°C following hot rolling, showed little ductility at -196°C. The grain size was ASTM No. 3, and the room-temperature yield strength was 17,800 psi (which seems too high for pure iron). Some of the NPL irons contained considerably more oxygen and demonstrated intergranular fracture at -196°C. The authors2 carefully differentiated between intergranular fractures associated with excessive oxygen content and transcrystalline cleavage with little ductility encountered at -196°C in the purer material. The cleavage stress was half again as great as that associated with intergranular fracture. Test Material, Preparation, and Procedures Of a number of Fe-Al alloys produced, eight were considered to be sufficiently pure for testing.
Partial chemical analyses (Table I), low observed yield points, and high ductilities indicate these alloys to be comparatively pure for vacuum-melted irons of sizable ingots, 5 lb or more. To produce the binary Fe-Al alloys, electrolytic iron was melted in air, cast into slabs, and rolled to strips 0.010 in. thick. These strips, joined into a continuous ribbon and wound into 2 1/2 in. diameter spools, were subjected for four weeks to a moving atmosphere of purified dry hydrogen in a stainless-steel tube at 1050° to 1150°C. Charges of these spools were melted in beryllia crucibles under good vacuums (1 micron), and aluminum (99.97 pct Al) was added to the melts. Compositions of these alloys are recorded in Table I. The ingots were hot forged and then cold rolled at least 65 pct to 3/8 in. rods which were vacuum annealed to the desired grain size, approximately ASTM No. 4, prior to machining into tensile test bars. All tensile specimens had gage sections 1 in. long, with a fillet of 1.5 in. radius to the shoulder. Gage diameters were 0.250 in., except for a few rods where additional cold work required use of a 0.200 in. gage section. After machining, 0.002 in. was removed from the gage diameter using 240, 400, and 600-grit metallographic papers. The final polish with 600 grit left the fine scratches running in the longitudinal direction. By this means, surface metal strained during machining was removed. A few specimens heat treated after machining were similarly reduced 0.004 in. to remove any material affected chemically by the atmosphere during heat treatments, as is discussed in a later section. Tensile tests of the eight alloys at constant temperatures from +100° to -185°C were performed in apparatus which has been described. The essentials include a double-walled insulated metal vessel which contained the liquid heat-transfer medium surrounding the test specimen.
A constant temperature was maintained by means of a pyrometer which regulated the pressure of dry air driving liquid air through a copper coil. Temperature variation was less than ±2°C during a specific test. For axial straining, two lengths of case-hardened chain, terminating in simple shackles, loaded the specimen through threaded grips. The lower grip bar passed through a hole in the bottom of the test vessel to which it was joined by a thin-walled
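The true stress-natural strain quantities this study reports are standard conversions from engineering stress and strain; a minimal sketch, valid only for uniform (pre-necking) deformation (the numeric inputs are illustrative, not the paper's data):

```python
import math

def true_stress_strain(eng_stress_psi, eng_strain):
    """Convert engineering stress/strain to true stress and natural
    (logarithmic) strain, assuming uniform deformation and constant
    volume, i.e., valid only before necking."""
    true_stress = eng_stress_psi * (1.0 + eng_strain)
    natural_strain = math.log(1.0 + eng_strain)
    return true_stress, natural_strain

# e.g., a 17,800 psi stress at 2 pct engineering strain (illustrative):
s, e = true_stress_strain(17800.0, 0.02)
print(round(s), round(e, 4))   # 18156 0.0198
```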
Jan 1, 1955
-
Iron and Steel Division - Thermal Conductivity Method for Analysis of Hydrogen in Steel (Discussion page 1551) - By J. Chipman, N. J. Grant, B. M. Shields
The vacuum tin-fusion method of analysis for hydrogen, developed by Carney, Chipman, and Grant, has been modified to permit the analysis of the evolved gases for hydrogen by means of a thermal conductivity cell. A properly prepared sample can be analyzed in 10 min with a probable error of ±0.12 ppm. A study of various methods for storage of hydrogen samples shows that samples can be safely held in a dry ice-acetone bath as long as six days. Storage in liquid nitrogen is necessary for samples to be held one week or more. THE vacuum tin-fusion method, as developed by Carney, Chipman and Grant,1 is the only analytical procedure which has shown promise of being fast enough for use in the control of hydrogen during steelmaking. It was felt that further simplification and faster operation could be effected by the use of thermal conductivity measurements for analysis of the gases evolved in the tin-fusion method. The application of conductivity measurements to the tin-fusion method is possible because: 1—the evolved gas is essentially a mixture of hydrogen, nitrogen, and carbon monoxide with a hydrogen content usually over 50 pct, 2—the evolved gas is collected at a relatively low pressure, and 3—the thermal conductivities of CO and N2 are practically identical while that of hydrogen is very much greater. The major part of this research program was devoted to the construction and calibration of a vacuum tin-fusion apparatus which analyzes the evolved gases for hydrogen by means of a thermal conductivity cell. The second phase of the problem was associated with the development of a procedure for storage of samples prior to analysis. With the rapid quenching method for hydrogen sampling, which seems to be the most practical for steel mill use, it is necessary that the samples be stored safely during the interval between sampling and analysis if the hydrogen content of the molten metal is to be maintained in the supersaturated solid samples.
The thermal conductivity bridge has been used for a number of years in the analysis of certain gas mixtures. An elementary discussion of the theory and practice of gas analysis by thermal conductivity measurements is given by Minter.3 A more comprehensive discussion of the theory and of the various measuring circuits is presented by Daynes. A complete knowledge of the theory and properties of the thermal conductivity of gases and gaseous mixtures can be gained by a study of the standard textbooks on the kinetic theory of gases. The existing data on the thermal conductivity of single gases are reviewed by Hawkins, and that for a number of binary gas mixtures by Daynes and Lindsay. The thermal conductivity method may be applied to the determination of the composition of a binary mixture if: 1—the thermal conductivity of the mixture varies monotonically with composition, and 2—the two gases have measurably different thermal conductivities. The greater the difference between the two gases, the greater the sensitivity of the method.10 The method is applicable to the analysis of multicomponent mixtures when all of the gases in the mixture except one have nearly the same thermal conductivity. Fortunately, the mixture of hydrogen, nitrogen, and carbon monoxide evolved by the tin-fusion analysis falls in this latter classification. The thermal conductivities of nitrogen and carbon monoxide are practically equal, and the thermal conductivity of hydrogen is approximately seven times that of the other two. Therefore, the thermal conductivity of a gaseous mixture of hydrogen, nitrogen, and carbon monoxide at known temperature and pressure can be related directly to the percentage of hydrogen in the mixture by suitable calibration. Usually the thermal conductivity of a mixture of gases is measured at atmospheric pressure, where the thermal conductivity is independent of pressure over a wide pressure range.
At very low pressures (below 1 mm Hg), the thermal conductivity of gases varies with the pressure. This phenomenon has been utilized in the Pirani vacuum gage for the measurement of pressures in the range of roughly 10-3 to 1 mm of mercury. Very little has been published concerning the variation of thermal conductivity with pressure at intermediate pressures between 1 mm Hg and 1 atm. However, preliminary measurements indicated that the thermal conductivities did vary with pressure over the range of pressures (up to 10 mm Hg) at which gases are delivered from the vacuum pump. Therefore, the calibration of the thermal conductivity cell had to be planned to include the effects of both gas composition and pressure. Such a calibration chart is shown in Fig. 4. Most industrial applications of the thermal conductivity method of gas analysis have used a compensated Wheatstone bridge circuit containing two
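The reasoning above, that N2 and CO conduct heat nearly identically while hydrogen conducts roughly seven times better, is what lets a single conductivity reading stand in for hydrogen content. A minimal sketch of that idea, using an assumed linear mixing rule in place of the empirical composition-and-pressure calibration the paper actually describes:

```python
# Relative thermal conductivities (illustrative values consistent with
# the text: N2 and CO practically equal, H2 about seven times greater).
K_N2_CO = 1.0
K_H2 = 7.0

def h2_fraction(k_mixture):
    """Invert an assumed linear mixing rule
    k = x * K_H2 + (1 - x) * K_N2_CO  for the H2 mole fraction x.
    A real cell is calibrated empirically against both composition
    and pressure; this linear model is a first approximation only."""
    return (k_mixture - K_N2_CO) / (K_H2 - K_N2_CO)

print(h2_fraction(4.0))   # 0.5, i.e., 50 pct hydrogen
```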
Jan 1, 1954
-
Mining - Portable Crusher for Open Pit and Quarry Operations (MINING ENGINEERING, 1960, vol. 12, no. 12, p. 1271) - By B. J. Kochanowsky
The idea of a portable crusher is not new. Many such crushers are available but they are small and designed for construction work. For many years the author has suggested, both in this country and in Europe, the building of larger portable crushers intended expressly for use in quarries or open pits. Although not applicable under all conditions, there are mining operations where a mobile crusher arrangement could be more profitable than the facilities now used. The primary use of a portable crusher, i.e., a crusher mounted on crawlers or tires, in the rock and mining industries is to reduce costs by permitting the substitution of conveyor belt haulage for truck or track haulage. The usual sequence of operations in surface mining is drilling, blasting, loading, haulage, and crushing. Haulage is normally accomplished by truck or track-mounted cars, the latter method being used for the longer distances. However, by using a portable crusher in the pit, the sequence of operations would be changed so that the crushing stage would occur before haulage (Fig. 1). Such a sequence would permit the use of conveyors to replace the more expensive truck or track haulage methods. Since most quarry and open pit operations normally require a crushing stage, the only additional costs incurred will be due to the investment required to purchase or construct a mobile arrangement for a crusher. But this factor has to be weighed against the advantages to be gained by conveyor haulage. As shown in Fig. 2, transportation of material by belt conveyor over short distances is less expensive than by truck. The inclination of the belt has no effect on belt speed; consequently, the hourly tonnage moved remains the same. Conversely, the output rate of trucks as expressed in tons or ton-miles per shift decreases proportionally to the haulage speed, which is considerably slowed by the steepness of the road (Fig. 3, left). 
Although maximum possible grades and maximum economic grades of haulage are greater for a belt than for a truck (over the same total lift), the longer haulage distances favor the use of trucks. Although power consumption for hauling on a grade increases for both conveyances, the rate of power consumption increases faster for trucks than for conveyor belts (Fig. 3, right). Since the output rate and related fixed costs are affected by the travel speed, total haulage costs with trucks would increase with the grade more rapidly than the similar costs of conveyor belts (Fig. 4). Travel distance, road grade, speed, size and number of pieces of equipment, efficiency of operation, and many other factors affect such haulage costs. In general terms it can be said that the shorter the distance, the steeper the grade, and the greater the output, the more advantageous the belt becomes in comparison to truck or track haulage. In addition to potential cost savings in haulage procedures, a portable crusher would allow better utilization and performance of shovels. Loading operations would not be interrupted as often by the necessity of waiting for cars or trucks. Unfortunately, the application of belts in open pits for haulage from bench sites is generally not practical under existing conditions because a belt fed directly by a mechanical shovel can be torn, damaged, or worn out quickly by the large rock fragments falling on it during loading. However, by using a mobile crusher this situation can be avoided. As shown in Fig. 1 (b), the shovel feeds rock into the crusher located behind it. The crushed material is initially transported by an extensible and/or movable belt, thence by a longer stationary conveyor to the plant where the material is subjected to further treatment by secondary crushing, screening, etc. 
The first-mentioned conveyor, needed to bridge the distance between the shovel and the stationary conveyor, is necessarily variable in length owing to the continuous movement of the shovel and the desire to keep the stationary belt at a safe distance from the bench during blasting operations. The remarkable features of mobile crusher operations are the extraordinarily high output per man-shift, the low maintenance and power requirements for haulage, and the increased output of the loading shovel. A cement quarry which has been using a portable crusher and conveyor since 1956 requires only three men to operate the shovel and crusher and to transport the crushed rock by belt from the quarry face to the screening plant. If truck haulage
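The cost argument above (belt cost roughly insensitive to grade, truck cost rising with grade because travel speed and output per shift fall) can be sketched numerically; all coefficients below are illustrative assumptions, not figures from the paper:

```python
# Schematic haulage cost per ton. The belt moves the same hourly
# tonnage regardless of grade; truck output falls as the road
# steepens, so its unit cost rises with grade. Coefficients are
# hypothetical, chosen only to show the crossover behavior.
def belt_cost_per_ton(distance_mi, base=0.10):
    return base * distance_mi

def truck_cost_per_ton(distance_mi, grade_pct, base=0.06, grade_penalty=0.02):
    return (base + grade_penalty * grade_pct) * distance_mi

d = 1.0  # a one-mile haul
for grade in (0, 4, 8):
    print(grade, round(belt_cost_per_ton(d), 3),
          round(truck_cost_per_ton(d, grade), 3))
```

On these assumed numbers the truck is cheaper on the flat but loses to the belt once the grade exceeds about 2 pct, which mirrors the paper's qualitative conclusion that steep, short, high-output hauls favor the belt.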
Jan 1, 1961
-
Part I – January 1968 - Papers - On the Constitution of the Pseudobinary Section Lead Telluride-Iron - By R. W. Stormont, F. Wald
The phase diagram of the pseudobinary section PbTe-Fe was determined. It was found to contain a monotectic and a eutectic reaction, the latter taking place at 14 at. pct Fe and 875° ± 5°C. The solid solubility of iron in PbTe was found to be 0.3 at. pct by electron microprobe analysis. No solubility of PbTe was detected in iron. Slight deviations from true pseudobinary behavior were found to occur in the range of about 5 to 10 at. pct Fe. In the course of a general investigation of reactions of various metals with lead and tin telluride,1 the lead telluride-iron system was reinvestigated. It had been established much earlier that iron does not chemically react with lead telluride but forms a eutectic with a melting point of 879°C. The eutectic composition and other related information have never been reported, but for a number of years iron has been in general use for contacting lead telluride and lead telluride alloys for thermoelectric applications. It seemed therefore desirable to clarify the exact constitution of the system to furnish a base for the long-term evaluation of bonds made between lead telluride and iron either by pressure contacting or by brazing methods. I) EXPERIMENTAL METHODS Lead telluride-iron alloys were prepared in 10-g charges, using premelted lead telluride. This material was prepared from high-purity, semiconductor-grade lead and tellurium obtained from the American Smelting and Refining Co. and described as 99.999 pct pure. The iron used was "Armco" iron; the major impurities found here were 0.02 pct C, 0.018 pct Si, and 0.015 pct Cr. All remaining impurities were less than 0.01 pct, the total of all impurities not exceeding 0.15 pct. Charges were prepared in closed quartz ampoules which were evacuated and in some cases backfilled with high-purity argon to retard excessive lead telluride evaporation and deposition in slightly cooler parts of the ampoule.
For high iron concentrations, this can lead to total separation of the constituents, since the vapor pressure and the sublimation rate of PbTe are quite high.4 Nevertheless, since the ampoules are closed, no change in overall composition was expected and the nominal composition of all alloys was assumed to be retained. X-ray diffraction analysis, thermal analysis, and microsections were used in the evaluation of the alloys. The nature of the system was such that X-ray diffraction was not particularly helpful. It merely served to establish that at all concentrations PbTe and α iron were in equilibrium at room temperature. Thermal analysis was carried out by taking direct temperature vs time curves on a Sargent recorder, where a width of 10 in. was kept at 1 or 0.5 mv by use of an automatic bucking voltage network. Quartz ampoules with minimized dead space, coated with boron nitride and fitted with a thermocouple reentrant, were used as containers for the charge. At high temperatures and over long periods of time, boron nitride reacts with iron. For the thermal analysis runs, however, this was not significant. More significant was the fact that the vapor pressure of PbTe at some of the measuring temperatures apparently exceeded 1 atm quite considerably. This, in some cases, caused the slightly softened quartz tubes to blow out if great care was not taken to contain them and minimize the times and temperatures used. Pure nickel tubes were used as outer containers, which also served to avoid temperature gradients in the quartz ampoule. Nevertheless, the experimental difficulties at high temperatures were severe, and the monotectic temperature could therefore not be determined accurately. In general, the accuracy reached by the thermal analysis setup in this case is ±4°C, as determined with gold, silver, and tin under the conditions of analysis here. Inherently, the apparatus is capable of reaching accuracies better than ±1°C.
Difficulties were also encountered in microsectioning. They were related to polishing, since it is rather difficult to avoid pulling the iron out of the weak and brittle lead telluride matrix. It proved best to follow a procedure where, after grinding to 600 grit on carborundum paper, a polish with 6 μ diamond was used on nylon cloth. Finally, #3 "Buehler" alumina and an automatic polisher were used for about 5 min only, to avoid relief. The best etching results were achieved with
Jan 1, 1969
-
Natural Gas Technology - Natural Gas Hydrates at Pressures to 10,000 psia - By H. O. McLeod, J. M. Campbell
This paper presents the results of the data obtained in the first stage of a long-range study at high pressures of the system vapor-hydrate-water-rich liquid-hydrocarbon-rich liquid. The data presented are for the three-phase systems in which no hydrocarbon liquid exists. Tests were performed on 10 gases at pressures from 1,000 to 10,000 psia. One of these was substantially pure methane, and the remainder were binary mixtures of methane with ethane, propane, iso-butane, and normal butane. Several conclusions may be drawn from the data. 1. Contrary to previous extrapolations, the hydrocarbon mixtures tested form straight lines in the range of 6,000 to 10,000 psia which are parallel to the curve for pure methane when the log of pressure is plotted vs hydrate formation temperature. 2. The hydrate formation temperature may be predicted accurately at pressures from 6,000 to 10,000 psia by using a modified form of the Clapeyron equation. The total hydrate curve may be predicted by using the vapor-solid equilibrium constants of Carson and Katz to 4,000 psia and joining the two segments with a smooth continuous curve between 4,000 and 6,000 psia. 3. The use of gas specific gravity as a parameter in hydrate correlations is unsatisfactory at elevated pressures. 4. The hydrate crystal lattice is pressure sensitive at elevated pressures. INTRODUCTION Prior to 1950 many studies had been made of the hydrate-forming conditions for typical natural gases to pressures of 4,000 psia. Most of these attempted to correlate the log of system pressure vs hydrate formation temperature, with gas specific gravity as a parameter. One of the more promising correlations was made by Katz et al., which utilized vapor-solid equilibrium constants. The only published data above 4,000 psia are those of Kobayashi and Katz7 for pure methane to a pressure of 11,240 psia.
In the intervening years, most published charts for the high-pressure range have represented nothing more than extrapolations of the low-pressure data, with the methane line serving as a general guide. The reliability of these charts has become increasingly doubtful (and critical) in our present technology as we handle more high-pressure systems. The portion of our high-pressure hydrate research program reported here was designed to: (1) investigate the reliability of existing charts; (2) obtain actual data on gas mixtures to 10,000 psia; and (3) develop a simple hydrate correlation that was more reliable than those which simply used specific gravity as a parameter. Binary mixtures of methane and ethane, propane, normal butane, or iso-butane were injected into a high-pressure visual cell containing an excess of distilled water. Hydrates were formed and then melted to observe the decomposition temperature of the hydrates at pressures from 1,000 to 10,000 psia. EQUIPMENT The equipment consisted of a Jerguson 10,000-lb high-pressure visual cell, a 10,000-lb high-pressure blind cell, and a Ruska 25,000-lb pressure mercury pump. The visual cell was placed in a constant-temperature water bath controlled by a refrigeration unit and an electric filament heater. A Beckman GC-2 gas chromatograph was used in analyzing the gas mixtures after each run was completed. EXPERIMENTAL PROCEDURE After evacuating the gas system, the heavier hydrocarbon was injected into the high-pressure mixing cell to that pressure necessary to give the desired composition. This cell then was pressured to 1,100 to 1,200 psia by methane from a high-pressure cylinder. The mixing cell holding the gas contained a steel flapper plate and was shaken intermittently over a period of 15 minutes. After mixing, the valve to the high-pressure visual cell containing excess distilled water was opened, and the gas mixture was allowed to flow into the cell.
The temperature in the water bath was lowered 10° to 15°F below the estimated hydrate decomposition point. As a first check, the temperature was increased at a rate of 1°F every six minutes to find the approximate point of decomposition. It was again lowered 1.5° to 5°F to form hydrates. The temperature was raised to within 1° of the estimated decomposition point and then increased 0.2°F every 10 to 15 minutes until the hydrates decomposed. This procedure was repeated at various pressures to obtain 7 to 13 points for each mixture between 1,000 and 10,000 psia. After completion of the hydrate decomposition tests, the gas mixture composition was analyzed with a calibrated gas chromatograph. These gas analyses have an estimated error of ±0.1 per cent.
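Conclusion 1 of this abstract, that log pressure is linear in hydrate-formation temperature from 6,000 to 10,000 psia, suggests a simple interpolation scheme for that range; the two anchor points below are illustrative assumptions, not the paper's data:

```python
import math

# If log10(P) vs hydrate-formation temperature T is a straight line
# over 6,000-10,000 psia, two measured points fix the whole segment.
# The (pressure, temperature) anchors here are hypothetical.
def hydrate_temp(p_psia, p1=6000.0, t1=70.0, p2=10000.0, t2=78.0):
    """Linearly interpolate T against log10(P) between two known points."""
    slope = (t2 - t1) / (math.log10(p2) - math.log10(p1))
    return t1 + slope * (math.log10(p_psia) - math.log10(p1))

print(round(hydrate_temp(8000.0), 1))   # 74.5 (on these assumed anchors)
```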
-
Extractive Metallurgy Division - Free Energy of Formation of CdSb - By Richard J. Borg
The vapor pressure of Cd in equilibrium with CdSb in the presence of excess Sb has been measured using the Knudsen effusion method over the temperature range 276° to 379°C. The free energy of formation of CdSb is given by ΔF° = -1.58 + 1.53 x 10-4 T, kcal per mole. The enthalpy and entropy are obtained from the temperature coefficient of the free energy. CADMIUM and antimony have almost imperceptible mutual solid solubility but form a single stable intermediate phase, CdSb. This phase, according to Hansen,1 extends from about 49.5 at. pct to 50 at. pct Cd at 300°C and has the orthorhombic structure. The free energy of formation of CdSb can be calculated from the vapor pressure of Cd for compositions which contain less than 49 at. pct Cd. The appropriate reaction and formulae are given by Eqs. [1] and [2]: CdSb(s) → Cd(g) + Sb(s) [1] Since Sb is in its standard state, ΔF° = N_Cd ΔF_Cd = N_Cd RT ln a_Cd = N_Cd RT ln(P/P0) [2] In Eq. [2], P is the vapor pressure of Cd in equilibrium with the alloy, and P0 is the vapor pressure in equilibrium with pure solid Cd. It is implicit in this calculation that the free energy changes only slightly within the narrow limits of the single-phase field. Thus, the value obtained from the antimony-rich boundary is truly representative of the stoichiometric compound. The results reported herein are obtained from a mixture near the eutectic composition, i.e., 59 at. pct Sb. Only two previous investigations of the free energy of formation of CdSb have been made. Both relied upon the electromotive force method, and measurements were made over relatively narrow temperature ranges, which strongly influences the reliability of the values of ΔH and ΔS. EXPERIMENTAL The eutectic composition is prepared by fusing reagent-grade Cd and Sb by induction heating in vacuo with the starting materials held in a graphite crucible having a threaded lid.
The material obtained from the initial melt is pulverized, sealed under high vacuum in a pyrex capsule, and annealed at 420°C for two weeks. X-ray analysis gives the following lattice parameters: a = 6.436 Å, b = 8.230 Å, and c = 8.498 Å, using Cu Kα radiation with λ = 1.54056 Å. These values are in fair agreement with the results previously reported by Almin:4 i.e., a = 6.471 Å, b = 8.253 Å, and c = 8.526 Å. Vapor pressures are measured using an apparatus which has been described elsewhere, however, with a single important modification. Knudsen effusion cells are made of pyrex with knife-edged orifices made by grinding the convex surface of the lid on #600 emery paper. Photographs taken at known magnifications using a Leitz metallograph enable the determination of the orifice area. Numerous calibration measurements of the vapor pressure of pure Cd give close agreement with values previously reported,5 thus indicating that no significant error can be ascribed to the substitution of glass cells for the metal cells used in previous work. Because the vapor pressure of Cd is reliably established and because it is difficult to obtain Clausing factors for the glass cells, the final values used for the orifice areas are calculated from the calibration measurements of the vapor pressure of pure Cd. Effusion runs are started in an atmosphere of purified helium which is quickly evacuated as soon as the cell attains thermal equilibrium. Less than one minute is necessary to obtain high vacuum after evacuation begins, and the temperature seldom varies by more than 0.5°C from the value obtained prior to pumping out the helium. RESULTS The results of this investigation along with other pertinent data are tabulated in Table I. Fig. 2 is the familiar graph of log P against 1/T, °K. A least-squares analysis of the data presented in Table I yields the following equation: log10 P = 8.790 - 6472 x T^-1 [3] The deviations of the individual measurements from the values calculated with Eq.
[3] are given in column six of Table I; the average deviation is 4.0% of the calculated value. Although the partial molal properties change significantly with composition within the single-phase region, the integral thermodynamic value should remain relatively constant. Hence the results of the following calculations, which use the data obtained for the eutectic composition, are probably representative of the equiatomic compound. Eq. [4] describes the vapor pressure of pure Cd as a function of temperature and may be combined with Eq. [3] to
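The two fitted expressions quoted in this abstract, the least-squares vapor-pressure line and the linear free-energy relation, can be evaluated directly; a minimal sketch using the coefficients as printed in the text (the sample temperature is an arbitrary point inside the stated 276° to 379°C range):

```python
# Coefficients are taken verbatim from the abstract's printed fits;
# no units beyond those stated there (P in the paper's pressure units,
# free energy in kcal per mole) are assumed.
def log10_p_cd(T_kelvin):
    """Least-squares fit: log10 P = 8.790 - 6472 / T."""
    return 8.790 - 6472.0 / T_kelvin

def delta_f_formation(T_kelvin):
    """Free energy of formation of CdSb, kcal per mole,
    from the abstract: dF = -1.58 + 1.53e-4 * T."""
    return -1.58 + 1.53e-4 * T_kelvin

T = 600.0  # about 327 C, inside the 276-379 C measurement range
print(round(log10_p_cd(T), 3), round(delta_f_formation(T), 3))  # -1.997 -1.488
```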
Jan 1, 1962