-
Technical Notes - Isothermal Austenite Grain Growth - By M. J. Sinnott, H. B. Probst
AN extensive survey of the factors which affect austenite grain growth has already been made.1 These factors are temperature, time at temperature, rate of heating, initial grain size, hot-working, alloy content, and rate of cooling from the liquidus-solidus temperature. In the present work, a vacuum-melted electrolytic iron was used and the variables studied were temperature, time at temperature, and prior ferrite grain size. Other factors were maintained constant. The iron used in this study was vacuum-melted electrolytic iron with a nominal impurity content of 0.07 wt pct. It was supplied as a ½ in. round cold-drawn bar. This iron was tested in three conditions: as-received, annealed 6 hr at 1200°F, and annealed 6 hr at 1600°F. Samples were ? in. disks cut from the bar. The prior anneals were carried out in vacuum and the isothermal treatments were carried out in vacuum-sealed Vycor tubing. The thermal etch technique was employed to determine the austenite grain size. Prior to sealing the test specimens, one surface of the sample was polished metallographically. This surface, after heating, was examined to determine the austenite grain size, since the austenite boundaries are revealed by thermal etching. This is essentially the only technique available for measuring the austenite grain size of low carbon steels or pure irons without altering the composition. It has been shown to yield results that are in agreement with other methods used for determining austenite grain sizes.2 The specimen size was quite large compared to the grain size measured, so inhibition of growth due to size effects is probably negligible. After vacuum sealing, each sample was placed into a furnace at temperature and at the completion of the run was quenched into a mercury bath. The growth temperatures used were 1700°, 1800°, 1900°, and 2000°F, controlled to ±10°F. Growth times were varied from 10 to 240 hr.
The long times were used in order to eliminate the nucleation and growth effects occurring during the initial transformation. Time was measured from the introduction of the capsule into the hot furnace to the time of quench. Grain-size measurements were made with a grain-size eyepiece on a microscope. By determining the number of grains per square millimeter at X100 and taking the square root of the reciprocal of this number, the average linear dimension of the grains was determined. Figs. 1 and 2 are plots of these data as a function of time and temperature for the various conditions investigated. The variation of D, the linear dimension of the grains, was assumed to follow the equation3 D = At^n. The curves of Fig. 1 were obtained from the data by the method of least squares. Fig. 1 is for the growth of the as-received stock and Fig. 2 is for growth after the prior treatments. Differentiating the foregoing equation gives an expression for the rate of growth, dD/dt = G = nAt^(n-1) = nD/t. Both D and G as functions of t are given in Table I. It should be noted that G is a function of time; the growth rate is rapid at early stages and decreases with increasing time. Since increasing temperature increases the growth rate, it has been common practice to use the empirical relationship G = G0 e^(-Q/RT) to relate temperature to growth rate. The growth rate customarily has been taken at constant values of D, on the basis that the rate of growth is related to the boundary surface tension and this is measured by the curvature of the boundary. At constant D values, the growth rate is a function of time and temperature. The growth rate can, however, be related to temperature at constant time, and this has the advantage that under these conditions the growth rate is a function only of temperature.
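In modern terms, the log-log least-squares fit and growth-rate evaluation described above can be sketched as follows. The data values here are made up for illustration; the paper's actual measurements appear in Figs. 1 and 2 and Table I.

```python
import numpy as np

# Hypothetical (time, grain size) data following D = A*t^n with A = 2.0, n = 0.4
t = np.array([10.0, 30.0, 60.0, 120.0, 240.0])   # hours
D = 2.0 * t ** 0.4                                # average linear grain dimension

# Least-squares fit of log D = log A + n log t, as used to obtain the curves of Fig. 1
n, logA = np.polyfit(np.log(t), np.log(D), 1)
A = np.exp(logA)

# Growth rate G = dD/dt = n*A*t^(n-1) = n*D/t, evaluated at each measured time
G = n * D / t
```

With exact synthetic data the fit recovers the assumed constants; with scattered measurements it yields the best-fit n and A in the least-squares sense.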
Obviously the Q values, activation energies, obtained for each assumption will not be the same, and the question of which is the more correct is a moot one, since the assumed exponential relationship in either case has no particular theoretical significance. By plotting G, at constant grain size, vs 1/T, the activation energy over the temperature range of 1800° to 2000°F is found to vary from 30,000 cal per mol at the smaller grain sizes to 50,000 cal per mol at the larger grain sizes. The 1700°F data do not correlate with the data at higher temperatures. The activation energies for the 1200° and 1600°F prior-annealed materials were calculated as 50,000 and 62,000 cal per mol, respectively, using the reciprocal time to a given grain size as a measure of the growth rate. Plotting G, at constant times, vs 1/T yields an activation energy of 12,300 cal per mol for the tem-
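The Arrhenius extraction of Q from the plot of G vs 1/T amounts to taking the slope of ln G against reciprocal absolute temperature. The sketch below uses invented growth rates consistent with G = G0 e^(-Q/RT); the temperatures are the paper's, but G0 and the data are assumptions for illustration.

```python
import numpy as np

R = 1.987  # gas constant, cal/(mol*K)

# Hypothetical growth rates at constant grain size, following G = G0*exp(-Q/RT)
T_F = np.array([1800.0, 1900.0, 2000.0])      # growth temperatures, deg F
T_K = (T_F - 32.0) * 5.0 / 9.0 + 273.15       # convert to kelvin
Q_true = 30000.0                               # assumed activation energy, cal/mol
G = 1.0e6 * np.exp(-Q_true / (R * T_K))        # synthetic growth rates

# Slope of ln G vs 1/T is -Q/R
slope, _ = np.polyfit(1.0 / T_K, np.log(G), 1)
Q = -slope * R                                 # recovered activation energy
```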
Jan 1, 1956
-
Minerals Beneficiation - Particle Size and Flotation Rate of Quartz - Discussion - By T. M. Morris, W. E. Horst
W. E. Horst—In regard to the flotation rate being described as "first order" for flotation of quartz particles below 65 µ in size (or any size studied in this work) in this paper, it appears that the authors' conception of rate equations is not in agreement with the cited references. A first order rate equation has as one of its forms the following: ln[a/(a−x)] = kt, where a = initial concentration, a−x = concentration at time t, t = time, and k = constant. The constant, k, has the dimension of reciprocal time, which is similar to the specific flotation rate, Q, described by Eq. 2 in the authors' article, as has been previously discussed by Schumann (Ref. 1 of original article). The plotted data presented in Fig. 4 of the article utilize the specific flotation rate, Q (min−1); however, there are not adequate data given to indicate the order of the rate equation which describes the flotation behavior of the quartz system studied. Results from the experimental work indicate that the relationship between rate of flotation (grams per minute) and cell concentration (provided the percent solids in the flotation cell is less than 5.2 pct and the particle size is less than 65 µ) is described by an equation of the first order (R = kc^n, n being equal to 1 in this size range) and the use of the first order rate equation does not apply. Similarly the relationship for other particle size ranges studied is expressed by equations of the second or third order depending on the magnitude of n. T. M. Morris—The authors are to be commended for the experiments which they performed. As they state in their discussion, the concentration of collector ion in solution did change with change in concentration of solids in the flotation cell. Since, for a given size of particle, flotation rate increases with concentration of collector until a maximum is reached, the effect of concentration of particles in their experiments was to vary the concentration of collector ions.
A collector concentration which insures maximum supporting angle for all particles eliminates the unequal effect of collector concentration on various sized particles, and the effect of size of particles and concentration of particles upon flotation rate could then be more clearly assessed. I believe that if the authors had increased the concentration of collector to an amount sufficient to attain a maximum supporting angle for all particles, they would find that the specific flotation rate of particles coarser than 65 µ would be constant with change in the concentration of solids in the flotation cell, and that a first order rate would apply to the +65 as well as to the −65 µ sizes. It might also be discovered when this change in collector concentration was made that the maximum specific rate constant would be shifted toward a coarser fraction than when starvation quantities of collector are used, since this practice favors the fine particles and penalizes the coarse particles. P. L. de Bruyn and H. J. Modi (authors' reply)—The authors wish to thank Professor Morris for his kind remarks and for mentioning the influence of equilibrium collector concentration on flotation rate. With a collector concentration sufficient to insure maximum supporting angle for all particles, a first order rate equation may indeed be found to be generally applicable irrespective of size. Such a concentration would, however, lead to 100 pct recovery of the fine particles and consequently defeat the essential objective of the investigation: to derive the maximum information on flotation kinetics. To establish absolutely the validity of any single rate equation for a given size range, the ideal method would be to work with a feed consisting solely of particles of that size range. Use of such a closely sized feed would also eliminate the possibility of the interfering effect of different sizes upon one another.
The authors do not believe that increasing the collector concentration would shift the maximum specific flotation rate (Q) towards a coarser fraction. Experimentation showed Q to be independent of solids concentration for all particles up to 65 µ in size, whereas the maximum value of Q was obtained in the range 37 to 10 µ. Professor Morris contends that the addition of starvation quantities of collector favors fine particles at the expense of coarse particles, but the reason for this is not entirely clear to the authors. The comments by Mr. W. E. Horst are concerned only with the concept of the term "first order rate equation." According to the usage of this term in chemical kinetics, time is an important variable, as is shown in the equation quoted by Mr. Horst. All the experimental results reported by the authors were obtained under steady-state continuous operation, when the rate of flotation is independent of time. To be consistent with the common usage of the "first order rate equation," it would be more satisfactory to state that under certain conditions the experimental results show that the relation between flotation rate and pulp density is an equation of the first order.
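The integrated first-order form quoted by Mr. Horst, ln[a/(a−x)] = kt, can be illustrated numerically. The concentrations and rate constant below are invented for the sketch; the point is only that k carries dimensions of reciprocal time, like the specific flotation rate Q.

```python
import math

def first_order_k(a, x, t):
    """Rate constant from the integrated first-order form ln(a/(a-x)) = k*t."""
    return math.log(a / (a - x)) / t

# Hypothetical batch data: initial concentration a, amount removed x after t minutes
a, k_true = 100.0, 0.25                 # k in min^-1 (reciprocal time, like Q)
t = 4.0
x = a * (1.0 - math.exp(-k_true * t))   # amount floated off after t minutes

k = first_order_k(a, x, t)              # recovers the assumed rate constant
```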
Jan 1, 1957
-
Geophysics - Uses and Limitations of the Airborne Magnetic Gradiometer - By Milton Glicken
THE airborne geophysicist is a busy man these days. In his plane he may have the airborne magnetometer, the airborne scintillation counter, and the airborne electromagnetic surveying system. Each of these is an independent tool, but all require additional auxiliary equipment for locating the aircraft in space: recording altimeters and Shoran or aerial cameras. Now there is still another piece of equipment, the airborne magnetic gradiometer, an accessory to the magnetometer. To understand its uses, consider the function of the magnetometer itself. Aside from detecting magnetic ore, the airborne magnetometer finds greatest use in spotting intrusions of igneous material. Where there is enough contrast in magnetic susceptibility between igneous rock and adjacent formations, it outlines the intrusion. Certain minerals also influence the magnetometer directly, but with the exception of magnetite and possibly one or two others, their effect is weak and can be detected only when there is sufficient ore and the magnetometer flight passes very close to it. An igneous intrusion of infinite depth with vertical sides is represented on a magnetometer record by an anomaly, as in Fig. 1. Amplitude of the high depends on the susceptibility contrast of the igneous rock. Generally speaking, the edge of the intrusion lies below the point of inflection of the curve, and this point, where the curvature changes from positive to negative on the magnetometer profile, would be near A in Fig. 1, with a counterpart, of course, on the other side. Location of the contact is one of the principal objects of the survey, but finding the precise point is not always easy, as inspection of the curve near A will show. Mineralization is often found at the contact zones, as at B. Magnetic effects, if detected, may be small, as in B', and when superimposed on the anomaly due to the intrusion they are very difficult to discern and analyze.
Furthermore, if these small fluctuations are to be perceived by the magnetometer, the vertical scale should be large. This increases the slopes of the anomaly and makes detection of small deviations and inflection points even more difficult. The airborne magnetic gradiometer was designed to help overcome these difficulties. What it presents is the first derivative of the magnetometer record with respect to time, that is to say, the slope at any point. Fig. 2 represents an actual magnetometer record (solid line) with the corresponding gradiometer record (dashed line) superimposed. Both records read from right to left. Vertical lines on the original magnetometer record are automatic steps designed to keep the pen from going off scale. The slope of any curve is greatest at the point of inflection, or point where the curvature changes sign, and this point is a maximum (or minimum) on the gradiometer. The chief advantage of the gradiometer is that maxima or minima are much easier to see and to locate precisely; hence an accurate location for the point of inflection can easily be found. Note that points C and D are more sharply defined than C' and D'. Similarly the small fluctuations of the original record, so important to the interpreter, are far more clearly shown at E, F, and G than on the original record at E', F', and G'. Though not necessarily highs and lows on the gradiometer, they show up clearly what would take a painstaking analysis to detect on the original magnetometer record. Will the gradiometer have a particular configuration which indicates an orebody? Not necessarily. The total intensity curve, or original magnetometer record, can display an orebody in various ways, depending on dimensions, orientation, latitude, and composition, as well as on direction, flight height, and instrumental sensitivity of the traverse. Where the total intensity can take on so many different shapes, the gradiometer must vary too.
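The principle that the inflection point of the total-intensity profile appears as an extremum of the gradiometer record can be demonstrated with a simple numerical sketch. The arctan edge profile, amplitude, and depth parameter below are assumptions chosen only to mimic a contact anomaly like that of Fig. 1.

```python
import numpy as np

# Model total-intensity profile over a vertical contact: an arctan edge anomaly
# (illustrative only; amplitude and width are arbitrary)
x = np.linspace(-500.0, 500.0, 1001)   # distance along flight line, m
h = 150.0                               # effective depth parameter, m
T = 100.0 * np.arctan(x / h)            # total intensity, gammas

# The gradiometer record is the first derivative of the profile
dT = np.gradient(T, x)

# The inflection point of T (the contact, near point A) is an extremum of dT,
# which is far easier to pick than the inflection on T itself
contact = x[np.argmax(dT)]
```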
It is generally recognized that interpretation of total intensity magnetometer records requires an expert analysis; the gradiometer can be of considerable assistance to the expert but it does not replace him. Mechanism of the gradiometer is simple. A Leeds & Northrup recorder in the aircraft records the magnetic gradient simultaneously with the total intensity, which is on another recorder. Fiducial marks are put on both records simultaneously and the speed of the paper through the recorders is kept the same on both. This makes it possible to place one record over the other for direct comparison. In the laboratory the flights are positioned on a map. Maximum and minimum points on the gradiometer, which can then be posted on the map at their proper locations, may be expected to fall along a trend crossing the direction of flight. Trends should indicate the edge of an intrusion, or some other important features, and when superimposed on the total intensity contour map help greatly to locate the points of inflection, or line of zero curvature.
Jan 1, 1956
-
Reservoir Performance - Field Studies - Reservoir Performance of a High Relief Pool - By E. P. Burtchaell
A method is presented for evaluating the effect of gravity drive upon the reservoir performance of a high relief pool. Conventional forms of reservoir analysis do not consider the alterations in the basic material balance data caused by gravity segregation of reservoir fluids. A procedure is outlined for structurally weighting physical and chemical data for use in the material balance equation. It is demonstrated how actual pool performance data can be utilized to evaluate the future reservoir performance of a gravity drive pool. INTRODUCTION Conventional reservoir engineering procedure is inadequate for the analysis of an oil pool which has considerable structural relief, steep dips, and good permeability development. In pools of this type, gravity drainage has an important part in the movement of oil to the wells, and the effects of gravity on the overall pool performance should be included in any analysis of reservoir behavior. Many engineers have the opinion that the force of gravity in the movement of oil is not important until the later life of a pool.1 Probably the basis for this belief is that gravitational effects may not be readily discernible until a pool is nearing depletion. This would be especially true for pools not having a high degree of structural relief and permeability development. Actually the effects of gravitational forces are at a maximum when the pool pressure is high, for during this period the hydrostatic head of the oil column is at a maximum and the viscosity of the oil is at a minimum. Oil recoveries from pools having favorable gravity drive characteristics may equal or even exceed recoveries which might be expected from water displacement.
Field evidence indicates that in some reservoirs gravity drive has resulted in recoveries greater than those which could have been expected from gas expansion or water drive.2,3 Unfortunately, the possible effects of gravity drive on pool performance have been underestimated and other reasons have been sought to explain the high recoveries obtained. There are unquestionably many reservoirs to which the principles of gravity drainage can be effectively applied. It is the purpose of this paper to illustrate one method whereby gravity drive is included in the reservoir analysis of an oil pool. A hypothetical pool, typical of many California reservoirs, is used as an example. As used in this paper, "gravity drive" is defined as the overall effect of gravitational influences on the recovery of petroleum from the reservoir; "gravitational segregation" as the gravity separation of oil and gas within the reservoir; and "gravity drainage" as the downward movement of oil as caused by the force of gravity. SAND VOLUME DATA Fig. 1 presents a structural contour map of the pool under study. Maximum closure is 1950 feet, with dips on the south flank approaching 45°. The original gas-oil interface was set at -5200 feet. Average thickness of the producing sand was 200 feet. For use in subsequent calculations in this paper, the pool was subdivided into 100-foot vertical increments and the sand-volume content of each increment was obtained. If the gross sand thickness is small, under 100 feet, the sand-volume content can be obtained by superimposing an isopachous map upon a structural contour map and planimetering the average thickness of each 100-foot increment. For sand thicknesses over 100 feet, one approach would be to construct a sufficient number of cross-sections of the pool from which the weighted sand-volume of each 100-foot increment could be obtained. Variations in the sand body with depth, as determined by core data, can also be included in the above process.
Table I presents a summary of sand-volume calculations, core data, and the original distribution of reservoir hydrocarbons in the pool. Fig. 2 illustrates the structural distribution of the sand-volume content. A total of 171,398 acre-feet is contained within the productive limits of the pool. Assuming an average porosity of 25% and an interstitial water content of 20%, the original hydrocarbon content was computed to be 227,075,000 barrels. DEPTH-PRESSURE DATA The determination of the initial vertical pressure arrangement in the pool is necessary for PVT and material balance calculations. Whenever sufficient data are available, a plot of pressure versus subsea depth of measurement should be made. From this plot a representative fluid pressure gradient can be established. Lacking sufficient initial pressure data, an initial pressure gradient may be estimated or calculated from avail-
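The volumetric calculation behind figures of this kind is the standard acre-foot relation for hydrocarbon-filled pore space. The sketch below uses the conventional constant of 7,758 bbl per acre-foot and illustrative inputs for a single 100-foot increment, not the paper's totals, which also reflect the structural weighting and reservoir-condition data tabulated in Table I.

```python
def hydrocarbon_pore_volume_bbl(acre_feet, porosity, water_saturation):
    """Hydrocarbon-filled pore volume in reservoir barrels.

    7,758 bbl per acre-foot is the standard conversion constant.
    """
    return 7758.0 * acre_feet * porosity * (1.0 - water_saturation)

# Illustrative increment (not the paper's figures): 1,000 acre-ft of sand,
# 25 pct porosity, 20 pct interstitial water
hcpv = hydrocarbon_pore_volume_bbl(1000.0, 0.25, 0.20)
```

Summing such increments over the 100-foot structural slices gives the structurally weighted hydrocarbon distribution used in the material balance.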
Jan 1, 1949
-
Institute of Metals Division - Diffusion of Zinc and Copper in Alpha and Beta Brasses - By R. W. Balluffi, R. Resnick
NUMEROUS investigations of chemical diffusion in α brass have been made and the results are collected in several places.1-3 This work has been mainly concerned with the determination of the chemical diffusivity as a function of composition and temperature. In 1947 Smigelskas and Kirkendall4 showed that zinc and copper diffuse at different rates in face-centered-cubic brass, and since then a number of efforts have been made to determine the intrinsic diffusivities of zinc and copper in this alloy.1,5-9 Horne and Mehl8 in particular have recently determined the intrinsic diffusivities as functions of temperature and composition using sandwich-type couples and inert markers. Inman et al.9 also have determined the intrinsic diffusivities in homogeneous alloys using tracer techniques. When the present work was started, no information of this type was available. Consequently, measurements of the intrinsic diffusivities were made as a function of temperature at a constant composition of 28 atomic pct Zn with vapor-solid diffusion couples, where the zinc was diffused into the diffusion couple from the vapor phase. The application of these couples to the study of diffusion in α brass has been described previously.6,7 The temperature dependence of the intrinsic diffusivities was found to follow the relation D_i = A_i exp(−H_i/RT), and the values of H_Zn and H_Cu were found to be closely the same. It is emphasized, however, that the chemical diffusivity (D = N1D2 + N2D1) is a composite diffusivity and does not necessarily follow this exponential form. It is usually found to do so within experimental error for substitutional alloys because the heats of activation of the intrinsic diffusivities generally are not greatly different.10 Also, at the onset of this work, there was no information available concerning possible unequal diffusion rates of individual components and the existence of a Kirkendall effect in alloys with other than face-centered-cubic structures.
Since then, two reports indicating a Kirkendall effect in body-centered-cubic β brass have appeared. Landergren and Mehl11 have published a note describing Kirkendall diffusion experiments with sandwich-type couples. Inman et al.9 also find a Kirkendall effect in this alloy using the tracer technique. In the present work, several aspects of the Kirkendall effect in β brass were further investigated using vapor-solid couples. Two different couples were used, one in which the zinc was diffused into the specimen from the vapor phase and the other in which the zinc was diffused out of the specimen into the vapor phase. Briefly, the existence of a Kirkendall effect is confirmed and it is found that D_Zn/D_Cu = 3 at about the 46 atomic pct composition in this alloy at 600°, 700°, and 800°C. As a result of the unequal diffusion rates of zinc and copper, volume changes occur and subgrain formation is observed in the diffusion zone. In addition, significant porosity is produced by the precipitation of supersaturated vacancies. Diffusion in this alloy is therefore outwardly similar to diffusion in α brass, where these effects are also observed. α Brass Experimental Methods—The use of vapor-solid couples in studying diffusion in α brass has been described in previous articles.6,7 The method briefly consists of sealing a copper specimen, with Kirkendall markers initially placed on its surface, in an evacuated quartz capsule along with a large zinc source of fine α brass chips, and then diffusing the zinc into the specimen through the vapor phase. The zinc concentration at the specimen surface rises rapidly enough to a value near that of the α brass source so that the surface concentration may be regarded as constant during diffusion.
Under these boundary conditions, values of the chemical diffusivity may be obtained by applying the Boltzmann-Matano analysis to the concentration penetration curve, and the intrinsic diffusivities may be obtained from Darken's5 equations when the velocity of marker movement is known. The diffusion specimens were made from OFHC copper in the form of disks 3.2 cm in diam and 0.5 cm thick, with faces surface-ground parallel to within ±0.001 cm. Markers in the form of fine alumina particles <0.0002 cm in diam were placed on the specimen surface. These specimens were then sealed in quartz capsules along with enough α brass chips of 30.0 atomic pct Zn composition to keep the source concentration from decreasing by more than 0.3 atomic pct Zn as a result of the loss of zinc to the specimen during diffusion. The quartz capsules which were initially evacuated to a pressure of
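Darken's relations, cited above, connect the intrinsic diffusivities to the chemical diffusivity and the marker velocity. The sketch below evaluates both; the composition and the ratio D_Zn/D_Cu = 3 come from the text, but the absolute diffusivity and the concentration gradient are assumed values for illustration only.

```python
# Darken's relations for a binary Zn-Cu couple, as cited in the text:
#   chemical diffusivity  D = N_Zn*D_Cu + N_Cu*D_Zn
#   marker velocity       v = (D_Zn - D_Cu) * dN_Zn/dx

def chemical_diffusivity(N_zn, D_zn, D_cu):
    """Composite (chemical) diffusivity from the intrinsic diffusivities."""
    return N_zn * D_cu + (1.0 - N_zn) * D_zn

def marker_velocity(D_zn, D_cu, dNzn_dx):
    """Velocity of inert Kirkendall markers in the diffusion zone."""
    return (D_zn - D_cu) * dNzn_dx

N_zn = 0.46              # atomic fraction Zn (composition quoted in the text)
D_cu = 1.0e-9            # cm^2/s, assumed magnitude
D_zn = 3.0 * D_cu        # the ratio D_Zn/D_Cu = 3 reported for beta brass

D = chemical_diffusivity(N_zn, D_zn, D_cu)   # composite diffusivity
v = marker_velocity(D_zn, D_cu, 2.0e-2)      # assumed gradient, 1/cm
```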
Jan 1, 1956
-
Extractive Metallurgy Division - Developments in the Carbonate Processing of Uranium Ores - By F. A. Forward, J. Halpern
A new process for extracting uranium from ores with carbonate solutions is described. Leaching is carried out under oxygen pressure to ensure that all the uranium is converted to the soluble hexavalent state. By this method, alkaline leaching can be used successfully to treat a greater variety of ores, including pitchblende ores, than has been possible in the past. The advantages of carbonate leaching over conventional acid leaching processes are enhanced further by a new method which has been developed for recovering uranium from basic leach solutions. This is achieved by reducing the uranium to the tetravalent state with hydrogen in the presence of a suitable catalyst. A high grade uranium oxide product is precipitated directly from the leach solutions. Vanadium oxide also can be precipitated by this method. The chemistry of the leaching and precipitation reactions is discussed, and laboratory results are presented which illustrate the applicability of the process and describe the variables affecting leaching and precipitation rates, recoveries, and reagent consumption. THE extractive metallurgy of uranium is influenced by a number of special considerations which generally do not arise in connection with the treatment of the more common base metal ores. Perhaps foremost among these is the very low uranium content of most of the ores which are encountered today, usually only a few tenths of one percent. A further difficulty is presented by the fact that the uranium often occurs in such a form that it cannot be concentrated efficiently by gravity or flotation methods. In these and other important respects, there is evident some degree of parallelism between the extractive metallurgy of uranium and that of gold and, as in the latter case, it has generally been found that uranium ores can best be treated directly by selective leaching methods. It is readily evident that this parallel does not extend to the chemical properties of the two metals.
Unlike gold, which is easily reduced to metallic form, uranium is highly reactive. It tends to occur as oxides, silicates, or salts. Two ores are of predominant importance as commercial sources of this metal: pitchblende, which contains uranium as the oxide U3O8, and carnotite, in which the uranium is present as a complex salt with vanadium, K2O·2UO3·V2O5·3H2O. These ores may vary widely in respect to the nature of their gangue constituents. Some are largely siliceous in composition, while others consist mainly of calcite. Sometimes substantial amounts of pyrite or of organic materials are present and these may lead to specific problems in treating the ore. Further complications may be introduced by the presence of other metal values such as gold, copper, cobalt, or vanadium, whose recovery has to be considered along with that of the uranium, or whose separation from uranium presents particular difficulty. In general, there are two main processes for recovering uranium in common use today.1,2 One of these employs an acid solution such as dilute sulphuric acid to extract the uranium from the ore. A suitable oxidizing agent such as MnO2 or NaNO3 is sometimes added if the uranium in the ore is in a partially reduced state. The uranium dissolves as a uranyl sulphate salt and can be precipitated subsequently by neutralization or other suitable treatment of the solution. The second process employs an alkaline leaching solution, usually containing sodium carbonate. The uranium, which must be in the hexavalent state, is dissolved as a complex uranyl tricarbonate salt, and then is precipitated either by neutralizing the solution with acid or by adding an excess of sodium hydroxide. The latter method has the advantage of permitting the solutions to be recycled, since the carbonate is not destroyed. This is essential if the process is to be economical, particularly with low grade ores.
With each of these processes, there are associated a number of advantages and disadvantages and the choice between using acid or carbonate leaching is generally determined by the nature of the ore to be treated. In the past, more ores appear to have been amenable to acid leaching than to carbonate leaching and the former process correspondingly has found wider application. With most ores, acid leaching has been found to operate fairly efficiently and to yield high recoveries. One of the main disadvantages has been that large amounts of impurities, such as iron and aluminum, sometimes are taken into solution along with the uranium. This may give rise to a high reagent consumption and to difficulties in separating a pure uranium product. Excessive reagent consumption in the acid leach process also may result
Jan 1, 1955
-
Producing-Equipment, Methods and Materials - Emulsion Control Using Electrical Stability Potential - By J. U. Messenger
A technique is described whereby the resistance of an emulsion to breaking can be quantitatively determined. Produced oilfield emulsions are usually the water-in-oil type and, accordingly, do not conduct an electrical current. However, there is a threshold A-C voltage above which an emulsion will break and current will flow. The more stable an emulsion, the higher the required voltage. A Fann Emulsion Tester, modified so that low voltages (0 to 10 v) can be accurately measured, is suitable. This technique has application in evaluating the effect of a demulsifier on the stability of an emulsion. Emulsions can, in essence, be titrated with demulsifiers by adding a quantity of demulsifier, stirring, and measuring the voltage required to cause current to flow. Any synergistic effect of two or more materials added simultaneously can be followed accurately. A demulsifier that significantly lowers the threshold voltage (from 100 to 400 v down to 0 to 10 v for the emulsions in this study) is effective and can cause the emulsion to break. A demulsifier that will bring about this drop in the threshold voltage at low concentration is very desirable. The technique is also well adapted for rapidly screening demulsifiers. INTRODUCTION Stable emulsions in produced reservoir fluids resulting from certain well stimulation and completion procedures are common problems. The use of suitable demulsifiers can often mitigate these difficulties. At the present time, a rapid and efficient method for selecting satisfactory demulsifiers is not available. It is badly needed. Reliance is now placed primarily on trial-and-error procedures. A new test method has been developed which permits a more rapid and precise selection of demulsifiers. It involves measuring the electrical stability potential of an emulsion before and after a demulsifier has been added. This paper describes this method and shows where it should have application in field emulsion problems.
NATURE OF OILFIELD EMULSIONS Two immiscible components must be present for an emulsion to form; we are concerned here with crude oil and water. An emulsifier must be present for an emulsion to be stable.1 Emulsifiers can be substances which are soluble in oil and/or water and which lower interfacial tension. They can be colloidal solids such as bentonite, carbon, graphite, or asphalt which collect at the interface and are preferentially wet by one of these phases. Unrefined crude oils can contain both types of emulsifiers. A popular theory is that, of the two phases in an emulsion, the dispersed phase will be the one contributing most to the interfacial tension.2 Usually this phase contains the least amount of emulsifier. The stability of a water-in-oil emulsion is affected by the following: (1) viscosity; (2) particle or droplet size; (3) interfacial tension between the phases; (4) phase-volume ratios; and (5) the difference in density between the phases. A stable emulsion is usually characterized by high viscosity, small droplets, low interfacial tensions, small differences in density between its phases, and slow separation of the phases. It also has low conductivity (high electrical stability potential). Water-in-oil and oil-in-water emulsions are both common; however, oilfield emulsions are predominantly water-in-oil emulsions. The emulsions which commonly occur during completion and stimulation operations contain a combination of several of the following: acids, fracturing fluids (oil, water, acid), and formation water and oil. Produced emulsions usually contain formation water and oil. Emulsions form in oil wells because oil and water are mixed together at a high rate of shear in the presence of a naturally occurring or unavoidably produced emulsifier. During the completion and stimulation of productive zones, and while formation fluids are being produced, oil and water are very often commingled.
These mixtures are formed into emulsions by agitation which occurs when the fluids are pumped from the surface into the matrix of the formation or produced through the formation to the surface. Restrictions to flow (such as perforations, pumps, and chokes) increase the level of agitation; tight emulsions are more likely to form under these conditions. Often an emulsified droplet is an emulsion itself. Therefore, emulsion-breaking problems can be quite complex. The complexity can be even greater if a third phase (gas) is included. Demulsifiers operate by tending to reverse the form of the emulsion. During this process, droplets of water become bigger, viscosity is lowered, color becomes darker, separation of the phases becomes faster, and the electrical stability potential approaches zero. Any of these effects could be followed as a means of determining emulsion stability. However, electrical stability potential is the most reproducible and most easily measured parameter for following the stability of a water-in-oil emulsion.
Jan 1, 1966
-
Producing - Equipment, Methods and Materials - The Evaluation of Vertical-Lift Performance in Producing Wells - By R. V. McAfee
The fundamentals of vertical-lift performance are examined with the aid of computer-calculated flowing gradient charts. Flowing and gas-lift well performance characteristics are determined from available well test data. The effect of tubing size, gas-liquid ratio and wellhead pressure is discussed for both flowing and gas-lift wells. The effect of gas-injection pressure, formation gas, bottom-hole pressure and valve spacing is also discussed for gas-lift wells. From these studies conclusions may be reached for improving or prolonging natural flow, obtaining optimum lift efficiency when natural flow ceases and improving existing gas-lift systems. The techniques perfected satisfy the requirement that the time involved to conduct an evaluation be practical for operating personnel. INTRODUCTION Flowing pressure gradients furnish the key to successful evaluation of vertical-lift performance in producing wells. Command of multiphase flow gradients in some readily usable form is a necessity before operating personnel can competently include vertical-lift performance evaluation of both flowing and artificial-lift wells in their over-all consideration of production efficiency. The need for a readily usable form cannot be overemphasized, since most of the decisions which confront the production engineer with a problem well must be made quickly. In moving a barrel of oil from the reservoir to the stock tank, the major portion of energy generally is expended in the vertical-lift phase. This may or may not be of concern during the flowing life of a well, depending upon the production requirements. It becomes of some concern when the flow performance of the well becomes erratic, and a conscious effort must be made to maintain natural flow. It is at this time that the first steps may be taken to modify existing conditions to relieve unnecessary limitations to proper flow.
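Flowing gradient charts of the kind described are built by integrating a multiphase pressure gradient down the tubing. The toy sketch below is not the author's correlation: it assumes a no-slip gas-liquid mixture and isothermal ideal gas, and neglects friction and acceleration, so it only illustrates the mechanics of such a traverse. All parameter values are hypothetical.

```python
def flowing_bhp(p_wh_pa, depth_m, rho_liq=850.0, glr=50.0, t_k=350.0, steps=1000):
    """Crude flowing bottom-hole pressure: march the hydrostatic gradient of a
    no-slip gas-liquid mixture downward from the wellhead. Friction, slip and
    gas dissolution are neglected; gas is ideal with M = 0.020 kg/mol.
    glr = free gas volume (std m^3) per m^3 of liquid."""
    R, M = 8.314, 0.020
    p_std = 101325.0
    p = p_wh_pa
    dz = depth_m / steps
    for _ in range(steps):
        v_gas = glr * p_std / p            # local gas volume per m^3 liquid
        rho_gas = p * M / (R * t_k)        # ideal-gas density at local p
        rho_mix = (rho_liq + v_gas * rho_gas) / (1.0 + v_gas)
        p += rho_mix * 9.81 * dz           # hydrostatic step downward
    return p

bhp = flowing_bhp(p_wh_pa=1.0e6, depth_m=2000.0)
print(f"{bhp / 1e6:.2f} MPa")
```

Even this crude model reproduces the qualitative behavior the paper exploits: raising the gas-liquid ratio lightens the column and lowers the flowing bottom-hole pressure for a given wellhead pressure.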
When natural flow ceases and some form of artificial lift must be installed, the amount of energy expended in lifting liquids becomes quite obvious. It is at this time, if no other, that lifting efficiency becomes important, because the energy which must be supplied from an outside source now appears directly as a cost per barrel of oil produced. Ten years ago, the majority of gas-lift wells were produced with gas-well gas. Today, the majority are produced by closed rotative gas-lift systems. This permits a direct evaluation of well performance in terms of horsepower requirements and has resulted in mixed conclusions as to the success of gas lift based upon the relative efficiency of a particular system. An increased awareness of the need to resolve vertical-lift performance on a readily usable, scientific basis was inevitable. An indication of the need for better applied science in this field is the often-asked question of whether or not gas lift can efficiently deplete a given well or reservoir. This question cannot possibly be answered without first evaluating reservoir, surface and vertical-lift performance both as encountered today and as anticipated throughout the life of the well or wells. The technique presented in this paper was originally developed to upgrade gas-lift installation design from an applied art to an applied science. It has since been successfully used not only for this purpose, but also for the whole field of vertical-lift performance in its broadest sense. Lift efficiency should be considered important while the well is still flowing, as well as after natural flow ceases. Correct interpretation and proper modification of the vertical-lift performance of a producing well can provide dramatic improvement in production performance and/or efficiency. STATEMENT OF THEORY AND DEFINITIONS Fig. 1 illustrates the three divisions of production which will be used in this paper.
The terms are a modified version of those presented in the very fine paper by Gilbert. The fields of reservoir and surface performance have both been greatly improved over the years. A study of the available literature indicates that the field of vertical-lift performance has not progressed as well. There are two possible reasons for this lack of progress. 1. It has not been recognized as a scientific field in itself by the oil companies, as has reservoir engineering. 2. The equipment companies have confined their efforts to mechanical design research rather than the more basic study of vertical-lift performance of producing wells. Both organizations must have an economic stimulus for doing research in this field, and most of the results obtained in past work have been so erratic as to arouse little enthusiasm. The basic purpose of interpreting vertical-lift performance is to predict operating conditions below the surface of the ground from available data. The success of the interpretation depends upon the accuracy with
-
Producing - Equipment, Methods and Materials - Percentage Gain on Investment – An Investment Decision Yardstick - By M. Kaitz
A continuing discussion in both the petroleum engineering and economic literature is directed to the difficulties encountered in the use of discounted cash flow rate of return (DCF) as a measure of investment worth. Although useful in most instances, DCF has been criticized because it is time-consuming in its trial-and-error solution, theoretically invalid, not adaptable to cash flow streams yielding multiple return solutions, and not entirely reliable in selecting between mutually exclusive investments. The theoretical invalidity of DCF stems from the reinvestment assumption implicit in the calculation that earnings are reinvested at the DCF rate. The emerging consensus of the economic literature is that the net present value, or the present worth of the net cash flow stream (discounting at the average opportunity or cost of capital rate), is more correct and reliable. Other criteria proposed have been ratios of net present value divided by initial investment or by present value of all investments in a project. All of these criteria are simple to determine and explicitly assume a reinvestment rate for the income generated by a project. This paper develops and discusses a theoretically valid profitability criterion which is simple to compute and retains the appeal of a percent return on investment. It is called "percentage gain on investment" or PGI. It measures the gain an investment is expected to realize over like capital invested in the average opportunity and explicitly considers reinvestment potential. Why add another concept to the large array of investment criteria now available, any one of which, or perhaps a combination of several, appears to embrace the firm's (or individual's) objectives? The answer is that not one of the existing criteria provides both a readily comprehensible and theoretically valid measure of risk coverage that has general application. The proposed PGI does fulfill these requirements.
INTRODUCTION An ancient expression warns that "one must yield to the times" — there are better ways of doing things. A review of the petroleum engineering and economic literature on one topic alone, measurement of investment worth, certainly is witness to this truth. In use for a number of years, DCF has recently received attention directed mainly to its theoretical invalidity. Several alternatives to DCF have been proposed to provide a valid, simply determined criterion to describe investment worth and to overcome the criticisms previously mentioned. This paper introduces another method, called percentage gain on investment (PGI), proposed as but one of several yardsticks that should be used in making investment decisions. MEASURES OF INVESTMENT WORTH This paper will consider only those criteria which give weight to the time pattern of future earnings. These criteria are usually compared with an average opportunity rate or cost of capital of the firm to judge the relative worth of the investment. For purposes of demonstration, a 9 percent average opportunity rate will be used throughout this paper. Implicit in the DCF calculation is the assumption that earnings are reinvested at the DCF rate. Some argue, though, that there is no reinvestment assumption, that the DCF rate is simply the maximum rate of interest one could pay on the investment over the life of the project and break even. The determination of DCF is accomplished by discounting the net cash flow stream at that rate (the DCF rate) which yields a net present value of zero. The question is: why should reinvestment potential be explicitly considered in calculating return on investment or other criteria measuring economic worth? Perhaps the answer lies in consistent or equal treatment of future cash flow. It appears entirely illogical to give different present worth values to $1 received, say, 10 years from now, yet this is the circumstance that results when comparing projects with different DCF returns.
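The quantities discussed above are easy to compute: NPV discounts the net cash flow stream at the opportunity rate, and the DCF rate is found by trial and error (here, bisection) as the rate that makes the NPV zero. A small sketch with illustrative cash flows, not figures from the paper.

```python
def npv(rate, cashflows):
    """Net present value of year-end cash flows; cashflows[0] is at time 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def dcf_rate(cashflows, lo=0.0, hi=10.0):
    """DCF (internal) rate of return by bisection: the discount rate giving
    NPV = 0. Assumes a single sign change in the cash flow stream, which is
    exactly the case where DCF is well defined."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Invest 1,000 now, receive 300/yr for 5 years (illustrative figures).
flows = [-1000.0] + [300.0] * 5
print(round(npv(0.09, flows), 2))  # NPV at the 9 percent opportunity rate
print(round(dcf_rate(flows), 4))   # the DCF rate of return
```

The bisection loop is the "time-consuming trial-and-error solution" the abstract criticizes, done by machine; the NPV call is a single pass, which is part of why the literature prefers it.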
In fact, $1 received in 10 years has the same value regardless of which project generated the income. DCF return thus favors investment projects which are expected to provide early income as compared to those providing long-term income. Not surprisingly, the controversy on the reinvestment assumption is an old one. Hoskold discussed this same problem in 1877. He considered the future income from mineral properties as an annuity, or a series of fixed future payments. (As will be demonstrated, his equation can be modified for variable income.) Prior to Hoskold, the value, or what one could pay for the mine, was determined with the use of standard, single-interest tables. Here is what Hoskold said with regard to these tables: "This table, and others of its kind, to be found in most works on annuities, is constructed correctly according to
-
Natural Gas Technology - Evaluating a Slightly Permeable Caprock in Aquifer Gas Storage: I. Caprock of Infinite Thickness - By P. A. Witherspoon, S. P. Neuman
Evaluating the permeability of a caprock overlying a potential gas storage reservoir is a very critical problem. Pumping water from the reservoir can be used as an evaluation tool in analyzing this problem. Fluid level changes that occur in the aquifer as well as in the caprock can be measured with appropriately placed wells. If the leakage of water from the caprock into the aquifer is considerable, the effects will be apparent in the aquifer. If the leakage is slight, however, it will not be possible to detect it with certainty from observations in the aquifer alone; fluid level measurements in the caprock must be relied upon, and improved methods of analyzing such effects have been developed which are based on a theoretical analysis of fluid flow through a caprock of infinite thickness. An example applying these methods to field data is discussed. INTRODUCTION One of the most critical problems in evaluating an aquifer gas storage project is determining the tightness of the caprock overlying the formation to be used as the storage reservoir. A formation that has previously held oil or gas obviously has a suitable caprock, but an aquifer that contains only water gives no such assurance. A number of aquifer projects in the United States have been troubled by gas leaking out of the intended storage zone, and the ensuing difficulties have led to the development of new evaluation methods. One of these new methods is pump testing, wherein water is removed from the aquifer at some controlled rate prior to injection of gas. This fluid withdrawal causes a pressure drop to move out through the aquifer for considerable distances in a matter of days or weeks. Depending on the properties of the caprock, a pressure transient can also pass upward (as well as downward) through the caprock layers adjacent to the aquifer.
Thus, if the operator has placed observation wells at appropriate distances from the pumping well, the rapidity with which the pressure transients reach different points in the system can be used to investigate the fluid transport properties of both the aquifer and its caprock. The usefulness of pump testing has been recognized by groundwater hydrologists for many years as a means of determining the potential yield and properties of aquifers used in water supply. They have introduced the term "leaky aquifer" for a system in which an aquifer is overlain (or underlain) by semipermeable caprock layers. The ease with which water leaks into the aquifer during pumping can, of course, be very beneficial in bringing additional water to the pumped well. Hydrologists have therefore devoted considerable attention to this problem. From the gas storage standpoint, however, the tighter the caprock layers that overlie the intended storage reservoir, the better are the conditions for minimizing or eliminating any vertical migration of gas. Thus, after a suitable geologic structure has been found, the emphasis in aquifer storage projects is on determining that the caprock is tight. Attention has recently been focused on the use of pump testing as one approach to solving this problem.23,24 This paper presents a further development on evaluating the permeability of a slightly leaky caprock when the caprock is of infinite thickness. From the practical standpoint, this means that the caprock layers are thick enough that pressure transients do not reach the outer boundaries of the system during the pumping test. In a subsequent paper, an analysis of the case where the caprock is of finite thickness will be presented. PREVIOUS WORK ON LEAKY AQUIFERS Jacob developed a partial differential equation describing the flow of water in an aquifer of permeability k that is overlain by a leaky caprock of permeability k'. Fig. 1 shows a schematic cross-section of the system under consideration.
One of his principal assumptions was that if k >> k', the direction of flow is essentially vertical in the caprock and horizontal in the aquifer. Neuman19 confirmed the validity of Jacob's assumption using a mathematical model. Another assumption was that a permeable source layer overlies the caprock (Fig. 1) and is able to maintain a constant hydraulic head at the upper boundary of the caprock. By neglecting the effects of compressibility within the caprock, Jacob developed a solution for a bounded circular aquifer. Later, Hantush and Jacob used the same assumptions to solve the case of an infinite radial aquifer that is pumped at a constant rate. Their solution may be expressed in dimensionless parameters by
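The Hantush-Jacob solution referred to above is conventionally written as s = (Q / 4πT) W(u, r/B), where W is the leaky-aquifer well function. Below is a numerical sketch of that function using its standard integral definition, not a reproduction of this paper's derivation; with r/B = 0 it reduces to the Theis well function (the exponential integral E1).

```python
import math

def leaky_well_function(u, r_over_b, n=20000, t_max=40.0):
    """Hantush-Jacob well function
        W(u, r/B) = integral from u to infinity of exp(-y - (r/B)^2/(4y)) / y dy,
    evaluated by the trapezoid rule after the substitution y = u*exp(t),
    which absorbs the 1/y factor into dt and tames the infinite limit."""
    beta2 = (r_over_b ** 2) / 4.0

    def f(t):
        y = u * math.exp(t)
        return math.exp(-y - beta2 / y)

    h = t_max / n
    total = 0.5 * (f(0.0) + f(t_max))
    for i in range(1, n):
        total += f(i * h)
    return total * h

# With r/B = 0 this is the Theis case: W(1, 0) is about 0.2194 (= E1(1)).
# Leakage (r/B > 0) always reduces the drawdown.
print(round(leaky_well_function(1.0, 0.0), 4))
print(round(leaky_well_function(1.0, 1.0), 4))
```

Fitting observed drawdowns to this family of curves for various r/B is how a pump test resolves the caprock leakage the paper is concerned with.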
-
Reservoir Performance - Field Studies - Reservoir Performance of a High Relief Pool - By E. P. Burtchaell
A method is presented for evaluating the effect of gravity drive upon the reservoir performance of a high relief pool. Conventional forms of reservoir analysis do not consider the alterations in the basic material balance data caused by gravity segregation of reservoir fluids. A procedure is outlined for structurally weighting physical and chemical data for use in the material balance equation. It is demonstrated how actual pool performance data can be utilized to evaluate the future reservoir performance of a gravity drive pool. INTRODUCTION Conventional reservoir engineering procedure is inadequate for the analysis of an oil pool which has considerable structural relief, steep dips, and good permeability development. In pools of this type, gravity drainage plays an important part in the movement of oil to the wells, and the effects of gravity on the overall pool performance should be included in any analysis of reservoir behavior. Many engineers have the opinion that the force of gravity in the movement of oil is not important until the later life of a pool. Probably the basis for this belief is that gravitational effects may not be readily discernible until a pool is nearing depletion. This would be especially true for pools not having a high degree of structural relief and permeability development. Actually, the effects of gravitational forces are at a maximum when the pool pressure is high, for during this period the hydrostatic head of the oil column is at a maximum and the viscosity of the oil is at a minimum. Oil recoveries from pools having favorable gravity drive characteristics may equal or even exceed recoveries which might be expected from water displacement.
Field evidence indicates that in some reservoirs gravity drive has resulted in recoveries greater than those which could have been expected from gas expansion or water drive. Unfortunately, the possible effects of gravity drive on pool performance have been underestimated and other reasons have been sought to explain the high recoveries obtained. There are unquestionably many reservoirs to which the principles of gravity drainage can be effectively applied. It is the purpose of this paper to illustrate one method whereby gravity drive is included in the reservoir analysis of an oil pool. A hypothetical pool, typical of many California reservoirs, is used as an example. As used in this paper, "gravity drive" is defined as the overall effect of gravitational influences on the recovery of petroleum from the reservoir; "gravitational segregation" as the gravity separation of oil and gas within the reservoir; and "gravity drainage" as the downward movement of oil as caused by the force of gravity. SAND VOLUME DATA Fig. 1 presents a structural contour map of the pool under study. Maximum closure is 1950 feet, with dips on the south flank approaching 45°. The original gas-oil interface was set at -5200 feet. Average thickness of the producing sand was 200 feet. For use in subsequent calculations in this paper, the pool was subdivided into 100-foot vertical increments and the sand-volume content of each increment was obtained. If the gross sand thickness is small, under 100 feet, the sand-volume content can be obtained by superimposing an isopachous map upon a structural contour map and planimetering the average thickness of each 100-foot increment. For sand thicknesses over 100 feet, one approach would be to construct a sufficient number of cross-sections of the pool from which the weighted sand-volume of each 100-foot increment could be obtained. Variations in the sand body with depth, as determined by core data, can also be included in the above process.
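The sand-volume increments described above feed a standard volumetric estimate of hydrocarbons in place. The sketch below uses the usual 7,758 bbl/acre-ft conversion; this is my choice of formula for illustration, not necessarily the authors' exact procedure, and the reconciliation remark in the comment is an assumption.

```python
def hydrocarbon_pore_volume(acre_ft, porosity, sw):
    """Reservoir-barrel hydrocarbon pore volume from net sand volume:
    HCPV = 7758 * V * phi * (1 - Sw), with 7758 bbl per acre-ft."""
    return 7758.0 * acre_ft * porosity * (1.0 - sw)

# Plugging in the pool figures quoted in the text (171,398 acre-ft, 25%
# porosity, 20% interstitial water) gives ~266 million reservoir barrels.
# The paper's 227,075,000-bbl figure presumably reflects further corrections
# (e.g., a formation volume factor) that are not reproduced in this sketch.
hcpv = hydrocarbon_pore_volume(171_398, 0.25, 0.20)
print(f"{hcpv:,.0f} bbl")  # -> 265,941,137 bbl
```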
Table I presents a summary of sand-volume calculations, core data, and the original distribution of reservoir hydrocarbons in the pool. Fig. 2 illustrates the structural distribution of the sand-volume content. A total of 171,398 acre-feet is contained within the productive limits of the pool. Assuming an average porosity of 25% and an interstitial water content of 20%, the original hydrocarbon content was computed to be 227,075,000 barrels. DEPTH-PRESSURE DATA The determination of the initial vertical pressure arrangement in the pool is necessary for PVT and material balance calculations. Whenever sufficient data are available, a plot of pressure versus subsea depth of measurement should be made. From this plot a representative fluid pressure gradient can be established. Lacking sufficient initial pressure data, an initial pressure gradient may be estimated or calculated from avail-
Jan 1, 1949
-
Technical Papers and Notes - Institute of Metals Division - Hydrogen Embrittlement of Vanadium by Catalytic Decomposition of Water with Manganese - By P. D. Zemany, G. W. Sear, B. W. Roberts
Vanadium metal is embrittled by hydrogen at a temperature as low as 250°C when held in the presence of manganese metal and water vapor in a rough vacuum. It is established that the property changes are caused by the catalytic decomposition of water vapor at the vanadium surface and the diffusion into and solution in the vanadium of the resultant hydrogen. It is found that manganese is a necessary component of the catalyst. The manganese is transported in the vapor phase by an unknown molecule. A deuterium tracer experiment demonstrates the role of water vapor in the embrittlement process. VANADIUM metal foils were observed to become embrittled at a temperature of about 300°C when held in the presence of manganese metal and a small amount of moist air. This paper describes the investigation undertaken to identify the embrittling agent and to understand the relatively low temperature reactions that are involved. Experimental The vanadium metal foil used was prepared by cold-rolling and pack-rolling 32 mil sheet in a series of steps down to 1 mil foil. The original observation was confirmed by sealing vanadium foils of 3 x 10 sq cm into individual Pyrex tubes with manganese powder, together with a control tube containing only the vanadium foil. These tubes were evacuated to 10⁻⁵ mm Hg without baking and sealed. After heat treatment for 200 hr at 300°C, the control foil showed no change in ductility, whereas the foil contained in the manganese-containing tube was embrittled. The visual appearance of each was unchanged. A series of Pyrex sample tubes, about 2.5 cm diam and 25 cm long, were prepared, each containing a 3 x 10 sq cm piece of foil and 5 g manganese powder at the lower end of the tube. By reducing the time of anneal and the temperature of these samples, it was found that embrittlement could be created at 250°C in a time as short as 1 hr.
Since the vanadium metal used here had been drastically cold-worked by rolling, it is assumed that it contains a maximum number of dislocations. To check the possible necessity of dislocations in this low temperature reaction, a vanadium foil sample was annealed in Vycor for 2 hr at 800°C to recrystallize it and reduce the dislocation concentration. Metallographic examination showed grains which were not visible before annealing. The embrittlement procedure was carried out at 300°C for 3 hr. Upon checking, the foil showed no embrittlement. Further experiments demonstrated that about 6 hr at 300°C are required to create embrittlement in this foil. This delay in the onset of embrittlement in the vanadium foil suggests, but does not prove, that dislocation channels play a role in the embrittlement phenomenon. If manganese metal is necessary for this low temperature embrittlement, do other elements in the transition metals group yield the same result? To check this qualitatively, a group of elements of similar atomic radii were obtained and sealed as before into Pyrex tubes with a sheet of vanadium foil. These tubes were annealed at 250°C for 6 hr and included (with radii): Al (1.4A), As (1.25A), Be (1.2A), Co (1.25A), Cr (1.45A), Cu (1.25A), Fe (1.25A), Ga (1.2A), Ge (1.25A), Mn (1.3A), Ni (1.25A), Si (1.2A), Ti (1.45A), Zn (1.3A), air, H2O, 10 cm Hg of dry hydrogen, and MnO2 powder. Upon testing the above sample foils for brittleness, only the manganese-containing tube yielded a brittle foil. Manganese Transport—To eliminate contact of manganese metal powder and vanadium foil, sample tubes were prepared with fritted glass barriers. The embrittlement reaction was still found to occur. Thus, the mode of transfer of manganese is certainly vapor transport. A vanadium foil was embrittled by this mechanism in an evacuated Pyrex tube for 8 hr at 300°C.
By means of X-ray fluorescence analysis, the amount of manganese added to the surface was established at 5 ± 2 x 10⁻⁶ g per sq cm. Since the average rate of manganese deposition is known, an effective average pressure of an assumed carrier compound can be computed from the kinetic-theory effusion relation

P = (M/t) √(2πmkT)

where M/t is the rate of arrival of carrier molecules per unit area per unit time, m is the mass of the carrier molecule, k is Boltzmann's constant, and T is the absolute temperature.
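The relation above can be evaluated directly from the measured deposit. The sketch below uses the equivalent Hertz-Knudsen mass-flux form and assumes, for lack of better information, that the carrier is a bare Mn atom (55 g/mol); since the actual transporting molecule is unknown, the result is only an order-of-magnitude estimate.

```python
import math

def effective_pressure(mass_flux, molar_mass_kg, t_kelvin):
    """Effective vapor pressure (Pa) needed to sustain a measured deposition
    mass flux (kg per m^2 per s), from the Hertz-Knudsen relation:
        flux = P * sqrt(m / (2*pi*k*T)),  so  P = flux * sqrt(2*pi*k*T / m),
    with m the mass of one carrier molecule."""
    k = 1.380649e-23            # Boltzmann constant, J/K
    n_a = 6.02214076e23         # Avogadro number, 1/mol
    m = molar_mass_kg / n_a     # mass of one carrier molecule, kg
    return mass_flux * math.sqrt(2.0 * math.pi * k * t_kelvin / m)

# 5e-6 g/cm^2 of Mn deposited over 8 hr at roughly 300 C (573 K);
# carrier assumed to be a bare Mn atom, 0.055 kg/mol.
flux = (5e-6 * 1e-3) / (1e-4 * 8 * 3600.0)   # kg per m^2 per s
p = effective_pressure(flux, 0.055, 573.0)
print(f"{p:.2e} Pa")  # roughly 1e-6 Pa for these assumed inputs
```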
Jan 1, 1959
-
Logging and Log Interpretation - Prediction of the Efficiency of a Perforator Down-Hole Based on Acoustic Logging Information - By A. A. Venghiattis
A rational approach to the selection of the appropriate perforator to use in each specific zone of an oil well is presented. The criteria presently in use for this choice bear little resemblance to actual down-hole conditions. These environmental conditions affect the elastic properties of rocks. One of these elastic properties, acoustic velocity, is suggested as the leading parameter to adopt for the choice of a perforator because, being measured directly in the formation's natural location, it takes into account all of the effects of compaction, saturation, temperature, etc., which are overlooked in the laboratory. Equations and curves supporting this suggestion are given to allow the prediction of the depth of perforation of bullets and shaped charges when an acoustic log has been run in the zone to be perforated. INTRODUCTION When an oil company has to decide on the perforator to choose for a completion job, I wonder if it is really understood that, to date, there is no rational way of selecting the right perforator on the basis of what it will do down-hole. This situation stems from the fact that the many varieties of existing perforators, bullets or shaped charges, are promoted on the basis of their performance in the laboratory, but very little is said on how this performance will be affected by subsurface conditions such as the combination of high overburden pressure and high temperature, for example. The purpose of this paper is to show the limitations of the existing ways of evaluating the performance of perforators, to show that performances obtained in laboratories cannot be extended to down-hole conditions because the elastic properties of rocks are affected by these conditions and, finally, to suggest and justify the use of the acoustic velocity of rocks as the parameter to utilize for the anticipation of the performance of a perforator in the true down-hole environment.
EVALUATING THE PERFORMANCE OF A PERFORATOR It is natural, of course, to judge the performance of a perforator from the size of the hole it makes in a predetermined target. Considering that the ultimate target for an oilwell perforator is the oil-bearing formation, preceded in most cases by a layer of cement and by the wall of a steel casing, the difficulties begin with the choice of an adequate experimental target material. For obvious reasons of convenience, the first choice that came to the mind of perforator designers was mild steel. This is a reasonable choice for the comparison of two perforators in first approximation. Mild steel is commercially available in a rather consistent state and quality, and is comparatively inexpensive. The trouble with mild steel is that it represents a very compressed yardstick; minute variations in depth of penetration or hole diameter and shape may be significant though difficult to measure. Since the penetration of projectiles in steel is a function of the Brinell hardness of the steel (Gabeaud, O'Neill, Grunwood, Poboril, et al.), it is often difficult to decide whether to attribute a small difference in penetration to a variation in the target hardness or to an actual variation in the efficiency of the projectile. Another target material which has been widely used for testing the efficiency of bullets or shaped charges in an effort to represent a formation—a mineral target as opposed to an all-steel target—is cement cast in steel containers. This type of target, although offering a larger scale for measuring penetrations, proved so unreliable because of its poor repeatability that it had to be abandoned by most designers.
The drawbacks of these target materials, and particularly their complete lack of similarity with an oil-bearing formation, became so evident that a more realistic target arrangement was sought, until a tacit agreement was reached between customers and designers of oilwell perforators on a testing target of the type shown in Fig. 1. This became almost a necessity about seven years ago because of the introduction of a new parameter in the evaluation of the efficiency of a perforator, the well flow index (WFI). The WFI is the ratio (under predetermined and constant ambient conditions of pressure and temperature) of the permeability to a certain grade of kerosene of the target core (usually Berea sandstone) after perforation to its permeability before perforation. The value of this index for the present state of the perforation technique varies from 0 to 2.5, the good perforators presently available rating somewhere around 2.0 and the poor ones around 0.8. There is no doubt that, to date, the WFI type of test is by far the most significant one for comparing perforators. It is obvious that a demonstration of a perforator
-
Reservoir Engineering – Laboratory Research - Determination of Wettability by Dye Adsorption - By O. C. Holbrook, George G. Bernard
A new theoretical treatment has been obtained for the behavior of pattern waterflood injection wells when closed in. Two cases are treated: Case 1, where oil and water are assumed to have the same properties, and Case 2, where they are different. In applying the method, one plots log (p - p_s) vs closed-in time, where p is the well-bore pressure at any time and p_s is the static pressure. The value of p_s is determined by trial and error as that value which makes the plot linear at large time. A value for the permeability-thickness product can be determined from the intercept of this linear part, and a value of the skin factor from the injection pressure at the time of closing in. Application of the method to data from water floods at three fields seems to give reasonable results. For the case of unit mobility ratio, it is proved that this new method should give the same value for the permeability-thickness product as the conventional pressure build-up method. In addition, the new method gives correct values for static pressure, whereas the conventional method does not, often indicating negative static pressures. The new method may be used in cases where the surface pressure persists after closing in, as well as in cases where it does not. INTRODUCTION It is of considerable interest and importance to be able to determine the characteristics of the reservoir in an area surrounding a water injection well. Thus, if we can determine early in the life of an injection well that there is a considerable "skin effect", remedial measures can be started before a full-scale pattern flood begins. Similarly, if it can be shown that a gradual buildup of skin effect is occurring with time, measures to free the water of plugging material can be taken. Determination of static pressure in the water-injection well may show that the water is entering a thief zone and not the desired reservoir.
Finally, determination of the permeability of the sand around the injection well will allow estimation of the future relation between injection pressure and rate. It should be possible to determine average reservoir permeability, skin effect and static pressure from pressure fall-off data. However, at the time we began work on this subject, it was thought that no adequate theory on which to base such determinations was available. According to the conventional method, which considers the reservoir to be filled with one fluid of small compressibility (see Van Everdingen, Joers, and Nowak), shut-in pressure is plotted vs log [(t + Δt)/Δt], where t is injection time and Δt is closed-in time. The physical significance of injection time may well be questioned in this case, since in a reservoir completely filled with a single fluid (as required by this theory) and with input and output rates equal, the pressure behavior after an initial transient is independent of t. Attempts in our Tulsa area to use this theory led to negative values of static pressure in most cases. Because of these limitations of the method discussed above, it was decided to attempt to develop a new theory of pressure behavior in water injection wells, one which would apply when there is a gas saturation, as is so often the case in water floods. In the following treatment, the assumptions and basic equations are given first, then the method of application of the equations. A complete example is given to clarify details of application. All difficult mathematics has been placed in the appendices so that the reader can follow the text without difficulty. However, if he wishes only to apply the results without knowing the basis for them, he can learn how to do this from reading only the sections entitled "Plotting of Experimental Results" and "Example."
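The trial-and-error determination of static pressure described above (choose p_s so that the plot of log(p - p_s) vs closed-in time becomes linear at large time) can be automated by scoring candidate values on the linearity of the resulting plot. A sketch on synthetic fall-off data; the exponential decline model and all numbers are invented for illustration.

```python
import math

def best_static_pressure(times, pressures, candidates):
    """Trial-and-error estimate of static pressure p_s: pick the candidate
    that makes log(p - p_s) vs closed-in time most nearly linear, scored by
    the magnitude of the correlation coefficient."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / math.sqrt(sxx * syy)

    best, best_r = None, -1.0
    for ps in candidates:
        if any(p <= ps for p in pressures):
            continue                      # log undefined; candidate too high
        ys = [math.log(p - ps) for p in pressures]
        r = abs(corr(times, ys))
        if r > best_r:
            best, best_r = ps, r
    return best

# Synthetic fall-off with true p_s = 1500 psi: p(t) = 1500 + 800*exp(-0.05 t)
ts = [float(t) for t in range(1, 49)]     # hours after closing in
ps_data = [1500.0 + 800.0 * math.exp(-0.05 * t) for t in ts]
est = best_static_pressure(ts, ps_data, candidates=range(1400, 1501, 10))
print(est)  # -> 1500
```

Once p_s is fixed, the slope and intercept of the late-time straight line carry the reservoir information, in the manner the abstract describes for the permeability-thickness product.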
ASSUMPTIONS AND BASIC EQUATIONS

Statement of Problem

It will be assumed that a horizontal layer of constant thickness contains in its pore system a mixture of oil, gas and water. While water is being injected into this pore system through a well at constant rate, an oil bank is built up, gas being expelled from the space taken by the oil as shown in Fig. 1. The saturations within each
-
Producing - Equipment, Methods and Materials - Evaluation of a Stabilizer Charged Gas Lift Valve for Multiple-Phase Flow Using Graphical Techniques, by H. W. Winkler; discussion by V. L. Forsyth
The development of a new gas lift valve has removed many of the obstacles limiting over-all gas lift efficiency. The valve is pressure charged in place in a well, and operating pressure can be changed without pulling. Temperature and gas gradient compensation have been eliminated. Intermitting and constant flow installations, using conventional pressure-charged valves, are compared with designs incorporating the new automatic stabilizer controlled valve. Specific well performance data are presented. The means of obtaining and controlling multiple-point injection of gas are explained and contrasted with single-point injection. The effect on small diameter strings and multiple string installations is discussed. The effect of flowing temperature gradient on a design using conventional pressure-charged valves, the limitations imposed by such temperature, and the obvious benefits which result from the use of automatic stabilizer controlled valves are shown. Reduction of gas requirements is stated mathematically and demonstrated with specific examples which are identified as to oil company and well. Increased fluid recovery resulting from greater drawdown is illustrated. The economic advantages are mentioned, with emphasis on reduction of capital expenditures as well as reduction in operating expenses.

INTRODUCTION

Much of the engineering and research work performed with gas lift has been based upon the supposition that under producing conditions gas would be injected at a single point in a well. Many tests were performed in determining the optimum size of the opening through which gas would be passed and the exact placement of the valve equipment. The objectives in a gas lift installation are to unload the well and to efficiently produce the well after it is unloaded. Until recently, the function of upper valves was simply to unload the well. An operating valve located at a proper place in a well served as a single injection point for lifting liquid to the surface.
The basic problem in using such equipment is to arrive at operating depth with sufficient operating pressure to efficiently lift the liquid. Few gas lift installations are designed with all required well data available. However, regardless of the quality and availability of data, and regardless of the accuracy of the design, there is the practical problem of preparing and placing gas lift equipment in the well in exact accordance with the plans and intentions of the engineer. In recognition of these and many other problems, attention was directed toward a means of removing the need for temperature and gas gradient compensation, a way to change the gas lift installation to meet the changing requirements of the well itself, and a manner of compensating for changes in oil-water content, fluid volumes, PI, and bottom-hole pressure resulting from flood or repressuring activity. A device which could actually set and change the operating pressure of gas lift valves in the well would meet most of these needs. There now exists a new gas lift valve which fulfills the original objectives and in so doing actually accomplishes much more. With this device, called an Automatic Stabilizer Controlled valve, there is an exact matching of valve operating pressure and well conditions. Gas is injected at more than one point in the well, thereby making controlled multiple-point injection of gas a reality. Installation calculations are materially simplified, and the need for basic temperature and gas gradient data no longer exists.

Section I—The Automatic Stabilizer Controlled Valve

The operation of the ASC valve is more easily understood by first studying a conventional pressure-charged valve and then inserting the stabilizer element to observe the changes which result. Fig. 1 is a schematic of a conventional precharged pressure-operated valve. The pressure in the dome, p_d, acts downward against the area of the bellows, A_b, to hold the valve closed.
The resulting force B_f, plus the spring effect of the bellows, S_f, represents the total force acting to close the valve. Correspondingly, the tubing pressure p_tbg works upward against the port area A_tbg, and the casing pressure p_c works upward against a portion of the area of the bellows
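The force balance outlined above fixes the casing pressure at which a conventional pressure-charged valve opens. If the bellows spring effect S_f is neglected, balancing the dome force against the tubing and casing contributions gives the standard result p_vo = (p_d − R·p_tbg)/(1 − R), where R is the ratio of port area to bellows area. The sketch below is a generic illustration of that textbook balance with hypothetical numbers; it is not the ASC valve design described in the paper.

```python
def valve_opening_pressure(p_dome, p_tbg, area_ratio):
    """Casing pressure needed to open a pressure-charged gas lift valve.

    Balance at opening: p_c * (A_b - A_p) + p_tbg * A_p = p_dome * A_b,
    with R = A_p / A_b (bellows spring effect S_f neglected).
    """
    r = area_ratio
    return (p_dome - r * p_tbg) / (1.0 - r)

# Hypothetical valve: 800-psi dome charge, 500-psi tubing pressure, R = 0.1
p_vo = valve_opening_pressure(800.0, 500.0, 0.1)
print(round(p_vo, 1))  # 833.3 psi: casing pressure must exceed the dome charge
```

Note that because the tubing pressure is below the dome charge here, the required casing pressure comes out above the dome pressure; this is the temperature- and gradient-sensitive behavior the stabilizer element is said to eliminate.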
Jan 1, 1965
-
Reservoir Engineering-General - Two-Phase Flow in a Two-Dimensional System: Effects of Rate, Viscosity and Density on Fluid Displacement in Porous Media, by R. G. Hawthorne
This report is concerned with fluid displacement in porous media in those cases where viscous and gravitational forces control the displacement. Such a system would usually be found in a sand body of large physical dimensions, such as an oil reservoir, although it is possible to create such a system in the laboratory. It is shown that the position of the fluid interface can be predicted by numerical calculations using a basic idea presented by Dietz. Fluid flow is considered in a vertical plane in a homogeneous, porous medium of sufficient thickness that the capillary transition zone is small in comparison with the total reservoir. A theory developed by Dietz1 is used to make numerical calculations of the position of the fluid interface. The results for several conditions are compared with scaled model experiments. The results show that, for gas drive in a reservoir of steep dip, a relatively low flow rate can displace large volumes of oil before gas breakthrough. On the other hand, water injection at favorable mobility ratio and low dip may show best performance at high rates. Water tends to underride the oil and, given sufficient time, will break through without much oil displacement. For certain conditions, which include relatively low flow rate, the interface is a straight line and its behavior is simple to calculate. At higher flow rates, the interface is unstable, and a numerical solution was programmed for an automatic computer. In general, good agreement is shown between the fluid model and the computed results so long as gravitational forces have control. For a water drive at very unfavorable mobility ratio, many small water fingers appear. These viscous fingers are not controlled by the relatively small gravitational forces. When viscous fingering becomes the controlling factor, the mathematical model is oversimplified, and results do not check the fluid flow model.
INTRODUCTION

Present methods of reservoir analysis depend upon certain simplifying assumptions to obtain mathematical descriptions of practical use. Material-balance methods (Muskat2 or Tarner2) assume uniform fluid saturations in the entire reservoir, or in a few subdivisions of the reservoir. An unsteady-state flow calculation by West, et al. considered pressure and saturation changes in flow to a well during solution gas drive, and neglected gravity effects. Results showed only a 4 per cent difference for ultimate oil recovery by the Muskat method, even though the case chosen for study was one in which unsteady-state effects should be high. The Buckley-Leverett5 method commonly assumes a one-dimensional flow system. It is applicable at high flow rates, where viscous forces predominate over gravity forces. Simultaneous, parallel flow of the two fluids is assumed, and the concept of a fluid interface is not introduced. Permeabilities to each fluid for a given saturation must be known. The method is not applicable for a two-dimensional system where cross flow becomes possible. Less well known is the displacement equation derived by Dietz. This method is designed for two-dimensional flow systems and assumes a definable fluid interface within the porous medium. Dietz showed that, for a range of low flow rates, the interface would be stable, straight, and at an angle of inclination which could be simply calculated. At a certain critical flow rate, the calculated interface tilt would equal the formation dip. For higher flow rates, a finger of displacing fluid would invade the displaced fluid. Dietz indicated that his method applied only to macroscopic reservoir behavior, while the Buckley-Leverett method applied to the small transition zone at the fluid interface. The examples worked out in this report are based on the fluid-displacement theory of Dietz. It is shown that the Dietz theory may be used to derive equations analogous to the Buckley-Leverett equations.
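The Buckley-Leverett description referred to above rests on a fractional-flow function built from the relative permeabilities of the two fluids at each saturation. A minimal sketch of such a function, assuming Corey-type relative permeability curves (the endpoints, exponents and viscosities below are hypothetical illustration values, not data from this report):

```python
import numpy as np

def fractional_flow(sw, mu_w=1.0, mu_o=5.0,
                    swc=0.2, sor=0.2, krw0=0.3, kro0=0.8, nw=2, no=2):
    """Water fractional flow f_w = lambda_w / (lambda_w + lambda_o),
    using Corey-form relative permeabilities (an assumed form)."""
    s = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalized saturation
    lam_w = krw0 * s ** nw / mu_w           # water mobility
    lam_o = kro0 * (1.0 - s) ** no / mu_o   # oil mobility
    return lam_w / (lam_w + lam_o)

sw = np.linspace(0.2, 0.8, 61)
fw = fractional_flow(sw)
# f_w rises monotonically from 0 at connate water to 1 at residual oil
print(round(fw[0], 3), round(fw[-1], 3))
```

The one-dimensional Buckley-Leverett advance follows from the derivative of this curve; the Dietz treatment replaces the simultaneous-flow picture with a tilted interface, which is why the two methods apply in different flow-rate regimes.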
In contrast to the Buckley-Leverett method, flow is considered in a plane rather than being limited to a line. Rather than a frontal advance, the movement of a fluid interface is followed. For flow rates substantially exceeding the critical rate and for high viscosity ratio, many fingers of invading fluid occur, rather than the single finger assumed by Dietz. On the other hand, so long as some gravitational influence remains, the flow is not entirely parallel to the bedding planes as assumed by Buckley and Leverett; therefore, both methods fail to give an adequate description.
-
Institute of Metals Division - The Use of Controlled Solidification in Equilibrium-Diagram Studies, by W. A. Tiller
The conventional techniques1 for determining the liquidus and solidus surfaces of an alloy system containing more than two components are extremely tedious to use and do not provide a complete picture of the equilibrium relations between solid and liquid alloys. These techniques are unable to yield "tie-line" information concerning solid and liquid phase equilibrium, a very important parameter in the solidification description of the liquid alloy and a very necessary parameter in the preparation of "zone-levelled" alloy crystals. The tie-line in a polycomponent system is analogous to the partition coefficient, k, in a binary system: it gives the composition of the solid, C_S, in equilibrium with a liquid of composition C_L. That is, C_S = kC_L, where C_L = [C_L^0, C_L^1, ..., C_L^n] denotes the concentrations of the n + 1 constituents, and k = [k^0, k^1, ..., k^n] denotes the gross partition coefficients for the elements between the two phases; thus, k is the tie-line in this system. To provide complete information concerning the two-phase equilibrium in a polycomponent system it is necessary to know both the liquidus surface and the gross partition coefficient. From these two the solidus surface is obtainable. The conventional techniques are unable to provide this information in other than a binary system, so we must look elsewhere. In recent years considerable insight has been gained into the correct description of the liquid-solid transformation,2,3 and controlled solidification experiments may now be designed to both facilitate and enhance equilibrium-diagram studies. In the present paper consideration is given to two methods for obtaining the liquidus surface and the tie-lines in polycomponent systems. The methods to be described below deal with the solidification of an n + 1 constituent liquid alloy of initial composition [C_0^0, C_0^1, ..., C_0^n], where the superscripts refer to the elements present and the 0th element is considered as the solvent in which the n solutes are dissolved.
The general assumptions made in the treatment are the following: (i) the solid and the liquid at their interface are in equilibrium during the growth of the solid phase, (ii) there is no diffusion in the solid phase, and (iii) the liquid phase is completely mixed and is therefore homogeneous in concentration.

METHOD I

In this section a general experimental method will be described which, in principle, is capable of giving an exact description of the liquidus surface and tie-lines. Consider the unidirectional solidification of liquid alloy specimens L cm in length (freezing either horizontally or vertically). Allow the sample to be frozen very slowly from one end, as indicated in Fig. 1, with complete mixing in the liquid, and then analyze the solid bar to determine its chemical constitution as a function of position, x, along the bar. Let Fig. 2 represent a possible distribution of the ith constituent along the bar. At the point x' the concentration of the ith constituent is C_S^i(x'), and the average concentration of the rest of the bar between x' and L is given by

C̄_S^i(x'−L) = [∫ from x' to L of A(x) C_S^i(x) dx] / [∫ from x' to L of A(x) dx]

where A(x) is the cross-sectional area at x. In a similar manner, all the C̄_S^i(x'−L) may be determined. Thus, the gross partition coefficient k for liquid of composition [C̄_S^0(x'−L), ..., C̄_S^n(x'−L)] is given by

k^i = C_S^i(x') / C̄_S^i(x'−L),   i = 0, 1, ..., n

During the freezing of a charge of length L, the liquid composition may vary over a wide segment of the phase diagram, and the gross partition coefficient over this segment of the phase diagram may be determined from one bar. Fig. 3 illustrates the magnitude of C_S(g)/C_0 as a function of the fraction g of the bar which has solidified.2 We can see that the concentration in the bar will vary over a range of about 15 wt pct for C_0 = 10 wt pct and k_0 = 0.5. It appears that 4 or 5 specimens would be adequate to study a simple binary eutectic phase diagram.
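Method I can be checked numerically for a binary alloy. Under the stated assumptions (interface equilibrium, no solid diffusion, complete liquid mixing), a uniform-section bar freezes with the Scheil profile C_S(g) = k C_0 (1 − g)^(k−1), and the ratio of the local solid concentration at g' to the average concentration of the remaining bar should recover k, since the remaining liquid ultimately freezes into that remaining length. The sketch below assumes constant cross-section A(x) and uses the example values C_0 = 10 wt pct, k_0 = 0.5 quoted above:

```python
import numpy as np

def recover_k(g_prime, k_true=0.5, c0=10.0):
    """Recover the partition coefficient from a Scheil concentration profile.

    C_S(g) = k*C0*(1-g)**(k-1); the estimate is C_S(g') divided by the
    average of C_S over (g', 1), as in the paper's integral with A(x) uniform.
    """
    # geometric grid in (1 - g) to resolve the mild singularity at g -> 1
    one_minus_g = np.geomspace(1e-8, 1.0 - g_prime, 20001)
    g = 1.0 - one_minus_g[::-1]
    cs = k_true * c0 * (1.0 - g) ** (k_true - 1.0)
    integral = np.sum(0.5 * (cs[1:] + cs[:-1]) * np.diff(g))  # trapezoid rule
    cs_mean = integral / (1.0 - g_prime)   # average over the remaining bar
    cs_local = k_true * c0 * (1.0 - g_prime) ** (k_true - 1.0)
    return cs_local / cs_mean

print(round(recover_k(0.5), 3))  # close to the true k of 0.5
```

The same division, repeated constituent by constituent, is exactly the tie-line determination the text describes for the polycomponent case.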
Jan 1, 1960
-
Iron and Steel Division - Ingot Cracks in Killed, Fine-Grained C1020 Steel, by J. A. Pusateri, M. A. Orehoski, N. R. Arant
Plant studies on commercial-size ingots and laboratory experiments with induction furnace heats have demonstrated that the major source of ingot cracks is associated with two conditions arising during top-pouring practice: 1—solidification during pouring, and 2—turbulence created by the impact of the stream. Methods of controlling the two factors were effective in eliminating or significantly reducing ingot cracks.

BECAUSE the process of removing surface defects from hot-rolled product is so costly, the steel industry is striving constantly to develop methods of preventing, or at least decreasing, the occurrence of surface defects. Investigations have revealed steel-making and processing variables related to major surface defects, and controlling these variables has led to improvements in surface quality. However, the fundamental causes of major surface defects, such as ingot cracks, have not been determined; consequently such defects persist. At the Research and Development Laboratory of the United States Steel Corp. in Pittsburgh, a seam research program was initiated to determine the fundamental causes of certain major defects. As a part of the program, ingot cracks in killed, fine-grained C1020 steel were selected for study. A cost survey indicated that, of the steels produced in sizable tonnages, carbon steels in the range of 0.18 to 0.23 pct C content require the most conditioning. Since this is particularly true of C1020, any improvement of surface quality that might be effected from the study would be beneficial. Also, since the frequency of ingot cracks is exceedingly high for this grade, the steel would provide an excellent opportunity for a thorough study of the ingot-cracking problem. Why steels in this carbon range tend to exhibit more ingot cracks than do other steels was not considered in the investigation; the mechanism of ingot-crack formation was of paramount importance.
Also, only the top-pouring practice was considered in the investigation, because this pouring procedure is used more extensively than others. In this study, ingot cracks are defined as deep surface defects that sometimes are observed in an ingot prior to rolling but usually are observed during the initial stages of rolling on the primary mills. These defects may occur at any angle to the rolling direction but are most prominent in the transverse or nearly transverse direction. The appearance of ingot cracks on the rolled product varies with respect to their angle of formation and with the extent of rolling after their first occurrence. Ingot cracks are also termed "deep seams," "arrowheads," "irregular cracks," and "transverse cracks." Fig. 1 shows ingot cracks in a rolled bloom. This report summarizes: 1—the exploratory investigation of commercial-size ingots; 2—the laboratory investigations related to determining the source of ingot cracks and developing corrective methods; and 3—the plant evaluations of laboratory methods of decreasing ingot cracks.

Exploration of Ingot Cracks in Commercial Ingots

Materials and Experimental Work: At the Duquesne Works of the United States Steel Corp., five heats of killed, fine-grained C1020 steel were selected for the exploratory phase of the seam-research program. The heats were made by open-hearth practices that are considered most conducive to good surface quality. They were top-poured into 22x25-in. big-end-up hot-topped molds. One as-cast ingot and four other ingots rolled to the following cross-sectional sizes were set aside from each heat: 16x20 in.; 9 1/2x10 in.; 5x5 in.; and 2x2 in. The processing of the heats, from the time of charging in the open-hearth furnaces to the end of the rolling operation on the primary mill, was observed carefully and recorded. The cast ingot and the four rolled ingots from each heat were examined for surface defects.
The cast ingots were split longitudinally near the vertical center plane to expose the ingot structure beneath badly cracked regions. Also, a series of transverse sections was cut at about 1 in. intervals in a badly cracked area of one ingot. All sections were ground, polished, examined by sulphur printing and deep etching, and photographed. The rolled ingots
Jan 1, 1955
-
Reservoir Rock Characteristics - Large-Scale Laboratory Investigation of Sand Consolidation Techniques, by W. F. Hower, W. Brown
Large-scale sand consolidation tests were conducted in an effort to determine the reasons for the successes and failures of this method of sand control. Several different consolidating materials were used in treating both clean and bentonitic sands that were packed in a chamber having a capacity of 3.3 cu ft. The results were essentially the same for all of the different consolidating materials. The data show that low-viscosity consolidating materials pumped at a relatively slow rate gave the best results. Where the formation has produced sand, the treating fluids can compress the formation, thus permitting the channeling of fluids to another horizon. Pressure-packing these zones before attempting to consolidate is recommended. Sands containing more than 4 per cent of water-swelling clays are not good candidates for consolidation. It is indicated that loose sand, particularly when it is bentonitic, can be fractured during the placement of the treating fluids.

INTRODUCTION

Sand production in oil and gas wells has plagued the industry for many years, and numerous cures for this problem have been suggested. Most methods have been successful to a certain degree, but the great variety of well conditions that exist in the different areas has magnified the problem and limited the successful use of the various systems. Four review papers1-4 present a wealth of information concerning the degrees of success that have been obtained by the different sand-control methods. The bridging of sand grains by the use of gravel packs and screens has been quite successful. However, these methods do not leave the casing clear for all types of multiple completions, and the cure does not last for the production life of the well in some instances. The control of loose sands by sand consolidation with resins has never been as successful as desired. It has always been hoped that such a treatment would eliminate all sand problems for the life of the well, but
initial applications, starting in the middle 1940's, were only moderately successful. Lott, et al.,3 reported a success ratio of approximately 50 per cent and made the following conclusions. The highest percentage of successes was obtained where:

a. Consolidation of a zone was made at the time of initial completion or prior to the production of sand.
b. The interval treated was less than 12 ft in length.
c. Between 30 and 50 gal plastic/ft of producing interval was displaced through the perforations.

REASONS FOR SAND CONSOLIDATION FAILURES

Our own experiences in the field of sand consolidation point toward the following conditions as the major reasons for the failure of sand consolidation attempts:

1. Mud-plugged perforations and mud invasion of the formation.
2. Sand in the casing covering all or part of the perforations. This sand could be either formation sand or one of the coarser sands used as propping agents in hydraulic fracturing.
3. Holes in the casing.
4. Channels behind the casing.
5. Attempting to treat too long a perforated section.
6. Too high a percentage of water-swelling clays in the formation.
7. Formations that have produced sand.

Recent attempts were made to treat perforated sections ranging from 10 to 30 ft, in wells that have produced sand, by using a straddle packer that was raised and lowered through the perforations as the consolidating material was being pumped. In most instances, the pressure required to pump fluid into the formation varied considerably as the tool was raised and lowered. This suggested the possibility that significant differences in permeability were present or that only part of the formation had produced sand. There were times when a sudden break in pressure indicated that a fracture was being formed. Research conducted several years ago concerning the problem of the control of water in air and gas drilling indicated that shale sections could be fractured quite easily.
In addition, it was determined that it was easier to pump fluids into shale bodies by fracturing the shale itself, or the interface between the shale and sand, than to pump into a fluid-saturated formation. Formations that produce sand are usually adjacent to shale bodies and frequently have shale streaks of various thicknesses interbedded in the sand. Therefore, where shale is exposed to fluid pressure it
-
Reservoir Engineering-General - Performance Predictions for Low Productivity Reservoirs, by G. W. Tracy, R. D. Carter
Numerical calculations were made to determine the behavior of reservoirs with high-pressure drawdown and wide well spacing, where the initial productivity is low and the wells are completed by hydraulic fracturing. The two-phase flow equations were solved for the flow into a single well. This well was assumed to be producing from a reservoir with hydraulically created horizontal fractures (four different systems with fractures were studied). For comparison purposes, additional two-phase flow calculations were made assuming a reservoir with uniform rock properties. The two-phase flow results were also compared with the conventional calculation methods, which do not include the effect of the saturation gradients, resulting from simultaneous flow of oil and gas, that are normal in this type of reservoir. It was found that the conventional methods predict (1) a high and too optimistic value of ultimate recovery, (2) a high producing rate and a high reservoir pressure at a given oil recovery and (3) a low trend of gas-oil ratio with oil recovery. Included in the two-phase flow calculations were provisions to control the oil production rate by an allowable rate and, also, by a gas-oil ratio penalty rule. For the systems with hydraulic fractures, the producing rate was controlled by the gas-oil ratio penalty rule for most of the life. This is in contrast to the system with uniform rock properties, which went "on decline" almost immediately. An unexpected characteristic of the systems which included fractures was the early rise in producing gas-oil ratio from 730 cu ft/bbl to approximately 1,200 cu ft/bbl, followed by a "leveling off" before the normally expected gas-oil ratio rise began. Additional features which are a result of hydraulic fracturing are (1) greater ultimate recovery, (2) higher average producing rates and (3) a lower average reservoir pressure at a given oil recovery.
INTRODUCTION

Some oil fields discovered during the past few years are producing from certain volumetrically controlled reservoirs (often referred to as solution or internal gas-drive reservoirs) which are characterized by high-pressure drawdown at the wells. Since the available pressure drawdown at a well is limited by the static reservoir pressure and the producing rate is controlled by the available drawdown, wells completed in this type of reservoir usually produce at a rate less than the allowable from the time of completion. Because of this, this type of reservoir is referred to as a low productivity reservoir. Economic considerations require the use of wide well spacing and well stimulation by hydraulic fracturing to make commercial wells in this type of reservoir. Performance predictions for volumetrically controlled reservoirs have been made using a combination of two standard equations:

1. The "Schilthuis" or "Muskat" type material balance equation is used to relate the average reservoir pressure and the cumulative oil recovery.
2. The results from the material balance equation and the productivity factor as described by Pirson1 are used to relate the cumulative recovery with producing rate and time.

The material balance equation assumes uniform pressure and liquid saturation conditions throughout a reservoir. The steady-state radial flow formula allows for a pressure gradient toward a well but assumes uniform liquid saturation. These calculation methods are adequate for application to reservoirs wherein the drawdown at the well needed to realize satisfactory producing rates is small compared to the total pressure. In low productivity, volumetrically controlled reservoirs, the pressure drawdown at the well is large compared to the total pressure. Although a precise number cannot be given for the magnitude of a large pressure drawdown, values in excess of 1,000 psi would definitely be included.
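The two-step procedure described above, a material balance relating average pressure to cumulative recovery followed by a productivity relation converting recovery into rate and time, can be sketched as a simple time march. The linear pressure-recovery slope, productivity index, and well numbers below are hypothetical placeholders standing in for the actual Schilthuis/Muskat balance and Pirson productivity factor:

```python
def predict_performance(p_init=3000.0, p_wf=500.0, J=0.05,
                        dpdnp=0.02, n_steps=200, dt=30.0):
    """March cumulative recovery forward with q = J * (p_avg - p_wf).

    dpdnp stands in for the material-balance slope dp/dNp (psi per STB);
    all values are illustrative, not taken from the paper.
    """
    np_cum, history = 0.0, []
    for _ in range(n_steps):
        p_avg = p_init - dpdnp * np_cum    # stand-in material balance
        q = max(J * (p_avg - p_wf), 0.0)   # productivity relation, STB/D
        history.append((p_avg, q))
        np_cum += q * dt                   # produce for dt days
    return np_cum, history

np_cum, hist = predict_performance()
rates = [q for _, q in hist]
# rate declines monotonically as the average pressure is drawn down
print(rates[0] > rates[-1])
```

This is exactly the regime the authors criticize: the tank model carries one uniform pressure and saturation, so it cannot represent the near-well gas-saturation gradient that the two-phase flow calculations capture.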
For practical considerations, this usually occurs when the formation flow capacity is less than about 100 md-ft. However, this limit of formation flow capacity will vary with the well producing rate. The low pressure in the neighborhood of the well which results from a high drawdown causes evolution of large volumes of gas. This causes the gas saturation to be higher near the well than at a greater distance; hence, a non-uniform gas saturation. Also, the relationship between the relative permeability to oil (K_o/K) and gas saturation is nonlinear: the relative permeability decreases approximately exponentially with increasing gas saturation. Because of this, the following chain reaction is established.