
Today, the phase diagrams published by the NIST/ACerS program are proving to be vital to the development of emerging technologies. For example, carefully determined phase stability regions are essential for improving the manufacture of bulk superconducting wires and tapes. Advanced ceramics with high dielectric constant, low dielectric loss, and reliable temperature stability are needed to improve the performance and lower the cost of components for cellular communications circuits. To meet these and other needs, current work includes studies of materials used in a variety of areas, such as high temperature superconductors, wireless communications, electronic packaging, fuel cells, and sensors.

Prepared by Ronald Munro, Howard McMurdie, Helen Ondik, and Terrell Vanderah.


Determination of Reduced Cells in Crystallography

In theory, physical crystals can be represented by idealized mathematical lattices. Under appropriate conditions, these representations can be used for a variety of purposes, such as identifying, classifying, and understanding the physical properties of materials. Critical to these applications is the ability to construct a unique representation of the lattice. The vital link that enabled this theory to be realized in practice was provided by the 1970 paper of A. Santoro and A. D. Mighell, Determination of Reduced Cells [1]. This seminal paper led to a mathematical approach to lattice analysis initially based on a systematic reduction procedure and the use of standard cells. Subsequently, the process evolved to a matrix approach based on group theory and linear algebra, which offered a more abstract and powerful way to look at lattices and their properties.

In the early 1960s, the Crystal Data Center at NBS started to build a database with chemical and crystallographic information on all classes of materials, including inorganics, organics, minerals, and metals. An immediate challenge was to organize the information in a systematic manner so that database users could readily determine material relationships. For example, one might wish to identify an unknown material by comparing its structure with structures already in the database. But this is not as simple as it might seem. The following anecdote illustrates the nature of the problem.

On an archaeological expedition, two colleagues were analyzing patterned designs on the walls of ancient buildings. They noted that by simply translating a small piece, or unit, of the design, one could create the entire pattern. Each researcher independently searched the archaeological site, selected a favorite design, and drew a repeat unit (a unit cell) on a notepad to take home. Later, at their hotel, the colleagues carefully compared their repeat units. Finding them to be quite different in appearance, they concluded that their repeat units defined different wall patterns. Upon returning home, they used the repeat units to recreate the wall patterns on their computers. To their surprise, the two wall designs were identical.

What happened? Why were they tricked into thinking that they had two patterns when in reality their unit cells described only one? The answer is illustrated by the simple example of a 2-dimensional lattice in Fig. 1. Since there are no terminal edges in the idealized lattice, the entire array of dots can be generated by translating the small rhombus of dots labeled A, B, C, D. Here, translating means moving the unit cell right, left, up, and down, where "right and left" are defined as movements parallel to the direction AD, while "up and down" are defined by the direction AB. However, notice that the skewed parallelogram labeled E, F, G, H can be translated to generate exactly the same array of dots. In this case, "right and left" are defined as movements parallel to the direction EH, while "up and down" are defined by the direction EF. At first glance, these two unit cells neither look the same nor translate in the same way, yet the readily apparent differences between a rhombus and a skewed parallelogram cannot be observed in the infinite lattices that they generate.

[Fig. 1. A two-dimensional lattice of dots with two alternative unit cells, A B C D and E F G H.]
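To make the anecdote concrete, the short sketch below builds a patch of a 2-dimensional lattice from two differently shaped unit cells and confirms that they generate the same array of dots. The cell vectors and the 60° rhombus are illustrative choices, not the actual cells of Fig. 1.

```python
import numpy as np

# Illustrative 2-D cell vectors (rows); not the actual cells of Fig. 1.
rhombus = np.array([[1.0, 0.0],          # vector "AD"
                    [0.5, 0.8660254]])   # vector "AB" (a 60 degree rhombus)

# A skewed parallelogram related to the rhombus by an integer change of basis:
# EH = AD + AB, EF = AB.
skewed = np.array([[1.5, 0.8660254],
                   [0.5, 0.8660254]])

def lattice_points(basis, n=8):
    """Translate the cell n times in each direction and collect the points."""
    return {(round(x, 6), round(y, 6))
            for i in range(-n, n + 1)
            for j in range(-n, n + 1)
            for x, y in [i * basis[0] + j * basis[1]]}

def central(points, r=3.0):
    """Keep only the central region, where both patches are fully populated."""
    return {(x, y) for x, y in points if abs(x) <= r and abs(y) <= r}

# The two cells look different, but the dot arrays they generate coincide.
assert central(lattice_points(rhombus)) == central(lattice_points(skewed))
print("Both unit cells generate the same lattice of points.")
```

The skewed cell reaches a given lattice point with different integer translations than the rhombus does, which is why the finite patches are trimmed to their common central region before the comparison.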

Exactly the same problem exists in describing the idealized lattice of a physical crystal. In this case, instead of a 2-dimensional planar wall pattern designed by man, we have a 3-dimensional crystal (e.g., a mineral such as an emerald, a ruby, or a diamond) designed by nature. Like the wall pattern, which can be created by translating a 2-dimensional unit cell (a parallelogram), a crystal lattice can be created using a 3-dimensional building block (a parallelepiped), as illustrated in Fig. 2.

[Fig. 2. A crystal lattice that can be generated by either a rhombohedral unit cell or a hexagonal unit cell.]

This figure shows the case of a crystal with a rhombohedral unit cell (parameters a = 10.0 Å, α = 55.0°) and the equally correct alternative hexagonal cell (a = 9.235 Å, c = 25.38 Å). Either cell, when translated, will generate the same crystal lattice.

More generally, the parallelepiped of the unit cell may be defined abstractly by three noncoplanar vectors a, b, c. However, to achieve the full utility of theory and practice, everyone must end up with the same a, b, c, even though alternative parallelepipeds might be constructed with equal validity. The problem is that the equivalence of alternative parallelepipeds is not easily established, so computerized search routines could not readily recognize two alternatives as describing the same lattice. Because there are infinitely many alternative cells, Santoro and Mighell chose to pursue the development of a procedure that would arrive at a unique representation.
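The equivalence of two candidate cells can nevertheless be made computable: two bases describe the same lattice exactly when the matrix converting one basis into the other has integer entries and determinant ±1. The sketch below applies this test to hypothetical 3-dimensional cells; it illustrates the underlying algebra only and is not the reduction procedure of the 1970 paper.

```python
import numpy as np

def same_lattice(cell1, cell2, tol=1e-6):
    """Return True if two 3x3 basis matrices (rows = cell vectors a, b, c)
    generate the same lattice: the change-of-basis matrix must be an
    integer matrix with determinant +1 or -1."""
    m = cell2 @ np.linalg.inv(cell1)                # change-of-basis matrix
    is_integral = np.allclose(m, np.round(m), atol=tol)
    det_is_unit = np.isclose(abs(np.linalg.det(np.round(m))), 1.0, atol=tol)
    return bool(is_integral and det_is_unit)

# Hypothetical primitive cell (rows are the vectors a, b, c, in angstroms).
cell_a = np.array([[4.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0],
                   [1.0, 2.0, 6.0]])

# An alternative cell of the same lattice: a' = a + b, b' = b, c' = c.
cell_b = np.array([[4.0, 5.0, 0.0],
                   [0.0, 5.0, 0.0],
                   [1.0, 2.0, 6.0]])

# A cell doubled along a: its change-of-basis determinant is 2, so it
# generates only a sublattice, not the same lattice.
cell_c = np.array([[8.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0],
                   [1.0, 2.0, 6.0]])

print(same_lattice(cell_a, cell_b))   # True  - same lattice, different shape
print(same_lattice(cell_a, cell_c))   # False - only a sublattice
```

Such a pairwise test works when both cells are explicitly in hand; it does not by itself give the unique representative needed for searching a large file, which is precisely what the reduced cell provides.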

The first step toward a unique representation was to recognize that any lattice could be defined on the basis of a cell with the smallest possible volume (known as a primitive cell). But for a given lattice, which of the many possible primitive cells should be selected? Some four decades earlier, Niggli [2] had considered this aspect of cell definition and had defined what was termed a reduced cell, which turned out to be a unique cell. What remained to be established was the mathematical theory and associated algorithms for calculating the reduced cell starting from any cell of the lattice. It was this practical realization of reduced cells that was achieved in the 1970 paper of Santoro and Mighell.

For a cell to be reduced, two sets of conditions, termed the main and special conditions for reduction, must be satisfied. The main conditions ensure that one has selected a cell based on the three shortest vectors of the lattice. The special conditions ensure that one has determined a unique cell for those cases in which the lattice has more than one cell with the same three shortest vectors. Based on the theory in the 1970 paper, algorithms were designed and software written that could be applied universally to any cell ever published or determined in the laboratory.
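As a rough illustration of the main conditions only, the sketch below brute-forces the three shortest, non-coplanar lattice vectors reachable as small integer combinations of a starting cell (a Buerger-type cell). The special conditions that resolve ties and single out the unique reduced cell are deliberately omitted, so this is a teaching sketch under stated assumptions rather than the published algorithm.

```python
import numpy as np
from itertools import product

def shortest_vector_cell(cell, span=2):
    """Find the three shortest, linearly independent lattice vectors that are
    small integer combinations of the starting cell (rows = a, b, c).
    Only the 'main conditions' are enforced; the 'special conditions' that
    make the reduced cell unique are not applied here."""
    candidates = []
    for h, k, l in product(range(-span, span + 1), repeat=3):
        if (h, k, l) == (0, 0, 0):
            continue
        candidates.append(h * cell[0] + k * cell[1] + l * cell[2])
    candidates.sort(key=np.linalg.norm)

    chosen = []
    for v in candidates:
        trial = chosen + [v]
        # Keep the vector only if it is linearly independent of those chosen.
        if np.linalg.matrix_rank(np.array(trial), tol=1e-8) == len(trial):
            chosen.append(v)
        if len(chosen) == 3:
            break
    return np.array(chosen)

# A deliberately awkward starting cell for a simple lattice.
start = np.array([[4.0, 0.0, 0.0],
                  [4.0, 5.0, 0.0],
                  [8.0, 5.0, 3.0]])

short = shortest_vector_cell(start)
print(np.round(np.linalg.norm(short, axis=1), 3))   # lengths of the new a, b, c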

Application of the reduced cell to both the database work and the laboratory research at NIST was immediately successful. For example, reduction played a central role in the determination of the crystal structure of benzene II, a high-pressure polymorph of benzene that requires a pressure of about 1.2 GPa to stabilize. Using the NIST high-pressure diamond anvil cell (DAC), which is described elsewhere in this volume, benzene II became the first crystal structure determined in situ by high-pressure single crystal x-ray diffraction techniques [3]. Because of the narrow aperture of the DAC, the accessible diffraction data were highly restricted. There was, however, a sufficient amount of data to determine the reduced cell for the specimen and to establish unambiguously that the crystal system was monoclinic. Other techniques, which required more extensive data, would have failed in this task.

In routine structure determination work, reduction became a practical tool for analyzing difficult cases in which traditional visual methods often failed. This was especially true for a rhombohedral crystal, where it is hard to find the right orientation to see the 3-fold symmetry. As a result, the structure determination for such a crystal was difficult and often incorrect. So extreme was the frustration level that it was said, in jest, that the best thing that could happen would be for the crystal to fall off the instrument and disappear in a crack in the floor. Reduction procedures, however, instantly resolved the difficulty, and the resulting highly characteristic reduced cell and form [4] immediately led the experimentalists to the correct answer.

Additional successes derived directly from the uniqueness property of the reduced cell, because it leads directly to a general method for materials characterization. By classifying all materials using the reduced cell, one obtains the basis for a powerful method for compound identification [5,7]. In this scheme, a unit cell of an unknown is transformed to the reduced cell, which is then matched against the file of known materials represented by their respective reduced cells.

Combining the reduced cell match with an element type match further enhances the selectivity. In practice, cell matching has proved an extremely practical and reliable technique to identify materials. Today this identification strategy is widely used, as it has been integrated into commercial x-ray diffractometers [8].
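The identification step can be pictured as a lookup: the unknown's cell is reduced, and its reduced-cell parameters, together with its element types, are compared with stored entries within experimental tolerances. The sketch below uses a made-up two-entry file and illustrative tolerances; the names, numbers, and tolerances are hypothetical, and the code is not the NIST Crystal Data software.

```python
# Hypothetical reduced-cell "database": name -> ((a, b, c in angstroms,
# alpha, beta, gamma in degrees), set of element types). Values are made up.
KNOWN = {
    "compound X": ((5.431, 5.431, 5.431, 60.0, 60.0, 60.0), {"Si"}),
    "compound Y": ((4.593, 5.452, 5.127, 90.0, 99.2, 90.0), {"Ti", "O"}),
}

def match(unknown_cell, unknown_elements, len_tol=0.02, ang_tol=0.3):
    """Return the names of entries whose reduced-cell edges agree within
    len_tol (angstroms), angles within ang_tol (degrees), and whose element
    types match those of the unknown."""
    hits = []
    for name, (cell, elements) in KNOWN.items():
        edges_ok = all(abs(u - k) <= len_tol
                       for u, k in zip(unknown_cell[:3], cell[:3]))
        angles_ok = all(abs(u - k) <= ang_tol
                        for u, k in zip(unknown_cell[3:], cell[3:]))
        if edges_ok and angles_ok and unknown_elements == elements:
            hits.append(name)
    return hits

# A measured reduced cell for an "unknown" specimen (illustrative numbers).
print(match((5.43, 5.44, 5.43, 60.1, 59.9, 60.0), {"Si"}))   # ['compound X']
```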

Due somewhat to serendipity, the most significant and lasting value of this work is probably not reduction itself. Rather, reduction has played a key transition role in helping to move the rather conservative discipline of crystallography in new directions with new insights. The research on reduction proved that there are excellent reasons for looking at the crystal lattice from an entirely different point of view. Consequently, with time, many other lattice-related papers followed, including papers on sublattices and superlattices, composite lattices, and coincidence site lattices. At NIST, the mathematical analysis of lattices was pursued further and evolved to a matrix approach that offered a more abstract and powerful way to look at lattices and their properties.

The matrix approach, in particular, has many applications, including, for example, symmetry determination [6,7]. In sharp contrast to other methods that focus on the consequences of symmetry (such as dot products, d-spacings, etc.), the matrix approach deals with symmetry in its most abstract form, represented as matrices. The basis of the matrix approach is to generate the matrices that transform the lattice into itself. The resulting set of matrices comprises a mathematical group obeying the formal relations of group theory. These matrices may be used both theoretically and practically to analyze symmetry from any cell of the lattice. In this formulation, the mathematics and algorithms used to analyze symmetry become extremely simple, since they are based on manipulating integers and simple rational numbers using elementary linear algebra. The matrix approach, therefore, provides both the conceptual and practical framework required to perform the experimental procedures in a logical and general manner.
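The idea can be sketched as follows: a basis transformation M maps the lattice onto itself and preserves all lengths and angles precisely when M is an integer matrix with determinant ±1 that leaves the metric (Gram) matrix G unchanged, that is, M G Mᵀ = G. Enumerating such matrices over small integer entries (assumed here to be a sufficient search range, as it is for a reduced cell) yields a finite group whose order reflects the lattice symmetry. The code below is an illustration of the concept, not the algorithms of [6,7].

```python
import numpy as np
from itertools import product

def lattice_symmetry_matrices(cell, tol=1e-6):
    """Enumerate integer matrices M with entries in {-1, 0, 1} and det = +/-1
    that leave the metric (Gram) matrix unchanged: M G M^T = G.
    For a reduced cell this small search range is assumed to suffice."""
    G = cell @ cell.T                      # metric tensor of the cell
    group = []
    for entries in product((-1, 0, 1), repeat=9):
        M = np.array(entries, dtype=float).reshape(3, 3)
        if abs(abs(np.linalg.det(M)) - 1.0) > tol:
            continue
        if np.allclose(M @ G @ M.T, G, atol=tol):
            group.append(M.astype(int))
    return group

# A simple cubic cell: all 48 lattice symmetry matrices should appear.
cubic = np.eye(3) * 4.0
# A general triclinic cell: only the identity and the inversion survive.
triclinic = np.array([[4.1, 0.0, 0.0],
                      [1.3, 5.2, 0.0],
                      [0.7, 2.1, 6.3]])

print(len(lattice_symmetry_matrices(cubic)))      # 48
print(len(lattice_symmetry_matrices(triclinic)))  # 2
```

The count itself signals the symmetry: a general triclinic lattice yields only the identity and the inversion, while a cubic lattice yields all 48 matrices.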

The ability to determine a unique reduced cell and the subsequent achievements in lattice analysis, especially the matrix approach, have been critical milestones in crystallography. They established an important mathematical rigor in crystallography and in the materials sciences and have stimulated many practical applications.

Antonio Santoro is a current member of the NIST Center for Neutron Research. His primary interests are the determination of crystal structures from neutron powder diffraction data and the application of the bond valence method to structural distortions.

Alan Mighell retired from NIST in 1998, after leading the Crystal Data Center for many years. Currently, he is a guest research scientist in the Materials Science and Engineering Laboratory at NIST. His principal research interests include the design and development of procedures for materials identification and for establishing lattice relationships.

Prepared by Alan D. Mighell, Vicky Lynn Karen, and Ronald Munro.

Bibliography

[1] A. Santoro and A. D. Mighell, Determination of Reduced Cells, Acta Crystallogr. A26, 124-127 (1970).

[2] P. Niggli, Krystallographische und Strukturtheoretische Grundbegriffe, Handbuch der Experimentalphysik, Vol.7, part 1, Akademische Verlagsgesellschaft, Leipzig (1928).

[3] G. J. Piermarini, A. D. Mighell, C. E. Weir, and S. Block, Crystal Structure of Benzene II at 25 Kilobars, Science 165, 1250-1255 (1969).

[4] A. D. Mighell, A. Santoro, and J. D. H. Donnay, Reduced-cells section, International Tables for X-ray Crystallography, Vol. 1, 530-535 (1952).

[5] A. D. Mighell, The Reduced Cell: Its Use in the Identification of Crystalline Materials, J. Appl. Crystallogr. 9, 491-498 (1976).

[6] Vicky L. Himes and Alan D. Mighell, A Matrix Approach to Symmetry, Acta Crystallogr. A43, 375-384 (1987).

[7] Vicky L. Karen and Alan D. Mighell, Apparatus and Methods for Identifying and Comparing Lattice Structures and Determining Lattice Structure Symmetries, U.S. Patents 5,168,457 and 5,235,523 (1992, 1993).

[8] Susan K. Byram, Charles F. Campana, James Fait, and Robert A. Sparks, Using NIST Crystal Data Within Siemens Software for Four-Circle and SMART CCD Diffractometers, J. Res. Natl. Inst. Stand. Technol. 101, 295-300 (1996).

Speed of Light From Direct Frequency and Wavelength Measurements

The National Bureau of Standards has had a long history of interest in the speed of light, and no doubt this interest contributed to the measurement described here [1]. As early as 1907, Rosa and Dorsey [2] determined the speed of light from the ratio of the capacitance of a condenser as measured in electrostatic and electromagnetic units. Over the ensuing years NBS developed still other methods to improve upon the accuracy of this important physical constant.

By the late 1960s, lasers stabilized in frequency to atomic and molecular resonances were becoming reliable research tools. These could be viewed as providing a stable reference for either optical frequency or wavelength. This duality of frequency and length produced the obvious suggestion that a simultaneous measurement of frequency and wavelength for the same laser transition would yield a very good measurement of the speed of light. In fact, a 1958 measurement of the speed of light by Froome [3] was done by determining the frequency and wavelength of a microwave source at 72 GHz. The frequency measurement was fairly straightforward, since frequency in the microwave and lower ranges can be readily measured with great accuracy. The speed-of-light measurement was limited primarily by the difficulty in measuring the very long wavelength (about 0.4 cm) of the 72 GHz radiation. Clearly, a better measurement would result if higher frequencies could be employed, where wavelengths could be more accurately measured. The measurement technology of that era, however, was not up to the task. The wavelength of visible radiation could be measured fairly well, but no accurate methods for measuring visible frequencies were available. Whereas frequency could be measured quite well in the microwave to millimeter-wave region, wavelength measurements there were problematic.
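The logic of the frequency-times-wavelength route is elementary: if both quantities are measured for the same radiation, the speed of light is their product, c = fλ. The toy calculation below uses rounded, illustrative values for the 72 GHz case and for the 3.39 μm methane-stabilized laser discussed below; the published values and uncertainties differ.

```python
# Speed of light from frequency times wavelength, c = f * lambda.
# The numbers below are rounded, illustrative values, not the published ones.

def speed_of_light(frequency_hz, wavelength_m):
    return frequency_hz * wavelength_m

# Froome (1958): a 72 GHz microwave source with a roughly 4.16 mm wavelength.
print(speed_of_light(72.0e9, 4.164e-3))          # ~2.998e8 m/s

# Methane-stabilized He-Ne laser: roughly 88.376 THz at about 3.3922 micrometers,
# where the much shorter wavelength can be measured far more accurately.
print(speed_of_light(88.376e12, 3.39223e-6))     # ~2.9979e8 m/s
```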

The measurement of the speed of light by the Boulder group involved the development of a new method. The approach taken was to synthesize signals at progressively higher and higher frequency using harmonic-generation-and-mixing (heterodyne) methods and to lock the frequency of a nearby oscillator or laser to the frequency of this synthesized signal [4]. Photodiodes, as well as metal-insulator-metal diodes, fabricated by adjusting a finely tipped tungsten wire against a naturally oxidized nickel plate, were used for harmonic generation and mixing. With this approach, a frequency-synthesis chain was constructed linking the microwave output of the cesium frequency standard to the optical region, so that the group could directly measure the frequency of a helium-neon laser stabilized against the 3.39 μm transition of methane. When the measurements were completed, the uncertainty limitation was found to be the asymmetry of the krypton line on which the definition of the meter was then based. The experiment thus showed that the realization of the meter could be substantially improved through redefinition.

This careful measurement resulted in a reduction of the uncertainty of the speed of light by a factor of nearly 100. The methods developed at NIST were replicated in a number of other laboratories, and the experiments were repeated and improved to the point where it was generally agreed that this technology could form the basis for a new definition of the meter. An important remaining task was the accurate measurement of still higher (visible) frequencies, which could then serve as more practical realizations of the proposed new definition. The Boulder group again took the lead and provided the first direct measurement of the frequency of the 633 nm line of the iodine-stabilized helium-neon laser [4], as well as a measurement of the frequency of the 576 nm line in iodine [5]. These measurements, and similar measurements made at other laboratories around the world, were the last ingredients needed to take up the redefinition of the meter.

The new definition of the meter, accepted by the 17th Conférence Générale des Poids et Mesures in 1983, was quite simple and elegant: "The metre is the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second." A consequence of this definition is that the speed of light is now a defined constant, not to be measured again. NBS had played a key role in pioneering measurement methods that resulted in this redefinition and in the optical frequency measurements that contributed to practical realizations of the definition. In subsequent years, measurement of other stabilized-laser systems added to the ways in which the meter could be realized. This way of defining the meter has proven to be particularly robust, since unlike a definition based on a standard such as the krypton lamp, length measurement can be continuously improved without resorting to a new definition.
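Since 1983 the relationship runs in the opposite direction: with c fixed by definition at exactly 299 792 458 m/s, a length follows from a measured time of flight, and a vacuum wavelength follows from a measured frequency. A brief sketch, with illustrative input values:

```python
C = 299_792_458            # m/s, exact by the 1983 definition of the metre

# Length of the path traveled by light in vacuum in a measured time interval.
time_interval_s = 1.0e-9   # an illustrative 1 ns time-of-flight measurement
print(C * time_interval_s, "m")          # 0.299792458 m

# Equivalently, a vacuum wavelength follows from a measured frequency.
frequency_hz = 473.6e12    # roughly the 633 nm He-Ne line mentioned above
print(C / frequency_hz, "m")             # about 6.33e-7 m
```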
