Los Alamos National Laboratory

Delivering science and technology to protect our nation and promote world stability

Science Highlights, April 15, 2015

Awards and Recognition

George “Rusty” Gray chairs National Academy of Sciences panel

Rusty Gray (Materials Science in Radiation and Dynamics Extremes, MST-8) has been re-appointed to a second, two-year term as chair of the National Academy of Sciences Panel on Ballistics Science and Engineering at the U.S. Army Research Laboratory (ARL). The panel annually reviews the scientific and technical quality of ARL ballistics science and engineering research and development programs, providing input to the ARL Technical Assessment Board for its biennial assessment report on the overall quality of ARL scientific and engineering research. The 15-member panel includes leaders from academia, industry, and national laboratories.

Gray said that his National Academy panel position helps him keep pace with international and national developments in the technical areas of high-rate deformation, shock physics, equation of state, dynamic damage and fracture, high explosives and propellant science, sensors, and guidance and control, from both an experimental and modeling perspective.

He pursues fundamental and applied research primarily in the elucidation of the structure and property behavior of materials subjected to dynamic and shock-wave deformation. Gray has a PhD in metallurgical engineering from Carnegie Mellon University, and joined Los Alamos National Laboratory in 1985. He is a Fellow of Los Alamos National Laboratory, the American Physical Society, ASM International (formerly the American Society for Metals), and The Minerals, Metals, and Materials Society. Technical contact: Rusty Gray

Ludmil Alexandrov receives Harold M. Weintraub Award

Ludmil Alexandrov (Theoretical Biology and Biophysics, T-6), former PhD student at the Wellcome Trust Sanger Institute and the University of Cambridge, is one of 13 recipients of the Harold M. Weintraub Graduate Student Award. The Basic Sciences Division of the Fred Hutchinson Cancer Research Center in Seattle, WA, awards the prize annually to celebrate outstanding achievement during graduate studies in the biological sciences. Alexandrov, the only European recipient this year, will present his paper “Signatures of Mutational Processes in Human Cancer”, published in Nature, at the Center’s annual scientific symposium in May 2015.

The theoretical and computational framework that Alexandrov devised enabled him to create the first-ever catalogue of the mutational processes operating in human cancer. He uncovered more than 20 different signatures of processes that mutate DNA and cause cancer. For many of the signatures, the research team also identified the underlying biological process responsible. Professor Stratton, Director of the Wellcome Trust Sanger Institute, mentored Alexandrov as part of the Cancer Genome Project.

Reference: “Signatures of Mutational Processes in Human Cancer,” Nature 500, 415 (2013); doi:10.1038/nature12477.

The American Society of Clinical Oncology highlighted his paper as an important step forward in cancer research. Forbes Magazine selected him as one of the “30 brightest stars under the age of 30 in the field of Science and Healthcare.” The magazine’s list selects the entrepreneurial, creative and intellectual best of their generation.

As an Oppenheimer Fellow at LANL, Alexandrov collaborates with Professor Stratton on the cancer signatures project. They plan to analyze the mutational signatures of 20,000 patient samples to create a comprehensive map of mutational processes in cancer. Alexandrov also studies the molecular processes that cause aging. William Hlavacek and Thomas Leitner (T-6) co-mentor him. Technical contact: Ludmil Alexandrov

Ricardo Mejia-Alvarez honored as outstanding young alumni

The Department of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign recognized Ricardo Mejia-Alvarez (Neutron Science and Technology, P-23) with the Outstanding Young Alumni Award. He was one of nine recipients of the award. The Department states: “The inaugural award recognizes recent alumni who have excelled early in their career, distinguishing themselves in a professional or technical capacity. Recipients are nominated by department professors, staff, and alumni board members.”

Mejia-Alvarez joined Los Alamos as a postdoctoral research associate in 2010 after earning a PhD in theoretical and applied mechanics from the University of Illinois at Urbana-Champaign. In 2013, he became a staff scientist on the Extreme Fluids Team in P-23. He investigates shock-driven fluid instabilities, turbulent boundary layers, and free shear flows of non-Newtonian fluids. Mejia-Alvarez conducts experimental research of Richtmyer-Meshkov instabilities to understand the role of shock-driven turbulent mixing in phenomena that range from inertial confinement fusion to supernova explosions. His research is part of the High Energy Density Plasmas and Fluid (HEDPF) thrust of the Lab’s Nuclear and Particle Futures science pillar. Technical contact: Ricardo Mejia-Alvarez

Kathy Prestridge to serve on editorial advisory board of fluid dynamics journal

Kathy Prestridge, leader of the Extreme Fluids Team in Neutron Science and Technology (P-23), has accepted an invitation from Springer-Verlag to join the Editorial Advisory Board of the journal Experiments in Fluids. Her appointment begins with the May 2015 issue.

Experiments in Fluids is the primary journal in fluid dynamics for communicating advanced and novel flow measurement techniques that bring new physical insights to fluid flows. She will advise the editors in her areas of technical expertise, review and adjudicate manuscripts, and encourage journal submissions. Prestridge has developed a strong experimental fluids program in the areas of mixing and turbulence over the past decade. The appointment recognizes the influence and importance of her experimental work in fluid dynamics.

Prestridge applies cutting-edge diagnostics to flows that are high-speed and difficult to measure. New physical insights into shock-driven turbulent mixing and multiphase flows provide important data in support of the Laboratory’s Common Model for mixing. By performing experiments at the small, laboratory scale, her team makes important, novel measurements and performs parametric analyses of difficult mixing and hydrodynamic problems. Researchers incorporate these analyses into models for better predictions of complex flows. The NNSA Science Campaigns fund her research in support of predictive science, advanced strategic computing and modeling efforts.

Prestridge has received four NNSA Defense Programs Awards of Excellence. She won the Lab’s Postdoctoral Publication Prize in Experimental Science and the Star Award for contributions to the Laboratory and Community. Prestridge chairs the American Physical Society’s (APS) Committee on the Status of Women in Physics, and she is co-investigator on a National Science Foundation Grant for the APS’s Professional Skills Development Workshops. Prestridge has mentored 10 postdocs and 20 students during the past decade. Technical contact: Kathy Prestridge


Discovery of genotoxic mechanisms of PAH-induced immunotoxicity

Polycyclic aromatic hydrocarbons (PAHs) are a group of abundant environmental pollutants produced by the incomplete combustion of organic matter, such as coal, wood, tobacco, and charbroiled meat. Owing to the rapid increase in industrial development, they are ubiquitous in the atmosphere, soil, and water on a global scale. Exposure can produce harmful health effects that include an increased cancer risk and toxic effects on the immune system. Extensive studies have shown that PAH parent compounds do not produce toxicity themselves; they require metabolism to exert both carcinogenic and immunotoxic effects. The Environmental Protection Agency (EPA) has developed a strategy to estimate the risk of human exposure to PAH mixtures. To make accurate assessments of risk to human health, it is important to understand the potential metabolic pathways for the bioactivation of these PAHs. Jun Gao (Biosecurity and Public Health, B-10) and Professor Scott Burchiel (University of New Mexico) have studied these potential metabolism pathways and summarized their new discoveries in the chapter “Genotoxic Mechanisms of PAH-Induced Immunotoxicity” in the book Molecular Immunotoxicology.

Well-documented immunosuppressive PAHs include highly immunotoxic compounds, such as benzo[a]pyrene (BaP), 7,12-dimethylbenz[a]anthracene (DMBA), dibenzo[d,e,f,p]chrysene (DBC), and 3-methylcholanthrene (3-MC), as well as intermediately immunotoxic PAHs, such as dibenz[a,c]anthracene (DAC). Numerous mechanisms are associated with immunotoxicity, depending upon the individual PAH, the dose and route of exposure, and the target tissue examined.

PAH metabolites activate numerous aryl hydrocarbon receptor (AhR)-dependent and AhR-independent pathways that lead to suppression of both adaptive and innate immune responses.

Figure 1. Schematic mechanism of benzo[a]pyrene-induced toxicity by AhR (1) and cytochrome P450 (2).

PAHs can activate both AhR-dependent and AhR-independent immunosuppressive pathways, depending on their binding affinities for the AhR receptor. They bind to AhR in the cytoplasm, translocate into the nucleus, and ultimately activate the primary phase I enzymes, the cytochrome P450s. Some PAHs with low AhR affinity may be activated directly by cytochrome P450s, producing active metabolites that can bind to DNA, cause DNA damage, and trigger genotoxic stress. These metabolites can also disrupt the redox pathways that generate reactive oxygen species, destroy mitochondrial electron transfer mechanisms, and produce immune cell cytotoxicity. To maintain genomic stability, cells trigger complex signaling pathways following exposure to genotoxic stimuli. How these enzymes respond to PAH-induced DNA damage is not completely understood.

Gao and Burchiel are the first to report that ataxia telangiectasia mutated protein (ATM) and ataxia telangiectasia and rad3-related protein (ATR) play a significant role in PAH-induced immunotoxicity. ATM and ATR crosstalk with each other and function through the p53 effector in response to PAH-induced DNA damage. Molecular Immunotoxicology featured the significant discovery of this mechanistic pathway on the book’s cover.

Reference: “Genotoxic Mechanisms of PAH-Induced Immunotoxicity” in Molecular Immunotoxicology, (eds. Emanuela Corsini and Henk van Loveren), Wiley-VCH Verlag GmbH & Co. KGaA. Weinheim, Germany (2014). doi: 10.1002/9783527676965.ch12  

The National Institutes of Health funded the work, which supports the Lab’s Global Security mission area and the Science of Signatures science pillar through detection of environmental toxins. Technical contact: Jun Gao

Capability Enhancement

Ethernet Backbone enables high performance computing infrastructure

The secure network at the Laboratory has three compute clusters, ranging in size from 400 nodes to 9,000 nodes, a visualization cluster, three large shared parallel file systems, one parallel file system for visualization, a separate Network File System (NFS) for home and project space, plus a variety of service systems providing support functions. Each cluster has IO (Input/Output) nodes that act as gateways from the clusters to the external file systems and services. The HPC Ethernet Backbone network ties all these systems together. The Network Team in High Performance Computing Systems (HPC-3) has completed a multi-year project to enhance the high performance computing infrastructure at the Laboratory.

The previously deployed Ethernet backbone in the secure network was called the Parallel Scalable Backbone (PaScalBB). The backbone design in the secure network started with 6 lanes, deployed on 6 Force10 (F10) chassis switches. The lane switches handled all IO nodes and parallel file systems. Other systems, such as front-end nodes and file transfer nodes, were connected to a top-level F10 chassis switch. In 2008, an upgrade of the lane section to 12 lanes resulted in a backbone that consisted of 13 chassis switches. Another F10 chassis switch was needed to connect the HPSS Archive to this backbone.

Figure 2. PaScalBB.

These switches deployed 10G technology and were oversubscribed when fully populated. The F10 switches cost approximately $450,000 per year to maintain. After approximately seven years of service, the backbone needed to be replaced or upgraded. Due to the deployment configuration, the loss of one lane switch would cause a complete outage for all parallel scratch file systems – and therefore all clusters. With 40G and 100G technologies becoming widespread, it became clear that the backbone should be replaced with the Next Generation Backbone – Ethernet (NGBB-Ethernet).

The transition to the new backbone began in October of 2014 and continued through the end of February 2015. During the transition, the two backbones were connected to allow systems that had been moved to continue to communicate with systems that had not. While this required complex planning, it allowed the Laboratory to move the entire production network one piece at a time during normally scheduled DSTs (Dedicated System Time).

Figure 3. Transition Backbone. SSWRT - Secure switch router (hostname for old switches), OSPF - Open Shortest Path First (link state network protocol), BBSW - Backbone Switch (name for new switches), VM – Viewmaster (visualization cluster).

The NGBB-Ethernet consists of two high-density, full line rate, non-blocking Arista switches. They are built to provide 40G and/or 100G capabilities directly, or can be deployed in conjunction with a patch panel system that breaks those ports into several 10G ports. This will facilitate the move to higher bandwidth systems while still being able to support legacy systems. The Laboratory reduced the complexity of the network by logically grouping systems by functionality. These groupings have similar usage models and therefore similar access patterns. This allows LANL to control traffic in and out of these subnets easily. The changes resulted in a stronger security posture. As of March 2015, the NGBB-Ethernet implementation is fully functioning. The old system has been dismantled, and some of those switches have been loaned to Lab-partnered projects and organizations.
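The functional grouping can be sketched in a few lines. The subnet addresses and group names below are hypothetical illustrations, not the Laboratory's actual layout:

```python
import ipaddress

# Hypothetical functional groupings (addresses are illustrative only).
SUBNETS = {
    "io-nodes":     ipaddress.ip_network("10.10.0.0/22"),
    "file-systems": ipaddress.ip_network("10.10.4.0/23"),
    "front-ends":   ipaddress.ip_network("10.10.8.0/24"),
}

def functional_group(addr):
    """Return the functional group whose subnet contains addr, if any."""
    ip = ipaddress.ip_address(addr)
    for name, net in SUBNETS.items():
        if ip in net:
            return name
    return None

# One router/firewall rule per subnet then governs every member of a group:
print(functional_group("10.10.1.17"))   # io-nodes
print(functional_group("10.10.8.5"))    # front-ends
```

Because systems with similar access patterns share a subnet, traffic policy can be expressed per group rather than per host, which is what simplifies control of traffic in and out of these subnets.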


Figure 4. NGBB Ethernet. OSPF is Open Shortest Path First (link state network protocol), BBSW is Backbone Switch (name for new switches).

The Network Team (Susan Coulter, Jesse Martinez, Alex Montano, Katy Protin and Chuck Wilder) in High Performance Computing Systems (HPC-3) conducted the work. DOE funded the improvements, which support the Lab’s high performance computing infrastructure and the Lab’s mission areas and the Information, Science, and Technology science pillar. Technical contact: Susan Coulter



Direct metal laser sintering machine supports additive manufacturing

The Laboratory installed a direct metal laser sintering machine for additive manufacturing. This type of manufacturing, which uses computer-aided designs (CAD) to build 3-dimensional components, is significantly more efficient than conventional fabrication methods. Additive manufacturing enables innovative design options that were once impossible to fabricate. The EOS M280 direct metal laser sintering machine enables additive manufacturing of stainless steel parts at Los Alamos.

Figure 5. Schematic of the additive manufacturing process. (a) and (b) Computer aided design (CAD), (c) direct metal laser sintering of 20 micron layers followed by cleaning of excess powder, and (d) final part is unpacked from a bed of metal powder.

In a joint venture by Metallurgy (MST-6) and Mechanical and Thermal Engineering (AET-1), the metal powder bed-based additive manufacturing machine fabricates parts by sequentially sintering 20-40 μm thick cross sections. Through the repetition of steps that include laser sintering a part cross section, dropping the build surface, recoating the powder bed, and sintering the subsequent cross section, a component is built up systematically. The total build volume of the system is 10”×10”×12”, so metal components of significant size can be fabricated from computer-aided design files. Initial production runs with this equipment fabricated parts for a portable gamma (γ)-ray imaging system (the Cobalt Pig project) for Advanced Nuclear Technology (NEN-2).
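As a sense of scale for this layer-by-layer process, a rough layer count for a full-height build follows directly from the quoted numbers (illustrative arithmetic, not machine specifications):

```python
# Rough layer count for a full-height build, from the figures quoted above.
INCH_M = 0.0254
build_height_m = 12 * INCH_M                      # 12" tall build volume

def layer_count(layer_thickness_um):
    """Sintered cross sections needed to reach the full build height."""
    return round(build_height_m / (layer_thickness_um * 1e-6))

print(layer_count(20))   # 15240 layers at 20 um
print(layer_count(40))   # 7620 layers at 40 um
```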

Engineers initially designed the 15-5 stainless steel parts for this project for conventional manufacturing, which involved 10 components and 6 weldments. The capabilities of the additive manufacture machine reduced the complexity to only four components and two weldments. Additive manufacturing fabricated all parts in a single build cycle. The team completed the first set of these components in February 2015.

Figure 6. (Left): CAD rendering for Cobalt Pig steel components. (Right): First completed set of components for the project.

The process of powder bed-based additive manufacturing requires features at less than 30º from the plane of the baseplate to be supported by partially sintered support material. This support material is similar to a honeycomb structure and has the benefit of being both rigid and brittle, making it easy to remove after fabrication. Supports fix the component to the baseplate and act as a heat sink to avoid thermal gradient-based distortion in as-fabricated parts. Inadequate support of parts could cause significant degradation of feature tolerance to the point where the assembly might not fit together properly. The left photo shows an example of the interaction of support material and part in the build process. The lighter gray material is the partially sintered support structures, which fix the fully dense part to the baseplate. After fabrication the support structure is carefully broken away from the components. The surfaces are cleaned and await final assembly (right photo).
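The roughly 30-degree rule can be captured as a simple predicate. This is a hypothetical helper for illustration, not the build-preparation software actually used:

```python
# Support check based on the ~30-degree rule described above (a sketch).
CRITICAL_ANGLE_DEG = 30.0

def needs_support(surface_angle_deg):
    """True if a downward-facing surface, inclined at surface_angle_deg
    from the baseplate plane, must rest on partially sintered supports."""
    return surface_angle_deg < CRITICAL_ANGLE_DEG

print(needs_support(15.0))   # True: shallow overhang, support required
print(needs_support(45.0))   # False: steep enough to self-support
```

Build-preparation tools apply a check like this to every downward-facing facet of the CAD model before generating the honeycomb-like support lattice.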

The team includes: Don Bucholz, Michael Brand, John Carpenter, Dave Alexander, Rick Hudson, Mark Paffett and Cameron Knapp (MST-6); Larry Bronisz, John Bernardin, Don Quintana, and James Thompson (AET-1). The collaborative work will focus on weapons program support, high-temperature rocket engines, and enhanced efficiency compact heat exchangers.


Photo. (Left): As-deposited components with supporting structure attached. (Right): Final assembly of the Cobalt Pig components.

The Principal Associate Directorate of Science, Technology, and Engineering (PADSTE) provided Strategic Capital Investment Funds to purchase the machine. NNSA funded the fabrication of the Cobalt Pig. This work supports the Laboratory’s Nuclear Deterrence and Energy Security mission areas and the Materials for the Future Science pillar. The effort is an example of the process-aware manufacturing that could be employed at MaRIE (Matter-Radiation Interactions in Extremes), the Laboratory’s proposed experimental facility. MaRIE would improve predictive capability for materials and accelerate the qualification, certification, and assessment of those materials for the Lab’s national security mission. Technical contact: Don Bucholz

Computer, Computational and Statistical Sciences

Numerical methods reduce spurious mixing in a LANL ocean model

The oceans play an important role in the earth's climate. They transport heat from equator to pole, provide moisture for rain, and absorb carbon dioxide from the atmosphere. Ocean models are a critical component of climate change research used to inform policy makers of the risks of greenhouse gas emissions. These models must be evaluated in real-world simulations to fulfill the goal of decadal and century-long climate predictions. Research published in Ocean Modelling compares a new model developed at LANL with other models for simulations of mixing in the ocean.

The Laboratory’s Model for Prediction Across Scales-Ocean (MPAS-Ocean) is a new ocean model created for climate change studies at high spatial resolution. Much effort has been spent on model validation and improvement because of the important role these models play in climate change projections. A major goal in ocean modeling is to reduce spurious vertical mixing. Observations show that vertical mixing in the deep ocean is very low, but ocean models typically overestimate this mixing because they represent the water with discrete grid cells. This leads to errors in the density and locations of deep ocean currents, making it difficult to study the effects of a changing climate on these currents.
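The origin of this spurious mixing can be seen in a minimal one-dimensional analogue (a sketch, not MPAS-Ocean code): a first-order upwind scheme advecting a sharp tracer front has inherent numerical diffusion that smears the front across many grid cells, just as discrete cells artificially mix water masses in an ocean model.

```python
import numpy as np

# 1-D upwind advection of a sharp tracer front (illustrative sketch).
n, c = 200, 0.5                  # grid cells; Courant number u*dt/dx
tracer = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # sharp front mid-domain

for _ in range(100):             # advect to the right for 100 steps
    tracer[1:] -= c * (tracer[1:] - tracer[:-1])

# Numerical diffusion has produced intermediate "mixed" values
mixed_cells = int(np.sum((tracer > 0.05) & (tracer < 0.95)))
print(mixed_cells, "cells hold spuriously mixed tracer values")
```

Higher-order advection schemes and better vertical coordinates reduce, but never entirely eliminate, this numerical smearing; quantifying it is exactly what the idealized test cases are for.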


Figure 7. Ocean currents and eddies in a high-resolution global ocean simulation with the Antarctic in the center. MPAS-Ocean is used to investigate the effects of climate change. Colors show speed, where white is fast and blue is slow. The image displays kinetic energy, in m²/s².

Los Alamos researchers tested MPAS-Ocean in five configurations ranging from idealized to real-world domains. Idealized tests are useful because the solutions are known, allowing the model error to be measured exactly and algorithms to be improved. For example, the overflow domain is simply a cold, dense plug of water that slides down a deep-sea ridge (Figure 8).

A more complex three-dimensional domain (idealized Antarctic Circumpolar Current) includes the Coriolis force and spawns eddies associated with geophysical turbulence (Figure 9). The team compared results with exact solutions when available and with five other well-established ocean models.


Figure 8. MPAS-Ocean overflow test. Colors show temperature, where blue is cold and red is warm. Mixing, shown by yellow and green, occurs in the model due to discrete grid cells. Improved numerical methods reduce the mixing in these tests, resulting in more realistic overflow behavior in global simulations.

The MPAS-Ocean model has an advanced vertical coordinate with many options to improve model performance and flexibility. The user may choose from vertical coordinates that are fixed, expand with the motion of the sea surface, follow bottom topography, move with internal gravity waves, or any combination of these. The test cases provided a way to verify the operation of these vertical coordinate types and to measure the spurious mixing in each.
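As a concrete illustration of one of the simplest choices above, a coordinate that expands with the motion of the sea surface can be sketched by stretching every resting layer thickness by a common factor so the column absorbs the free-surface displacement. This is a hypothetical helper for illustration, not MPAS-Ocean source code:

```python
# Sketch of a vertical coordinate that expands with sea-surface motion:
# the free-surface displacement eta is spread over all layers in
# proportion to their resting thicknesses.

def stretched_thicknesses(rest_thicknesses, eta):
    """Stretch each layer by the uniform factor (H + eta)/H."""
    depth = sum(rest_thicknesses)            # resting column depth H
    factor = (depth + eta) / depth
    return [h * factor for h in rest_thicknesses]

layers = stretched_thicknesses([10.0, 20.0, 70.0], eta=1.0)  # 1 m rise
print(layers)        # every layer thickened by 1%
print(sum(layers))   # column depth now H + eta = 101.0 m
```

Other coordinate choices (following bottom topography, moving with internal gravity waves) redistribute layer interfaces by different rules, but each can be verified against the same idealized test cases.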

MPAS-Ocean performed as well as or better than the other models, validating the functionality of the new model. It produced up to a factor of ten less spurious mixing than the other models. A combination of the vertical coordinate, the hexagon-type horizontal grid, and a tracer advection scheme designed for these grids produced this result. Due to the improved algorithms, MPAS-Ocean better represents physical mixing processes in climate simulations, leading to more accurate climate studies. Climate researchers may use the MPAS-Ocean model for long-term climate simulations with a higher degree of confidence.

Figure 9. Eddies in an idealized three-dimensional configuration that includes the Coriolis force due to the Earth’s rotation. Surface temperature is shown, with high viscosity (left) to low viscosity (right). The turbulence in the low viscosity case is more like the physical ocean, but leads to unrealistic mixing that must be minimized.

Reference: “Evaluation of the Arbitrary Lagrangian–Eulerian Vertical Coordinate Method in the MPAS-Ocean Model,” Ocean Modelling 86, 93 (2015); doi: 10.1016/j.ocemod.2014.12.004. The Climate, Ocean, and Sea Ice Modeling (COSIM) team includes: Mark R. Petersen and Matthew W. Hecht (Computational Physics and Methods, CCS-2); Douglas W. Jacobsen, Todd D. Ringler, and Mathew E. Maltrud (Fluid Dynamics and Solid Mechanics, T-3).

Climate modeling programs in the DOE Office of Science, Biological and Environmental Research funded the research. The Los Alamos National Laboratory Institutional Computing facility provided computational resources. The work supports the Laboratory’s Global Security mission area and the Information, Science, and Technology science pillar through enhanced ocean model components of global climate simulations. Technical contact: Mark Petersen

Materials Physics and Applications

Magnetic measurements detect hydrogen contaminants in plutonium

Researchers from Condensed Matter and Magnet Science (MPA-CMMS) and Nuclear Materials Science (MST-16) used magnetization, x-ray, and neutron diffraction measurements enabled by the Lab’s materials science capabilities to demonstrate a technique for detecting low concentrations of plutonium hydride in samples. The technique, published in the Journal of Applied Physics, is relevant to plutonium applications, workers who handle plutonium, and long-term plutonium storage. Contaminants, such as oxygen, hydrogen, and carbon, can degrade the mechanical properties of plutonium, causing consequences that can negatively affect health and safety.

LANL researchers examined the effects of exposing plutonium (Pu) metal to low levels of hydrogen during the radioactive decay process. The team showed that ferromagnetic remanence – the residual magnetization left in a ferromagnetic material (a permanent magnet) after exposure to a magnetic field – could detect small quantities of hydrogen against the background of pure plutonium. Pure plutonium is non-magnetic. However, Los Alamos researchers in the early 1960s discovered that Pu acquires a magnetic moment when it reacts with hydrogen to create plutonium hydride. Therefore, magnetic measurements can be used to detect the presence of hydride formation in plutonium metal.


Figure 11. A plutonium sample is sealed in a titanium container for magnetization measurements.

The researchers used the Neutron Powder Diffractometer and neutron diffraction at the Los Alamos Neutron Science Center (LANSCE) to characterize the crystal structures of samples of polycrystalline delta (δ)-plutonium stabilized with gallium (Ga). The samples had the expected face-centered cubic (fcc) structure and lattice parameters. The scientists exposed the samples to hydrogen under partial vacuum at 450 °C to ensure reproducible hydrogen solubility for the magnetization measurements. They loaded one sample to a H/Pu atom ratio of 0.01 ± 0.0003 and encapsulated the second plutonium sample without H loading.

After sealing the samples in titanium containers to prevent radioactive contamination of the surroundings or exposure of the samples to air, the team measured the magnetization of the capsules as a function of magnetic field and temperature at the National High Magnetic Field Laboratory Pulsed Field Facility at Los Alamos. They used a commercial vibrating sample magnetometer in a physical properties measurement system. The results confirmed that the 2.0 at. % Ga stabilized, H-free δ-Pu samples are non-magnetic between 4 and 300 K.

Figure 12. Magnetic moment as a function of magnetic field of δ-Pu with 1 at. % H exposure in a titanium (Ti) sample holder (B), minus the magnetization of the sample without H exposure in a Ti sample holder (A). Measurements were conducted after zero-field cooling (in H < 10⁻³ T) from room temperature and sweeping the magnetic field around a 6 T hysteresis loop starting from H = 0. The inset shows a zoomed view of the magnetic hysteresis, with a linear background subtracted.

The results demonstrated that commercial magnetization measurement techniques are sensitive to the conversion of tiny amounts (0.0015 mole fraction) of hydrogen in Ga stabilized δ-Pu to ferromagnetic PuHx. This easily reproducible technique is a useful quantitative diagnostic to determine the content of small amounts of PuHx in samples.
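The subtraction described in the Figure 12 caption can be sketched with synthetic numbers (an illustration of the analysis idea, not the published data): subtract the H-free capsule's purely linear response from the H-loaded sample's, fit the remaining linear background at high field where the ferromagnetic signal is saturated, and what is left is the PuHx hysteresis signal.

```python
import numpy as np

# Synthetic stand-ins for the two measurements (moments in arbitrary units).
B = np.linspace(-6, 6, 241)                   # applied field sweep (T)
m_reference = 2.0e-6 * B                      # H-free capsule: linear only
m_sample = 2.3e-6 * B + 1.5e-7 * np.tanh(B)  # linear + saturating FM signal

diff = m_sample - m_reference                 # "B minus A" in the caption

# Fit the residual linear background where the FM part is saturated (B > 4 T)
high = B > 4.0
slope, intercept = np.polyfit(B[high], diff[high], 1)

ferromagnetic = diff - slope * B              # isolated FM component
saturation_moment = intercept                 # recovers ~1.5e-7 here
```

The key point is that the linear (paramagnetic and sample-holder) contributions cancel or fit away, leaving only the saturating ferromagnetic moment proportional to the PuHx content.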

Reference: “Detecting Low Concentrations of Plutonium Hydride with Magnetization Measurements,” Journal of Applied Physics 117, 053905 (2015); doi: 10.1063/1.4907216. Authors include: Jae Wook Kim (MPA-CMMS, now at Rutgers University), Eundeok Mun (MPA-CMMS, now at Simon Fraser University, Canada), Joe Baiardo, Vivien Zapf, and Chuck Mielke (MPA-CMMS); Alice Smith, Scott Richmond, Jeremy Mitchell, and Dan Schwartz (MST-16).


Laboratory Directed Research and Development (LDRD) funded the LANL work. Magnetization measurements were performed at the National High Magnetic Field Laboratory, funded by the National Science Foundation, the State of Florida, and the DOE. The neutron diffraction work was conducted on the Neutron Powder Diffractometer at the Lujan Neutron Scattering Center, funded by DOE-Basic Energy Sciences. The work supports the Lab’s Nuclear Deterrence mission area and the Materials for the Future and Science of Signatures science pillars through investigation of materials in the nation’s nuclear weapons stockpile. Technical contacts: Dan Schwartz and Chuck Mielke


Magnetic fields hold the heat for inertial confinement fusion

Inertial confinement fusion (ICF) experiments on the National Ignition Facility (NIF) are designed to implode a spherical capsule containing deuterium-tritium (DT) fuel, compressing and heating the fuel to initiate thermonuclear burn in a hotspot at the center of the capsule. NIF uses laser beams to drive these implosions for stockpile stewardship and energy production research. One of the key issues for target performance at NIF is adequate coupling of laser energy to the gas-filled hohlraum target used to provide the x-ray drive that implodes fusion capsules. In an article published in Physics of Plasmas, authors from Physics Division and their collaborators describe the use of an external magnetic field in a laser-heated hohlraum target to magnetically insulate the plasma, attaining higher plasma temperatures with the external field present than without.




Figure 13. Schematic of the laser-heated hohlraum target.

A hohlraum is a high-Z cylinder (usually made of gold), typically filled with a low-Z gas (such as helium or hydrocarbons). Laser energy absorbed in the interior of the hohlraum at the high-Z wall is converted to x-rays and efficiently reradiated within the interior of the hohlraum. The low-Z gas helps to keep the gold wall from expanding too much during laser heating. Research at NIF indicates that insufficient power is being deposited at the hohlraum wall by the laser beams with the longest path length through the hohlraum, due to laser-plasma instabilities and laser absorption in the low-Z gas. This shortfall is attributed to the low-Z gas being colder than initially expected.

The researchers examined the concept that an external magnetic field could insulate a plasma against heat loss. Magnetic fusion research routinely uses an external magnetic field for this purpose. In this ICF research, the team proposed that a magnetic field of about 10 T or greater applied externally to a laser-fusion hohlraum should increase the low-Z plasma temperature substantially, thus improving laser coupling to the high-Z hohlraum wall. To test this idea, the team performed experiments at the Omega Laser Facility at the University of Rochester’s Laboratory for Laser Energetics, using 18 kJ of 351-nm laser light to heat a low-Z gas-filled gold hohlraum target (without a fusion capsule present). An external magnetic field of up to 7.5 T was applied to the exterior of the hohlraum using a pulsed coil. Experiments were done with and without a magnetic field present, and with varying field strength. The plasma temperature and other plasma conditions were accurately measured in each experiment using Thomson scattering from a low-power 263-nm probe beam focused inside the hohlraum target.
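The insulation concept can be put in rough numbers with the electron Hall parameter ω·τ (cyclotron frequency times collision time), which controls the suppression of cross-field electron heat conduction, roughly κ⊥/κ∥ ≈ 1/(1 + (ωτ)²). The plasma conditions below are assumed round numbers for a hot hohlraum fill, not values measured in the experiment, and the Z = 1 collision time is only indicative:

```python
# Back-of-the-envelope magnetic-insulation estimate (assumed conditions).
E_CHARGE = 1.602e-19      # electron charge, C
M_ELECTRON = 9.109e-31    # electron mass, kg

def hall_parameter(B_tesla, Te_eV, ne_cm3, coulomb_log=7.0):
    """Electron Hall parameter w_ce * tau_e, with tau_e from the
    NRL-formulary electron collision time (Z = 1 approximation)."""
    w_ce = E_CHARGE * B_tesla / M_ELECTRON                 # rad/s
    tau_e = 3.44e5 * Te_eV**1.5 / (ne_cm3 * coulomb_log)   # s
    return w_ce * tau_e

# Assumed fill conditions: Te ~ 2 keV, ne ~ 1e21 cm^-3, B = 7.5 T
wt = hall_parameter(7.5, 2000.0, 1e21)
suppression = 1.0 / (1.0 + wt**2)
print(f"Hall parameter ~ {wt:.1f}; cross-field conduction reduced ~ {1/suppression:.0f}x")
```

With ωτ well above unity under these assumptions, electron heat conduction across the field lines is strongly suppressed, which is the mechanism behind the measured temperature rise.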



Figure 14. Schematic of the experimental setup showing the magnetic coil, hohlraum target, Thomson probe, and view hole.

The researchers found that a 7.5 T field increased the low-Z plasma temperature by 50%. The experimental results agreed well with 2-D magneto-radiation-hydrodynamics calculations using the HYDRA code. Simple theoretical arguments indicate that an even larger temperature increase could be achieved for NIF hohlraums with fields of 10 T or greater and a capsule present. This research serves to guide and motivate future use of external magnetic fields for NIF hohlraums.

Figure 15. Time-resolved measurements of the plasma temperature without a magnetic field (blue triangles) and with a 7.5 T field (red circles). Calculations of the plasma temperature from HYDRA are shown with and without a magnetic field (red solid and blue solid lines, respectively).

David Montgomery (Plasma Physics, P-24) led the team, which included John Kline (P-24), Brian Albright and Lin Yin (Plasma Theory and Applications, XCP-6), Daniel Barnak, Po-Yu Chang, Jonathan Davies, Gennady Fiksel, Dustin Froula and Riccardo Betti (University of Rochester); Michael MacDonald (University of Michigan and SLAC National Accelerator Laboratory); and Adam Sefkow (Sandia National Laboratories). The Los Alamos Polymers and Coatings group (MST-7) fabricated the targets for the experiments.

Reference: “Use of External Magnetic Fields in Hohlraum Plasmas to Improve Laser-Coupling,” Physics of Plasmas 22, 010703 (2015); doi: 10.1063/1.4906055. A month after the article was published in January, the journal listed it as one of its most-read articles in February, with over 500 downloads. The NNSA ICF Program (Steve Batha, LANL Campaign 10 program manager) funded the research, which supports the Lab’s Nuclear Deterrence and Energy Security mission areas and the Nuclear and Particle Futures science pillar. Technical contact: David Montgomery

Return to top

Science on the Roadmap to MaRIE

Developing the optics required for high-energy x-ray light sources

In a promising step toward the development of the wide range of optics needed by MaRIE and other high-energy x-ray light sources, Los Alamos researchers, in collaboration with Brookhaven National Laboratory and the Advanced Photon Source (APS) at Argonne National Laboratory, successfully explored the possibility that new diffractive optics could focus high-energy photons (E > 50 keV) efficiently.

MaRIE, the Laboratory’s proposed facility for time-dependent materials science at the mesoscale, could ultimately provide a bright source of high-energy x-ray photons. Without focusing elements, the size of the beam on the sample would not allow the full potential of the source to be exploited. The researchers fabricated two types of refractive optics: the familiar solid refractive lens in an unconventional material (diamond), and the less familiar kinoform shape in silicon. Their work, in preparation for MaRIE, successfully demonstrated the use of silicon kinoform lenses and diamond solid refractive lenses to focus beams of high-energy x-ray photons with energies between 50 and 100 keV, producing focal spots as small as 230 nm.



Photo. Richard Sheffield checks distances for lens focal length and efficiency measurements at the APS 1-ID beamline.

Many materials relevant to national security are high-density and therefore relatively opaque to low-energy (E < 30 keV) x-ray photons. To study and understand these materials, researchers want to probe them deeply with sub-millimeter x-ray beams, enabling micron-scale characterization; combining such characterization with modeling would give better insight into materials behavior in the extreme environments of interest. Higher-energy photons interact less strongly with matter: they penetrate more deeply into material and deposit less energy, resulting in less probe heating. This weaker interaction with matter, however, makes conventional focusing optics such as mirrors or zone plates ineffective, inefficient, or expensive.
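The same weak interaction also makes refractive lenses weak: the refractive index decrement for x-rays, given by the standard relation δ = r_e λ² n_e / 2π, falls off as 1/E². The sketch below evaluates δ for bulk silicon at the 51.2 keV design energy; the formula and silicon material constants are standard, but the numbers are illustrative rather than taken from the paper.

```python
import math

r_e = 2.818e-15          # classical electron radius, m
N_A = 6.022e23           # Avogadro's number, mol^-1

# Bulk silicon properties
rho, A, Z = 2.33, 28.09, 14                  # g/cm^3, g/mol, electrons per atom
n_e = Z * (rho / A) * N_A * 1e6              # electron density, m^-3

def delta(E_keV):
    """Refractive index decrement of silicon at photon energy E (keV)."""
    lam = 1.23984e-9 / E_keV                 # wavelength in m (hc = 1.23984 keV*nm)
    return r_e * lam**2 * n_e / (2.0 * math.pi)

d_51 = delta(51.2)
print(f"delta(Si, 51.2 keV) ~ {d_51:.2e}")                         # of order 1e-7
print(f"delta(10 keV) / delta(51.2 keV) ~ {delta(10.0)/d_51:.1f}") # ~ (51.2/10)^2
```

With δ of order 10⁻⁷, a single refracting surface of realistic curvature gives a very long focal length. The kinoform shape removes material that contributes only whole multiples of 2π in phase, so a long, thin structure acts like many stacked refracting surfaces and can reach meter-scale and shorter focal lengths.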

The top panel in Figure 16 is a visible light photo of a silicon wafer of size approximately 1 by 3 cm on which 10 kinoform lenses are etched. The Center for Functional Nanomaterials at Brookhaven National Laboratory and the Cornell Center for Functional Nanomaterials designed and fabricated the lenses with electron beam lithography and reactive ion etching. Researchers tested the lenses at the Advanced Photon Source at Argonne National Laboratory. The x-ray transmission image of the lens in the middle panel reveals some of the microstructure of the lens. The bottom image is a knife-edge scan showing a spot size of 230 ± 20 nm at 51.2 keV.


Figure 16. a) Optical image of a silicon wafer, approximately 1 × 3 cm, showing 10 kinoform lenses. Designed focal lengths were 0.25 m, 1 m, 12 m, and 17 m; design energies were 51.2 and 62.5 keV. b) X-ray “shadow image” of the f = 1 m, E = 51.2 keV lens (absorption contrast). c) Results at APS beamline 1-ID at 51.2 keV with f = 0.25 m. Spot size: 230 nm; 1-D gain: 87 ± 4; efficiency: 17%. The same lens was used to obtain a 1.5-micron spot size at 102.4 keV.

The gain was 87 ± 4; i.e., the resulting flux density on the sample is equivalent to that obtained by increasing the light-source ring current by a factor of 87. A 2-m lens had a gain of 181 and a spot size of 1.0 ± 0.1 microns. The first attempt to use a solid refractive diamond lens to focus hard x-rays demonstrated focusing functionality. Because diamond fabrication technology is less advanced than silicon fabrication technology, more research is needed to improve performance with this material.
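In one dimension, the gain is roughly the lens efficiency times the ratio of lens aperture to focal spot size. Working backwards from the reported numbers gives the implied aperture; this is back-of-envelope arithmetic on the published figures, not a calculation from the paper.

```python
# Back-of-envelope consistency check of the reported 1-D gain:
#   gain_1D ~ efficiency * (aperture / spot_size)
gain = 87.0          # reported 1-D gain
efficiency = 0.17    # reported lens efficiency
spot = 230e-9        # reported focal spot size, m

# Lens aperture implied by the reported gain, efficiency, and spot size
aperture = gain * spot / efficiency
print(f"Implied aperture ~ {aperture * 1e6:.0f} microns")  # of order 100 microns
```

An aperture of order 100 microns is plausible for an etched kinoform lens, so the reported gain, efficiency, and spot size are mutually consistent.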

Reference: “Kinoform Lens Focusing of High-Energy X-Rays (50-100 keV),” Proceedings of SPIE 9207, 920704-1-9 (2014); doi: 10.1117/12.2062635. Participants include: R. L. Sheffield (Experimental Physical Sciences, ADEPS), K. Evans-Lutterodt and A. Stein (Brookhaven National Laboratory), S. D. Shastri and P. Kenesei (Argonne National Laboratory), D. Brown (Materials Science in Radiation and Dynamic Extremes, MST-8), and M. Metzler (Cornell University).

The MaRIE program (Cris Barnes, LANL capture manager) funded the research, which supports the Laboratory’s Nuclear Deterrence mission area and the Materials for the Future science pillar. Technical contact: Richard Sheffield

Return to top
