NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council (US) Chemical Sciences Roundtable. Challenges in Characterizing Small Particles: Exploring Particles from the Nano- to Microscale: A Workshop Summary. Washington (DC): National Academies Press (US); 2012.


2 What Are Small Particles and Why Are They Important?

Small particles—ranging in size from about one nanometer to tens of microns—are ubiquitous in the natural and engineered worlds. In the atmosphere, small particles impact both warming and cooling of the climate. In Earth’s subsurface, small particles impact soil and water quality. In living systems, small particles impact organism health and viability. In catalysis and reaction engineering, small particles enhance reaction specificity and rates. In materials design and synthesis, small particles provide new and enhanced properties. However, in all of these scientific and engineering domains, a lack of understanding about the properties and chemical composition of small particles limits our ability to understand, predict, and control their applications and impacts. Speakers in this session discussed the crucial types of information that need to be determined about small particles in different media.


The workshop began with an introduction by co-chair Barbara Finlayson-Pitts, Professor of Chemical Sciences at the University of California, Irvine. She illustrated the importance of small particles by looking at the example of a nanoparticle in the atmosphere, as shown in Figure 2-1.

Diagram of an airborne nanoparticle in different stages of growth, from precursor molecules to final particle, and its fate in the environment and biological systems. A cutaway drawing shows how the interior and surface of the particle may differ from each other depending on chemical composition.


An airborne nanoparticle can have many fates. SOURCE: Armandroff, 2011.

Research has determined that gaseous precursors form low-volatility products that nucleate to form new particles, but even after decades of research, little is known about how this occurs. Furthermore, little is known about how these new particles then grow into nanoparticles. Because of their size, nanoparticles have a high surface-to-volume ratio, which means that their chemical and photochemical properties will not necessarily be the same as those of the bulk material; indeed, this size dependence is one of the unique and desirable characteristics of nanoparticles. Little is known about that chemistry as well.
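The size dependence described here can be made concrete with a quick calculation: for a sphere, the surface-to-volume ratio is 3/r, so it climbs steeply as particles shrink. A minimal sketch in Python (the particle sizes are illustrative choices, not values from the workshop):

```python
import math

def surface_to_volume(radius_nm):
    """Surface-to-volume ratio of a sphere in nm^-1 (equal to 3/r)."""
    area = 4 * math.pi * radius_nm**2
    volume = (4 / 3) * math.pi * radius_nm**3
    return area / volume

# Compare a 5 nm nanoparticle with a 5 micron (5000 nm) bulk-like particle:
for r in (5, 5000):
    print(f"r = {r} nm: S/V = {surface_to_volume(r):.4f} nm^-1")
```

The 5 nm particle has a thousand times more surface area per unit volume than the 5 micron particle, which is why surface chemistry, rather than bulk chemistry, governs nanoparticle behavior.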

Finlayson-Pitts explained that a major research objective is to determine the three-dimensional structure of particles in the atmosphere. Based on a large body of chemical data accumulated over many years, it is known that atmospheric nanoparticles contain a large number of polar groups, including carboxylic acids, amines, and alcohols. However, little is known about how the particles assemble. Other unanswered questions include the following: Do particles self-assemble in air? Do the polar groups end up inside a hydrophobic shell, or do they end up on the outside?

Many factors contribute to understanding climate, according to Finlayson-Pitts. For example, understanding the three-dimensional arrangement of the particles is extremely important. If polar groups are on the outside of the particles, then they will be expected to take up water and act as cloud condensation nuclei more efficiently than if they are buried inside the particles. Some of the studies performed in Finlayson-Pitts’s laboratory revealed that it is not unusual to find polar groups on the inside at the nanoscale. From a bulk chemical composition point of view, significant water uptake might be expected; however, that does not happen because the hydrophobic shell forms at the nanoscale.


Stephen Schwartz of Brookhaven National Laboratory discussed the influences of aerosols on climate and climate change, and the challenges of representing those influences in models. Understanding how atmospheric aerosols affect Earth’s energy balance and the climate’s sensitivity to perturbation is critical to understanding how the greenhouse effect might constrain the future development of society’s energy economy.

Aerosol Influences on Climate and Climate Change

Schwartz explained that aerosols are particles suspended in air and generated from a variety of sources including organic vapors from vegetation, dust, industrial sulfur dioxide emissions, biomass burning, ocean sea salt, and others, as shown in Figure 2-2. All of these materials can undergo chemical reactions in the atmosphere to produce particles. Aerosols are characterized by how they scatter light, producing effects such as urban haze and photochemical smog. For example, the haze that hangs over beaches results from aerosolized sea salt reflecting light. Atmospheric aerosols affect Earth’s climate in two ways:

Illustration depicting different sources of atmospheric aerosols in the environment, including organics from trees, dust from land, industrial emissions, biomass burning, and sea salt, forming an aerosol haze and then clouds that reflect and absorb solar radiation


The role and sources of atmospheric aerosols in the environment. SOURCE: Schwartz, 2010.

  • They reflect sunlight upward, decreasing the amount of sunlight that reaches Earth’s surface, which subsequently cools the planet. This is known as the aerosol direct effect.
  • They serve as seed particles for the formation of cloud droplets.

Without aerosols, there would be no clouds, and Earth’s climate would be very different.

As the atmospheric aerosol concentration, measured in particles per cubic meter (particle/m3), increases, and if all other parameters are equal, more cloud droplets form. Droplet formation increases the scattering in clouds and therefore the likelihood that sunlight will be reflected from the top of the cloud, which has a cooling influence on the climate. A warming influence can be observed when aerosols absorb rather than reflect sunlight. Schwartz stated that the widely varying impacts of aerosols on climate change “must be understood and characterized, and ultimately represented in climate models.”

Earth’s Energy Balance and Perturbations

Figure 2-3 shows the role of aerosols in the global energy balance. Of the 343 Watts per square meter (W/m2) of solar power arriving from the Sun, averaged over the globe, approximately 70 percent is absorbed (237 W/m2). Balancing that absorbed solar energy against the thermal infrared emitted by the planet (also 237 W/m2) is essential for maintaining a constant temperature.
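The quoted fluxes can be checked against the Stefan-Boltzmann law: in equilibrium the planet must emit the 237 W/m2 it absorbs, which corresponds to an effective radiating temperature near 254 K. A short sketch (standard textbook physics, not a calculation presented at the workshop):

```python
# Check the global energy balance quoted above using the Stefan-Boltzmann law.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

incoming = 343.0                        # global-mean solar input, W/m^2 (from the text)
absorbed = 237.0                        # absorbed solar flux, W/m^2 (from the text)
absorbed_fraction = absorbed / incoming # ~0.69, i.e. planetary albedo ~0.31

# Effective radiating temperature: T = (F / sigma)^(1/4)
t_eff = (absorbed / SIGMA) ** 0.25

print(f"absorbed fraction: {absorbed_fraction:.0%}")
print(f"effective radiating temperature: {t_eff:.0f} K")
```

The resulting ~254 K is well below the ~288 K global-mean surface temperature; that gap is maintained by the greenhouse effect discussed next.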

Diagram of the Sun and the Earth showing values of annual input and output (flux) of solar radiation interacting with the surrounding atmosphere and surface of the Earth


Schematic of major global, annual average radiant energy fluxes of the Earth-atmosphere system, given in Watts per square meter (W/m2). Blue numbers represent the short-wavelength radiant energy of the solar spectrum entering the planet, and red numbers …

The difference between the 390 W/m2 emitted from Earth’s surface and the 237 W/m2 that escapes to space reflects the greenhouse effect, which results from the infrared-absorbing gases in the atmosphere, as well as clouds, that radiate energy back toward Earth. Increases in greenhouse gases such as carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and the chlorofluorocarbons (CFCs) add to the greenhouse effect; together, these incremental gases account for about 2.6 W/m2 of the energy radiating back to Earth’s surface, or less than 1 percent of the atmosphere’s greenhouse effect. Schwartz said that a major challenge for the climate research community is representing climate change resulting from those gases in climate models and confidently predicting the consequences of perturbation as the concentrations of those gases change.

Climate Sensitivity: Definition, Importance, and Past and Current Estimates

Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in radiative forcing; it is expressed in kelvin per watt per square meter (K/(W m−2)) (Schwartz, 2007). There have been many estimates of climate sensitivity dating back to the late 1970s (NRC, 1979). Schwartz described attempts that have been made to calculate climate sensitivity from various climate models (Figure 2-4). Large-scale computer models have been used to represent all of the atmospheric processes that govern climate and climate change. In terms of the greenhouse effect, one way to frame sensitivity is to ask: How much would the global temperature increase if the CO2 concentration were doubled? The range of sensitivities in the current models roughly coincides with the Intergovernmental Panel on Climate Change (IPCC) “likely” temperature change uncertainty range of 1.5–4.0°C (for 2090–2099 relative to 1980–1999; IPCC, 2007a).

Side-by-side graphs that plot the values of different national and international assessments and climate sensitivity models. One graph shows values of an assessment by the National Research Council and four assessments by the IPCC, with an average value of about 3 K; the other shows 19 current IPCC AR4 models with a similar average of 3 K


A summary of major national and international assessments and current climate models of climate sensitivity. (left) Values of climate sensitivity for past assessments by the year each assessment was conducted; (right) current (2010) values of 19 different IPCC …

Given the increases in atmospheric CO2, CH4, N2O, and CFC concentrations over the industrial period (1780 to present), and using the IPCC’s 2007 best estimate for climate sensitivity of 3 K, the expected rise in Earth’s temperature due to those greenhouse gas increases is calculated to be 2.1 K. However, the observed increase since 1860 is only 0.8 K. Schwartz explained (based on a published report, Schwartz et al., 2010) that Earth has not warmed as much as expected from forcing by long-lived greenhouse gases for several reasons.

A couple of these possible reasons will be discussed in greater detail below.
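The 2.1 K figure follows from the numbers Schwartz cited. Expressing the 3 K-per-CO2-doubling sensitivity in forcing units requires the standard ~3.7 W/m2 forcing for doubled CO2, a value assumed here rather than stated in the summary; multiplying by the ~2.6 W/m2 of incremental greenhouse gas forcing then reproduces the expected warming:

```python
# Reproduce the expected-warming estimate from the quantities given in the text.
sensitivity_per_doubling = 3.0  # K per CO2 doubling (IPCC 2007 best estimate)
forcing_per_doubling = 3.7      # W/m^2 for doubled CO2 (standard value; assumed here)
ghg_forcing = 2.6               # W/m^2 from incremental CO2, CH4, N2O, CFCs (from the text)

# Climate sensitivity expressed in K per (W/m^2):
lam = sensitivity_per_doubling / forcing_per_doubling  # ~0.8 K/(W m^-2)

expected_warming = lam * ghg_forcing  # ~2.1 K
observed_warming = 0.8                # K since 1860 (from the text)

print(f"sensitivity: {lam:.2f} K/(W/m^2)")
print(f"expected {expected_warming:.1f} K vs observed {observed_warming} K")
```

The roughly 1.3 K shortfall between the two numbers is exactly what offsetting aerosol forcing and/or a lower-than-consensus sensitivity must explain.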

Schwartz elaborated on climate forcing by aerosols and how estimates of aerosol direct forcing are made by linear modeling and radiation transfer modeling. These calculations use aerosol optical depth data collected daily over north central Oklahoma by the U.S. Department of Energy (Michalsky et al., 2010), and cloud albedo and radiative forcing data from daily measurements of effective radius and liquid water path (Kim et al., 2003). He reviewed an illustration of climate forcing over the industrial period (Figure 2-5) from the IPCC’s 2007 report (IPCC, 2007b), which shows that negative aerosol forcing substantially offsets greenhouse gas forcing and that the uncertainty in aerosol forcing dominates the uncertainty in total forcing.

Graph showing negative climate forcing (W m-2) from tropospheric aerosols (cloud albedo effect and direct effects) and positive climate forcing from long-lived greenhouse gases (CO2, CH4, N2O, CFCs), which results in a net positive total forcing


Atmospheric contributors to positive and negative climate forcing over the industrial period (1780 to present). SOURCE: Adapted from IPCC, 2007b.

Although Schwartz would have preferred to identify one reason for the difference between the predicted and observed temperature changes, the current state of research leaves two possible explanations: that aerosol forcing counteracts greenhouse gas forcing, or that climate sensitivity is lower than the consensus value. Estimates of climate sensitivity might be wrong because they largely ignore aerosol forcing. Aerosol forcing has been overlooked because individual aerosol particles are short-lived, as was demonstrated when the Chernobyl reactor accident released a pulse of cesium-137 aerosol particles that resided in the atmosphere for only about 1 week. In contrast, greenhouse gases reside in the atmosphere for decades to centuries. What this reasoning overlooks, however, is that atmospheric aerosols are continually replenished, producing a steady-state level that can influence radiative forcing.
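The replenishment point can be framed as a simple steady-state balance: with a constant emission rate E and mean residence time τ, the atmospheric burden settles at B = E·τ no matter how short τ is. A toy illustration (the numbers are purely illustrative, not from the talk):

```python
# Steady-state atmospheric burden = emission rate x residence time (B = E * tau).
# Illustrative numbers only, chosen to show why short-lived particles still
# maintain a persistent burden when emissions are continuous.

def steady_state_burden(emission_rate_per_day, residence_days):
    """Burden (in the same mass units as the emission rate) at steady state."""
    return emission_rate_per_day * residence_days

# Particles lasting ~1 week, emitted at a steady unit rate:
base = steady_state_burden(emission_rate_per_day=1.0, residence_days=7)

# Doubling emissions doubles the steady burden, even though each
# individual particle still lasts only ~1 week:
doubled = steady_state_burden(emission_rate_per_day=2.0, residence_days=7)

print(base, doubled)  # 7.0 14.0
```

Because the burden scales with the emission rate, continuously emitted aerosols maintain a persistent radiative influence despite each particle's short atmospheric lifetime.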

In summary, Schwartz said that climate sensitivity and aerosol forcing are intrinsically coupled, both in climate models and in empirical determination of sensitivity. As a result, if the climate community is going to make the necessary advances in understanding climate sensitivity, then it must determine the parameters that drive aerosol forcing with much greater accuracy than at present. Required going forward, said Schwartz, are multiple approaches that include laboratory studies of aerosol processes, field and satellite measurements of aerosol properties and processes, and consideration of aerosol processes in atmospheric chemistry transport models. The community must evaluate the various models by comparing model results with observations and then using the best models to calculate the forcings for inclusion in global climate change models. In addition, the community must better understand many of the aerosol processes in order to represent them in models (Figure 2-6). Schwartz stressed the need for multiple types of measurements at the same place and time and for laboratory experiments that provide details that cannot be extracted from field measurements.

Diagram of natural and man-made emission sources and aerosol processes that affect climate (e.g., condensation, evaporation, surface chemistry, radiation transfer in clouds) that must be understood and represented in models


Aerosol processes that must be understood and represented in models. SOURCE: Ghan and Schwartz, 2007. © American Meteorological Society. Reprinted with permission.

Schwartz stated that radiative forcing by incremental greenhouse gases already in the atmosphere could potentially lead to dangerous interference with the climate system. Given the present uncertainty in climate sensitivity, the estimated point in time at which cumulative greenhouse gas emissions reach the maximum allowable level ranges from about 30 years in the past to 30 years in the future. Climate sensitivity must therefore be known with much greater accuracy for effective development of energy strategies, yet atmospheric aerosols offset an unknown fraction of the warming forcing of incremental greenhouse gases. The present uncertainty in aerosol forcing greatly limits the accuracy with which climate sensitivity can be determined, making fundamental aerosol research both essential and urgent.


Mort Lippmann of New York University (NYU) discussed the state of science of particles and health research, characterizing our understanding of the correlation between particle chemistry and health as primitive. Airborne particles, also known as particulate matter (PM), are regulated by the weight per unit volume of total PM without regard for chemistry. He likened that situation to regulating all airborne gaseous pollutants in sum. He noted, however, that over time, reductions in the total mass of PM correlate with improvements in human health. Figure 2-7 depicts particle volume distributions by particle size for various PM sources.

Line graph of particle size in microns versus particle volume (μm3/cm3) for various sources of particulate matter. Sources of PM include forest fire, urban environments, desert air, and sea spray, with two main peaks, one centered at 0.5μm and the other at 10μm


Particle volume distributions by particle size for various PM sources. SOURCE: Lippmann, 2010.

Initial PM regulations addressed particles between 2.5 and 10 microns. They were modified, however, when research showed that fine particles, defined as those smaller than 2.5 microns (PM2.5) in aerodynamic diameter, are capable of reaching deep into the lungs and are associated with mortality and morbidity from causes such as cardiovascular diseases, liver disease, and, to a small extent, lung cancer (Pope et al., 2004). PM10 refers to particles with aerodynamic diameter less than 10 microns (and includes PM2.5). Data reveal a relationship between particle composition and adverse health effects (Figure 2-8), but the relationship is not well understood.

Graph of average relative mortality risk coefficients for PM2.5 and various other different chemicals, such as nickel, zinc, sulfate, iron, nitrate, and many others


Chemical composition and relative mortality risk coefficients averaged across 60 metropolitan areas for which fine particulate matter (PM2.5; particles smaller than 2.5 microns) speciation data are available. SOURCE: Reproduced with permission from Environmental …

Air pollution enters the body through the nose and mouth. The size of a particle determines where it ultimately ends up in the lungs. Fine particles penetrate and are deposited deep into the lungs. The soluble components, said Lippmann, are extracted and enter the blood stream, traveling throughout the body, which explains the occurrence of adverse effects in the liver, brain, heart, and other tissues. Research is starting to find health effects associated with particles in the 2.5–10 micron range, particularly the exacerbation of asthma.

In summarizing the current knowledge about PM, Lippmann said that measuring PM2.5 mass has been a useful surrogate index of adverse health risks. It correlates better with cardiovascular effects than other monitored air pollutants. However, its risk coefficient varies, presumably resulting from differences in PM composition, and he called for studies aimed at better understanding the relationship between PM composition and adverse health effects. “The epidemiology indicates we reduced the public health impact just by going after the messenger, but you can only go so far following the messenger,” said Lippmann. “You have to start getting at the active components.”

However, PM10 mass, which is often dominated by contributions from PM2.5, is a poor indicator of the respiratory system risks associated with the fraction of particles that deposits within the tracheobronchial airways. In addition, the effect of PM10 or PM2.5 composition on human health is unknown. Neither PM2.5 nor PM10 mass is useful as an index of risk associated with ultrafine PM (particle diameter less than 0.1 micron). Lippmann noted that, at the time of the workshop, the Environmental Protection Agency (EPA) was planning to lower the standard for fine-particulate mass. The real focus, in his opinion, should be on characterizing the chemical components of fine particulate matter, understanding their impact on human health, and then developing regulations that reduce levels of the most toxic components.

Research in Identifying Health Impacts of PM Exposure

Research to study PM chemistry is under way. At NYU, Lippmann leads the National Particle Component Toxicity Initiative, which aims to compile data on elemental tracers from different sources to enable source apportionment analysis that can be used to determine which sources generate PM mass. Particles in New York City, for example, are dominated by oil combustion. As a result, levels of nickel and vanadium in New York City air are more than an order of magnitude higher than in the rest of the United States. In comparison, tracers associated with steel industry furnaces dominate as the primary PM source in Birmingham, coal sources dominate in western Pennsylvania, and wood burning dominates in Seattle.

Lippmann’s group has looked at the distribution of fine particle components within different parts of New York City and found interesting results. Daily samples collected over the course of a year showed that the relative amounts of nickel and vanadium varied between winter and summer months and between the northern and southern parts of the city. This finding led the researchers to identify the burning of residual oil as the source of these metals. The differences were explained by the fact that there are two primary sources of residual oil pollution: ocean-going ships coming into the Port of New York and New Jersey, and boilers used to provide heat and hot water to New York City buildings. These findings are important because Lippmann and others have determined that nickel and vanadium vary “much more significantly with mortality than any other measured elemental component” (Lipfert et al., 2006; Lippmann et al., 2006).

Lippmann said that, in a sense, people are simply aging faster from the chronic exposure. Large cohort studies are now finding significant health effects from exposure to fine PM. For example, the Women’s Health Initiative Study, conducted by the University of Washington, has shown that previously healthy women exposed to fine PM develop elevated cardiac disease over time (Zhang et al., 2009). Another study focusing on exposure to particles smaller than 2.5 microns in nine heavily polluted California counties found that overall mortality rose with increasing exposure to fine particles, as did mortality from respiratory disease, cardiovascular disease, and diabetes (Ostro et al., 2006).

Lippmann’s group also has been conducting animal studies focused on the effects of chronic PM exposure. In one study, he and his collaborators exposed a strain of mice bred to be susceptible to exposure-induced cardiovascular disease to concentrated ambient aerosol from Sterling Forest State Park (an index of the regional eastern U.S. aerosol, much of which is secondary aerosol from the Midwest and Southwest). Exposure lasted 6 hours, 5 days a week, for 6 months. The bottom line from these experiments is that the mice experienced a wide range of health effects, including cardiac dysfunction, atherosclerosis, obesity, and metabolic syndrome (Sun et al., 2009; Ying et al., 2009).

Recently, Lippmann’s group has been collecting and analyzing daily fine PM filter measurements made by state agencies. Because funds for this work are limited, his group has restricted its study to data from Detroit and Seattle. Detroit was chosen as typical of the eastern United States, where states are struggling to meet the annual PM2.5 standard of 15 micrograms/m3, and Seattle as representative of a city that easily meets this standard, with an annual average PM2.5 of 9 micrograms/m3. Seattle’s nickel levels are much higher than Detroit’s, largely because it is a big seaport handling ships that burn cheap heavy oil. As expected, fine particle mass levels are much higher in Detroit.

In terms of cardiovascular disease in the two cities, Seattle’s rates are higher, most likely because of elevated nickel, vanadium, and sulfur levels associated with heavy oil burning. Lippmann and his colleagues observed similarly elevated levels of cardiovascular disease near the largest Asian nickel refinery located in China, and 1,000 miles downwind from a nickel smelter in Ontario. Nickel smelters, Lippmann noted, do not emit vanadium, although they do emit chromium and iron in addition to nickel.

In another study, Lippmann’s group measured a variety of biological markers in two populations of women in China. The two groups were similar except that one group lived near a nickel smelter while the other lived on the other side of a mountain from a smelter. Particulate levels in the two cities were comparable, at about the average of a typical eastern U.S. city, but nickel levels were 76-fold higher in the city near the smelter. Copper, arsenic, and selenium were higher, too, but not by much. Markers of cardiovascular disease were significantly elevated in the women who lived in the smelter city compared to the matched cohort. From the data, Lippmann concluded that nickel is likely to be the agent most responsible for PM2.5-induced cardiovascular effects, and that copper, arsenic, and selenium may play a role. A reduced capacity for endothelial repair, as measured by changes in several biomarkers, may partially explain the critical role of nickel in PM2.5-associated cardiovascular disease.

Lippmann also discussed a study that he and his collaborators are conducting on cardiovascular plaque progression produced by subchronic exposures to concentrated ambient PM2.5 (CAPs), which is prepared by taking fresh samples and concentrating the particles without filtration. Using a mouse model of cardiovascular disease, his team is comparing the effects of breathing CAPs prepared from air collected at Sterling Forest, New York, and the Mount Sinai School of Medicine in Manhattan with those produced by comparable exposures to freshly generated sidestream cigarette smoke, whole diesel engine exhaust, and the gaseous component of whole diesel exhaust.

In this study, CAPs exposure was about 105 micrograms/m3. Sidestream smoke averaged 480 micrograms/m3, but it also contained all the other products of tobacco combustion, including carbon monoxide, cyanide, and nitrous oxide. Whole diesel exhaust had particulate levels that averaged 436 micrograms/m3, plus combustion gases. When the researchers examined plaque accumulation, all three particle-exposed groups had significant plaque excesses compared to mice exposed only to the gaseous component of diesel exhaust. The biggest increase was seen in the group exposed to CAPs. It is likely, said Lippmann, that the metal content of the particles drives aortic plaque buildup.

In summary, said Lippmann, epidemiological studies using speciation data show stronger associations of cardiopulmonary effects with transition metals than with PM2.5 mass, and toxicological studies in animals support the influence of transition metals. Data from the Chemical Speciation Network (CSN) on PM2.5 components have been essential to demonstrating stronger associations for metals than for PM2.5 mass, but those data have been too limited in frequency to adequately support definitive time-series studies and too limited in spatial coverage to adequately identify the effects of PM components that are not uniformly distributed. He recommended expanding the CSN to support epidemiological research that could provide a sound basis for developing National Ambient Air Quality Standards for toxic PM components that contribute only small fractions of PM mass, permitting more targeted controls that benefit public health at lower overall cost and societal disruption.

It is important to keep in mind, Lippmann added, that the effects in mortality and hospital admissions do not affect everybody. Sensitive segments of the population are driving these statistics. Therefore, the impact of the particles will be felt mainly by elderly people, not by healthy young people.


Michael Hochella from Virginia Polytechnic Institute and State University started his talk by giving his major take-home message: “Nanoparticles are everywhere. We’re breathing them right now. They’ve been around the planet since its origin.” There are vastly more nanoparticles present every day, in biological systems and in the atmosphere, than humans can ever manufacture. “What’s going to be most important as we move into the age of nanotechnology is the overprint that humans can already put on what already exists,” he explained.

Humanity has previous experience with a hazardous nanomaterial: asbestos. A complex set of minerals, asbestos is a nanomaterial in Hochella’s view because, even though it is a long fiber, two of its dimensions are in the nanometer range, and it is these dimensions that enable it to cause pulmonary fibrosis, lung carcinoma, and mesothelioma. Asbestos was first recognized as a human carcinogen in the 1950s. Two decades later, the EPA put in place the first workplace regulations. These were followed in 1986 by regulations pursuant to the Asbestos Hazard Emergency Response Act. Hochella characterized both sets of regulations as responsible reactions to a true hazard, but he also opined that the public and politicians have misused these regulations, wasting billions of dollars in misguided asbestos abatement efforts. He stated that society can learn from its mistakes in dealing with asbestos and avoid a similar waste of resources in dealing with nanomaterials.

The industrial production of nanomaterials has become a huge business, producing several hundreds of billions of dollars worth of products in the biotech, energy, electronics, and aerospace industries. Hochella predicted that, over the next couple of years, the economic value of nanomaterials will exceed $1 trillion. Although industrial output of nanomaterials is significant, it is dwarfed by Earth’s generation of nanoparticles, which runs to hundreds of teragrams per year (a teragram is about 1 million metric tons) (Figure 2-9). Nanoparticles that Earth produces include nanosilver, fullerenes (C60), and carbon nanotubes.

Drawing of a scale with human production of nanoparticles on one side (indicated by photo of a factory) and natural production of nanoparticles (indicated by an image of the Earth) on the other side. The natural production (100s of Tg/yr) far outweighs the human production (<<1 Tg/yr), indicated by a tipping of the scale


An inventory of human versus natural production of nanoparticles. SOURCE: Hochella, 2010.

Even more than 10 years ago, Hochella said, it was already possible to examine nanomaterials at the atomic level in incredible detail, including measuring the electronic spectra of individual atoms. He showed a scanning tunneling microscopy image of a pyrite surface with individual iron atoms visible (Rosso et al., 1999). Now, however, many groups, including his, are stepping back to look more at changes in particles due to environmental influences, such as the dissolution of lead from lead sulfide. It is now possible to examine individual sites, such as defects on surfaces, as they dissolve, an important process to understand because, as these particles dissolve, they release dangerous species into solution that become bioavailable.

The whole particles can also be highly bioavailable, particularly as they become smaller and more soluble. However, contrary to thermodynamic prediction, nanoparticles sometimes become less soluble as they become very small, and sometimes their solubility remains unchanged. Hochella noted that these types of studies can also provide information on radiative forcing of interest to atmospheric scientists.

Hochella explained that nanoparticles have existed since the beginnings of the universe. Nanodiamonds, about 2 nanometers or 150 atoms in diameter, have been recovered from meteorites left over from the beginning of the solar system. Conditions during Earth’s early history were also conducive to the formation of nanoparticles. As a result, biological systems have been exposed to nanoparticles since life first arose on the planet. Hochella and his colleagues are studying how biological systems interact with and even make use of nanoparticles. There are bacteria, for example, that respire using iron nanoparticles in place of oxygen as an electron acceptor. These bacteria alter their respiration rate in response to changes in nanoparticle size (Hochella et al., 2008).

Nanoparticles Enter the Environment in Many Ways

“What do nanoparticles in the oceans have to do with climate change?” asked Hochella. A great deal, it turns out. Iron oxide nanoparticles deposited into rivers and released from glaciers eventually end up in the world’s oceans (Raiswell et al., 2006). These nanoparticles provide most of the iron that ocean-dwelling phytoplankton require for photosynthesis, the most important biological CO2 sink in the ocean.

At the largest non-weapons-related Superfund site in the United States, a contaminated mining site in western Montana the size of Germany, researchers have found previously unknown nanoparticles, including titanium dioxide and lead nanoparticles, in the rivers draining into the site from Butte, Montana. These findings prompted Hochella and his group to attempt to better understand how metals as toxic as lead, copper, zinc, arsenic, and cadmium move in the river system. Fifteen years ago, when the group started this effort, nobody understood how these metals were being mobilized, that is, whether they were being carried in river water on organic molecules, on bacteria, or as nanoparticles.

Using flow field-flow fractionation techniques, Hochella’s team found that nanoparticles were present in the water. Upon further analysis using mass spectrometry and transmission electron microscopy, they found that the nanoparticles were covered with metals. Nanoparticles of the iron mineral known as goethite, for example, were carrying arsenic hundreds of kilometers from its source and, in fact, into drinking water. Nanoparticles of ferrihydrite, a different iron oxide mineral, carried arsenic, zinc, and copper into the system, while nanoparticles of the titanium dioxide mineral brookite carried lead into the system. Now that these nanoparticles have been identified, said Hochella, their role in environmental and biological processes can be studied in detail.

As an example, Hochella discussed schwertmannite, a rare but important iron oxyhydroxide sulfate mineral. Schwertmannite nanoparticles have long, thin whiskers that are visible at 1.8 million-fold magnification and can carry arsenic. Such knowledge enables research focused on understanding how the nanoscale structure of these particles influences their interaction with biological systems.

Ubiquitous Nanoparticles

Recently, Hochella’s team obtained EPA samples taken from wastewater treatment plants. They discovered that manufactured silver nanoparticles are common in the sludge from wastewater treatment plants that is applied to farm fields. Atomic-scale electron microscopy, combined with electron diffraction and Fourier transform analysis, revealed that these are silver sulfide nanoparticles (Kim et al., 2010). These particles likely enter the ecosystem when they are shed from clothing treated with silver nanoparticles that serve as antibacterial agents.

He presented an inventory of nanoparticle occurrence (Figure 2-10), characterizing them as manufactured, incidental (byproducts of diesel emissions or other industrial activity), or naturally occurring. In discussing this inventory, he said, “We haven’t found the way nature makes a quantum dot yet, but we’re working on that.”

Table comparing the occurrence of select nanoparticles (including: Au, Ag, Fe, C60, etc.) from different sources: manufactured, direct incidental (auto exhaust), indirect incidental (mining), and natural


An inventory of nanoparticle occurrence. CNT = carbon nanotube; QD = quantum dot. SOURCE: Hochella, 2010.

Hochella concluded his talk by noting that there are many sources of natural nanoparticles: dust in the atmosphere, sea spray, and even hydrothermal vents in the deep ocean that bleed seawater as a supercritical fluid heated up to 350°C, ripping all kinds of elements out of the ocean crust and thereby creating nanoparticles in the deep ocean. Melting icebergs, rivers, and volcanoes also add nanoparticles to the environment. Figure 2-11 shows a global budget of naturally occurring inorganic nanoparticles. Still unknown is what the additional contribution—and impact—of manufactured particles will be, given that the nanotechnology revolution is just starting.

Diagram showing reservoirs and fluxes of naturally occurring nanoparticles on land (continents), continental shelves, and in open oceans, and moving between land and water via rivers, volcanoes, and sea spray


The global budget for naturally occurring inorganic nanoparticles. SOURCE: Hochella, 2010.


Gerry McDermott of the University of California, San Francisco, talked about visualizing where particles end up inside cells and quantifying the number that have entered the cell. Many new imaging tools have been developed over the past few years, including soft x-ray tomography and cryogenic light microscopy (McDermott et al., 2009), that have not yet been used to study nanoparticles in cells but have a huge potential for that purpose.

The rationale for imaging small particles in cells, said McDermott, is to know the location of the small particle, particularly if the goal is to develop a therapeutic delivery system. It is important to know, for example, if a particle delivers a drug to the intended target inside the cell. “Does it go inside the nucleus or does it get stuck in a membrane somewhere?” he asked.

For environmental nanoparticles, intracellular destination is an important aspect of whether the particle can cause a change in cell phenotype. It is also important to understand whether small particles alter the subcellular architecture of the cell or the locations and concentrations of specific molecules or molecular complexes, and whether such alterations cause changes in the fundamental biochemistry that takes place inside the cell.

Two new imaging techniques provide important insight about these issues. Soft x-ray tomography for high-resolution three-dimensional imaging of single cells allows for the direct localization of small, electron-dense nanoparticles (LeGros et al., 2005). Cryogenic light microscopy allows the location of fluorescently tagged small particles or cellular structure inside the cell to be identified (LeGros et al., 2009). Images from the two techniques correlate well, which permits combining data on cellular imaging with molecular localization.

Soft X-ray Tomography

Computed tomography (CT) is an iconic clinical diagnostic technique that takes a series of x-ray snapshots and computationally assembles them into a three-dimensional image. Compared to the two-dimensional projection afforded by a conventional x-ray film, CT imaging provides exquisite insight into the internal structure of the human body.

Soft x-ray tomography miniaturizes this concept to provide the same exquisite detail about the internal structure of a cell. With this technique it is feasible to visualize in great detail how a cell responds to environmental factors such as drug molecules or small particles. The structural detail can also yield new insights into fundamental processes of cell biology, such as the cell cycle.
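The “computationally assembles” step that CT and soft x-ray tomography share can be illustrated with a toy unfiltered backprojection: each one-dimensional projection is smeared back across the image plane along its acquisition angle and the results are summed. This is a minimal sketch, not the filtered reconstruction a real instrument uses; NumPy and SciPy and all names below are assumptions of this illustration, not anything presented at the workshop.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    """Toy forward projection: rotate the image and sum along columns
    to get one 1-D 'x-ray snapshot' per viewing angle."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection: smear each projection back across the
    image plane along its acquisition angle and average the results."""
    size = sinogram.shape[1]
    recon = np.zeros((size, size))
    for proj, a in zip(sinogram, angles_deg):
        smear = np.tile(proj, (size, 1))       # constant along each column
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

# A point-like "particle" in a small test image
img = np.zeros((64, 64))
img[40, 22] = 1.0
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = backproject(radon(img, angles), angles)
print(np.unravel_index(recon.argmax(), recon.shape))  # peak near (40, 22)
```

The unfiltered sum blurs each point with a 1/r halo, which is why production reconstructions apply a ramp filter to the projections first; the toy version nonetheless recovers the particle’s location.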

Soft x-ray tomography offers many advantages over conventional imaging techniques such as light microscopy and electron microscopy. Tomography provides spatial resolution ranging from 50 nanometers down to as fine as 10 nanometers, and it does so in three dimensions. It is fast, requiring only 2 to 3 minutes to collect a full tomographic data set, which makes high specimen throughput possible. Cells are imaged whole, hydrated, unfixed, and unstained, in a near-native state. The instrument’s field of view is large enough to image one eukaryotic cell or about 200 bacterial cells at a time. And, unlike conventional microscopy, which uses fluorescent tags to image specific cellular structures, soft x-ray tomography images all of a cell’s internal structures at once.

The instrument concept is simple (Figure 2-12), although it only became possible to build the necessary focusing optics in the past 3 years because of the revolution in nanofabrication. Because the refractive index of every material is very nearly 1 for x-rays, conventional refractive lenses are useless; focusing is instead done with nanofabricated Fresnel zone plates. The outer rings of the zone plate, shown in Figure 2-12, determine the spatial resolution of the microscope.
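The dependence of resolution on the outer rings can be sketched with the standard Rayleigh estimate for a Fresnel zone plate: resolution ≈ 1.22 × Δr, where Δr is the width of the outermost zone. The zone widths below are illustrative values chosen to bracket the resolutions quoted above, not figures from the talk.

```python
def zone_plate_resolution_nm(outer_zone_width_nm: float) -> float:
    """Rayleigh resolution of a Fresnel zone plate objective:
    approximately 1.22 times the width of the outermost zone."""
    return 1.22 * outer_zone_width_nm

# Illustrative (assumed) zone widths: ~41 nm outer zones give roughly the
# 50 nm resolution quoted in the text; ~8 nm zones would approach 10 nm.
print(round(zone_plate_resolution_nm(41.0), 1))  # 50.0
print(round(zone_plate_resolution_nm(8.0), 2))   # 9.76
```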

Schematic of a full-field transmission soft x-ray microscope, which includes in a linear setup: an x-ray source, condenser zone plate, the specimen, an objective zone plate, and CCD camera. A detailed image of the grooves of a zone plate is also shown


Full-field transmission soft x-ray microscope. SOURCE: McDermott et al., 2009.

Although simple in concept, in practice the instrument is large and expensive, with many pipes and high-vacuum chambers. The x-rays are generated by the Advanced Light Source at Lawrence Berkeley National Laboratory and are delivered to a separate room that houses the microscope. McDermott hopes that, in the future, the microscope will be able to use less expensive plasma x-ray sources. Imaging of whole cells is done in what is known as the water window, a region of the spectrum in which water is relatively transparent to x-rays while carbon-rich biomolecules absorb them strongly.
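As a rough check on where the water window lies, the carbon and oxygen K-absorption edges (approximately 284 eV and 543 eV, textbook values not given in the talk) can be converted to wavelengths with the standard relation λ(nm) ≈ 1239.84 / E(eV):

```python
def ev_to_nm(energy_ev: float) -> float:
    """Convert photon energy (eV) to wavelength (nm): lambda = hc / E."""
    return 1239.84 / energy_ev

# The water window spans roughly the carbon K-edge (~284 eV) to the
# oxygen K-edge (~543 eV); approximate textbook edge energies assumed.
print(round(ev_to_nm(284.0), 2))  # 4.37 nm, carbon K-edge
print(round(ev_to_nm(543.0), 2))  # 2.28 nm, oxygen K-edge
```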

McDermott showed tomographic reconstructions of a yeast cell (Figure 2-13), and of malaria parasites and gold nanoparticles inside a red blood cell (Figure 2-14). The latter images were used to track how the malaria parasite takes up nutrients once it has invaded a red blood cell (Hanssen et al., 2011).

Side-by-side images of a yeast cell. One is a tomographic image of a slice of yeast cell and the other is a three-dimensional reconstruction image of the same cell showing the different internal organelles


One slice from a tomographic reconstruction (left) and the three-dimensional reconstruction of a yeast cell clearly show the individual organelles inside the cell. SOURCE: McDermott, 2010.

Side-by-side three-dimensionally tomographic images of two different red blood cells showing internal contents. One shows malaria parasites and the other shows gold nanoparticles contained within an organelle


Tomographic image of red blood cell containing malaria parasites and gold nanoparticles. Parasites were allowed to invade red blood cells containing gold particles. The image is a rendered model overlaid onto a virtual section. The parasite surface is (more...)

McDermott also showed images detailing the internal structural changes that occur as the pathogenic yeast species Candida albicans grows and transforms from nonpathogenic to highly pathogenic. These images show that the yeast expands the number of mitochondria-filled tubes during this transformation, suggesting that tube formation could be a fruitful target for drug disruption. Collaborating with a group at Stanford University, McDermott’s group developed and tested a series of protease-resistant peptide-like drugs known as peptoids as antifungal agents capable of blocking tube formation (Chongsiriwatana et al., 2008). In tests of two different peptoids, the researchers found that both were effective at greatly reducing the growth of the mitochondria-filled tubes. However, when fungi treated with the two peptoids were imaged using soft x-ray tomography, they found that one of the two agents produced large changes in the fungal nucleus that could be problematic if that agent ever proceeded to human clinical use (Figure 2-15) (Uchida et al., 2009).

McDermott also showed images obtained using cryo-fluorescence microscopy in combination with x-ray tomography (Figure 2-16). He concluded by noting that these changes would not have been seen with standard microscopy, which highlights the potential benefits of using these new techniques in cell biology studies.

Tomographic images of C. albicans cells after various treatments. Three-dimensional reconstructions of the cells are shown in comparison with the slice images


Soft x-ray tomography of C. albicans cells after treatment with two peptoids. The top seven images (A–F) show internal changes seen with the first of two peptoids, while the bottom image (G) shows the larger nuclear changes produced by the second (more...)

Four images from left to right of correlated cryo-fluorescence, x-ray tomography, and a three-dimensional reconstruction of the cell that show the different internal organelles


Correlated cryo-fluorescence and x-ray images of Schizosaccharomyces pombe, with the fluorescence image showing labeled vacuoles on the left and x-ray images of unlabeled vacuoles in the middle two images. The tomographic reconstruction on the right shows (more...)


In response to a question from Pedro Alvarez of Rice University about the size distribution of natural versus manufactured nanoparticles, Hochella said, “These natural nanomaterials show the same fascinating size-dependent properties as manufactured or synthetic ones do.” He added that nanoparticles with the same chemical composition and molecular structure from the two sources have similar distributions of physical, chemical, electrical, and magnetic properties. However, another participant noted that a key difference between natural and synthetic nanoparticles is that synthetic nanoparticles can be much more monodisperse in size and shape, and more homogeneous in chemical composition, than naturally occurring colloidal particles.

Barbara Karn of the EPA (now at the National Science Foundation) commented that manufactured nanomaterials often contain elements, such as indium, that are not found in naturally produced nanoparticles. She asked if that was a concern, particularly with regard to water treatment plants and their discharges into waterways. Addressing this comment, Hochella said that for some elements, such as lead or gold, nature has done a good job of concentrating them in locations that humans then mine; those materials are probably found in naturally occurring nanoparticles. For others, however, as Karn noted, humans are now mining materials from much deeper in Earth, extracting elements that would not otherwise be exposed on the planet’s surface. These elements are being concentrated and put into products, and the result is that humans are “dramatically changing the distribution of that element on the planet’s surface.” In some cases this redistribution can make a material more bioavailable, and possibly toxic, which is why there is a need to develop a better understanding of the life-cycle impacts of synthetic nanoparticles on the environment. In fact, elaborated Hochella, that is why Virginia Tech and other institutions around the world are starting sustainable nanotechnology centers. He and his colleagues at Virginia Tech, for example, are studying how cadmium, a critical component of quantum dot nanoparticles, gets into the environment and what its fate will be. By understanding such processes, it may be possible either to move away from using cadmium and other potentially toxic materials or to design ways to produce these materials in a more environmentally friendly way.

Lippmann followed up by asking Hochella about nanoparticulate silver and whether there is any idea of what the environmental consequences will be. Hochella replied that the answer is no, which is exactly why research is needed now. He noted that nanoparticulate silver is included in a large number of consumer items because of its antimicrobial properties, but there is concern about what will happen to the planet’s microbial ecosystems when large quantities of nanoparticle silver enter and become concentrated in the environment.

Lippmann inquired about what asbestos has to do with nanoparticles. Hochella responded that he defines a nanoparticle as one with at least one of its three dimensions in the nanometer range, and asbestos meets that criterion. Clay minerals fit this definition in one dimension, and it is that dimension that in part imparts the special properties of clay and many other minerals as well. He added that catalyst researchers, who worry about preventing agglomeration, can learn from studying the behavior of natural fibers such as asbestos. Lippmann added that even when nanoparticles do agglomerate, they still have nanoscale features on their surfaces that must be considered when designing and studying such materials.

Finlayson-Pitts noted, “Ultimately what we’re interested in are the impacts of particles, the good and the bad, and in the case of a bad, how do we mitigate? What should we control?” She remarked that Schwartz, Lippmann, and Hochella addressed different aspects of trace metals and organics carried on the surface of nanoparticles, and she asked if there is an overriding control strategy that will help mitigate the potential impacts on climate and health.

Lippmann responded, “There has been an effective and continuing effort to reduce the emissions of sulfur dioxide into the atmosphere, with the Clean Air Act leading to a 50 percent reduction [in airborne sulfur dioxide levels].” Reducing the sulfur content of diesel fuel has had an additional substantial impact on reducing sulfate aerosols, which he predicted “will change the reflectivity of the atmosphere, because sulfate is the best light scatterer of all of them.”

Schwartz noted that sulfur emissions in general, and particle emissions specifically, are increasing in the developing world. He added that a reduction in aerosol emissions may increase the greenhouse effect, which may or may not be a good outcome. Hochella suggested that increasing the output of iron-containing nanoparticles, which could boost phytoplankton productivity, could help ameliorate global warming. This possibility, he said, points to the difficulty of developing an overriding control strategy for nanomaterials.

A participant asked the speakers to address any outstanding issues in characterizing small particles. Lippmann replied that he would like to see more work on characterizing the composition of particles in the air and the health implications of the composition given that the tools to do so, such as x-ray fluorescence, are on the threshold of making the needed measurements. He said that such data would enable control efforts to focus on emissions of specific, toxic materials rather than particles in general, which could potentially save a great deal of money.

Schwartz said that he would like to see research develop a better understanding of the processes that are responsible for the formation and growth of atmospheric aerosols. “I think we are really on the cusp of a revolution in terms of characterization of the composition of these newly formed particles,” he said. “I think we’ll find that many of the aerosol chemistry and growth models that are being used right now to try to estimate aerosol impacts on climate are going to turn out to be all wrong.” An audience member added that such studies should also include surface properties because, as the catalyst community knows well, surface properties and composition are both important for determining a particle’s behavior.

Copyright © 2012, National Academy of Sciences.
Bookshelf ID: NBK98070