Modifying QuEChERS for complicated matrices- High Fat Samples

This post is part of a series on QuEChERS. Here are links to the previous two posts, in case you wish to catch up before reading this one:

QuEChERS dSPE selection-which one is best?

Modifying QuEChERS for complicated matrices- Dry Samples

(See Jana’s post for further details on dry samples: QuEChERS approach optimization for low-moisture matrices – case of honey and brown rice flour.)

As in my last post for this series, I suggest using the following references:

From the official QuEChERS website (QuEChERS.com), maintained by CVUA Stuttgart, use these specific documents:

For Extraction (Stage 1): https://www.QuEChERS.com/pdf/reality.pdf

For dSPE Cleanup (Stage 2): https://www.QuEChERS.com/pdf/cleanup.pdf

For greater detail on modifications: “A review of recent developments and trends in the QuEChERS sample preparation approach”, Rejczak & Tuzimski, De Gruyter Open Chemistry, 2015, 13(1), 980–1010. https://doi.org/10.1515/chem-2015-0109

And again, here is a link to the USDA database at https://fdc.nal.usda.gov/ to obtain a listing of protein, total lipids, fatty acids, carbohydrates, sugars, and cholesterol (a sterol).

 

Before we start discussing more modifications to QuEChERS, I wanted to highlight some characteristics of QuEChERS sorbents that can help us solve specific matrix challenges.  Each sorbent has primary characteristics that we usually use to select it for a particular sample matrix, as well as secondary interactions that can complement the primary effects of another sorbent, as shown in the table below.

Sorbent | Primary Action | Secondary Action
MgSO4 | excess water | (none)
PSA | sugars, fatty acids, organic acids | polar pigments (such as anthocyanins), some sterols, ionic lipid components
C18-EC | nonpolar interferences, long-chain hydrocarbons, lipids, waxes | proteins, starches, long-chain fatty acids and organic acids, some pigments, some sugars
GCB | pigments | some lipid components (such as sterols), planar polyphenols, flavonoids

For example, “lipids” and “sterols” are listed as both primary and secondary interactions in the chart above.  Lipids include fats and oils (commonly known as triglycerides), as well as some waxes, fatty acids, sterols (such as cholesterol), and ionic lipids (such as phospholipids). To remove lipids from a sample, you would primarily use C18-EC sorbent, but looking at the secondary interactions in the chart, PSA and GCB also have affinity for some lipid components, such as sterols and ionic/charged lipid molecules. Consequently, the sorbents are often used in pairs to take advantage of this and to remove other matrix components along the way. For lipid removal, it is very common to see PSA and C18-EC used together: C18-EC removes the bulk of the lipids, while PSA complements it by removing additional lipid molecules, likely the smaller or more polar ones that C18-EC retains less effectively. GCB can also retain nonpolar lipid components such as sterols, but it is used less often because it can also retain planar analytes.

 

High Fat Samples

Samples containing high amounts of lipids or solid fats are perhaps the most challenging matrices. Here are the techniques I would try for lipid removal, starting with the most preferred. (Please note the “preference” is my opinion.)

dSPE with PSA/MgSO4/C18-EC – This is the classic QuEChERS technique and the example we just discussed above for a sample containing lipids and sterols. PSA and C18-EC are often the perfect combination to remove these matrix interferences. GCB can also be added if necessary, preferably when none of the analytes are planar molecules. Sometimes GCB is used in small amounts even with planar analytes, although caution is recommended. If this is attempted, a small amount of chlorophyll remaining in the extract indicates that planar analytes also remain intact in the extract. Here are some examples of this technique using PSA/MgSO4/C18-EC.

 

Cooling/Freezing the QuEChERS extract – As described on the QuEChERS website mentioned earlier (https://www.QuEChERS.com/pdf/cleanup.pdf), extract aliquots from Stage 1 can be stored in a freezer, anywhere from 2 hours to overnight, to precipitate out the fats and waxes. The remaining supernatant may still need to undergo dSPE to remove other matrix interferences before analysis by GC or LC. As with all techniques, validation tests should be performed to ensure adequate recovery of the target analytes and efficiency of the method. With this technique in particular, it is important to make sure that the target analytes are not precipitated along with the lipid layer.  One acceptable approach is to add a surrogate or internal standard prior to this step to account for such losses.  Here are some examples of this technique.

 

SPE cartridge (cSPE) cleanup – In some cases, the amount of fat is too high to remove efficiently with the techniques mentioned above.  Cartridge SPE gives the analyst the option to use more sorbent, which removes more of the fatty matrix. This can be accomplished with PSA or C18 sorbent or a combination of the two. Often a simple pass-through technique can be used, as discussed in the following blog post: the sample extract is passed straight through the cartridge and the fatty matrix is retained on the cartridge. This avoids the extra load, rinse, and subsequent elution steps typically involved with SPE.
Fatty Acid Removal from QuEChERS-Type Extracts with Quick and Easy PSA Cleanup Cartridge Pass

Here are some examples of this technique.

 

Hexane/nonpolar solvent partitioning– If the above procedures are not sufficient to remove lipids from a sample, another option is to incorporate some amount of hexane or other nonpolar solvents into the QuEChERS extraction (Stage 1) or perhaps even following QuEChERS dSPE (Stage 2). Here are some examples of using this concept at Stage 1.

 

Additional Resources:

 

Thanks for reading our discussion of high fat sample matrices. Please look for the next post on samples containing high amounts of sugars and starches.

 

Optimizing Splitless Injections: Initial Oven Temperature and Solvent Polarity

Beyond optimizing the inlet parameters of temperature and splitless valve time, initial oven temperature also plays an important role for splitless injections. When a liquid sample is injected into a GC, the first goal is to vaporize the sample within the inlet and transfer it to the column. As you know, this sample volatilization and transfer takes longer during splitless injections due to the slow inlet flow. Because of this, we want to condense (focus) the sample at the head of the column, to prevent it from moving through the column before the analytes have completely transferred onto it. If we do not focus properly, broadened and poorly shaped peaks will result, especially for early eluting compounds. This can also lead to poor resolution of volatiles.

In order to focus the sample in a tight band at the head of the column, a low initial oven temperature is required. There are two different approaches for setting this temperature, based upon your solvent and analyte boiling points. If your sample contains relatively volatile analytes that have a boiling point close to that of the solvent, utilize what’s known as “solvent focusing”. This involves setting the initial oven temperature below the boiling point of the solvent, to condense the solvent, which will trap volatile analytes. Ideally, set the temperature 20 °C below the boiling point of the solvent to take full advantage of this effect. For some solvents this may not be possible or practical without cryogenic cooling; in this case, get as low as practical, even if setting near or just slightly below the boiling point of the solvent. If this results in poor peak shapes of early eluters, you may have to consider cryogenic cooling or a different injection technique.

When analyzing compounds with boiling points more than 150 °C above that of the solvent, you can start with a higher initial oven temperature. Solvent focusing is no longer required in this case; instead, the goal is to “cold trap” or focus the analytes at the head of the column. To do this, use an initial oven temperature that is lower than the boiling point of your most volatile analyte. This technique is referred to as “analyte focusing”.
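As a quick illustration, the two temperature-selection rules above can be sketched in a few lines of Python. This is only a starting-point heuristic; the 10 °C analyte-focusing margin is my own illustrative assumption, not a published value.

```python
def initial_oven_temp(solvent_bp_c, most_volatile_analyte_bp_c,
                      analyte_margin_c=10.0):
    """Suggest an initial oven temperature (deg C) for a splitless injection.

    Encodes the two rules discussed above:
    - analytes boiling >150 C above the solvent: analyte focusing
      (start below the most volatile analyte's boiling point)
    - otherwise: solvent focusing (start ~20 C below the solvent's
      boiling point)
    The 10 C analyte-focusing margin is an illustrative assumption.
    """
    if most_volatile_analyte_bp_c - solvent_bp_c > 150:
        # Analyte focusing: cold-trap the analytes at the column head.
        return most_volatile_analyte_bp_c - analyte_margin_c
    # Solvent focusing: condense the solvent to trap volatile analytes.
    return solvent_bp_c - 20.0

# Hexane (bp ~69 C) with semivolatile analytes (bp ~280 C): analyte focusing
print(initial_oven_temp(69, 280))  # 270.0
# Hexane with volatile analytes (bp ~110 C): solvent focusing
print(initial_oven_temp(69, 110))  # 49.0
```

If the solvent-focusing result is impractically low for your instrument, remember the advice above: get as low as practical, or consider cryogenic cooling.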

Figure 1 shows examples of how initial oven temperature can affect peak shape. Higher oven temperatures lead to poorly focused initial peaks. While temperatures that are slightly too high may affect the first few peaks, as temperatures continue to increase, more peaks will be affected, as shown in the figure.

Figure 1: Example of the effect of initial oven temperature on peak shape. Using an initial oven temperature that is too high can lead to broadened and deformed peaks.

Solvent Polarity

Besides boiling point, solvent polarity can also affect your splitless analysis.  From your first chemistry classes, you’ve probably heard the expression “like dissolves like”.  This rule also applies to dissolving your solvent within your GC stationary phase.  For instance, if you have a non-polar stationary phase and you inject a polar solvent, your solvent may “bead up” instead of dissolving into the stationary phase.  Analytes that are cold trapped in the solvent can then become split between these “beads”, resulting in split and deformed peaks.  Figure 2 provides a visual demonstration of this phenomenon, as it occurs at the head of a capillary column.  To avoid this, always match column polarity with solvent polarity.  For instance, very polar solvents like water are best on polar phases such as wax columns.

Figure 2: When solvent and column phase have similar polarity, the solvent will dissolve evenly within the phase, forming a uniform film of solvent. If the solvent polarity is vastly different from the column phase polarity, beading of the solvent will occur, affecting focusing of the analytes.

Relating back to the initial topic of oven temperature, if your analytes have sufficiently higher boiling points than your solvent, and you can start with a higher oven temperature, the solvent polarity/column polarity will not have as great of an effect, since you are not utilizing solvent focusing.  Likewise, when it comes to split injections, this also doesn’t matter as much, since you are both injecting less solvent and not relying on solvent focusing.

Conclusion

This will conclude my blog series on optimizing splitless injections. To summarize, some important parameters to consider for splitless injections include liner type, inlet temperature, splitless valve time, initial oven temperature, and solvent polarity vs column polarity.

I hope these blogs have been informative and useful for your method development. Please feel free to comment and share your experiences.

QuEChERS approach optimization for low-moisture matrices – case of honey and brown rice flour

Last month, Nancy published a blog summarizing how to approach samples with less than 80% water. Today, I want to go into more detail on how to deal with different commodities with less than 20% moisture. As Nancy said, QuEChERS was first developed for high-moisture matrices such as strawberries and spinach. However, the method is very adaptable to a variety of other commodities, even dry goods, with some simple adjustments. So, why do we need the moisture?

QuEChERS is a technique based on the extraction of analytes of interest from the matrix using acetonitrile or a similar water-miscible solvent, followed by separation of the water and matrix from the solvent, aided by salts (the salting-out effect). Water needs to be present to hydrate the sample so that it is accessible to the solvent for extraction. Without sufficient water, extraction will be incomplete and result in poor recoveries. Samples rich in moisture have enough water already present to start the extraction process. In low-moisture samples, water needs to be added to make up for the lack of native moisture. Ideally, the water and solvent amounts should be the same (e.g., 10 mL of each).


Chiral Separation on a C18 Column? Separation of d- and l- Amphetamines Part IV

I am back to complete my amphetamines blog series. In part I, part II, and part III, I discussed the importance of separating the chiral d- and l-isomers to accurately identify the illicit isomer using an achiral method on a Raptor C18 column with a pre-column derivatization technique, along with sample prep strategies and derivatization efficiency. Today I’d like to discuss the validation studies, evaluating the method’s accuracy, precision, selectivity, and specificity.

Once method development was completed, I performed validation studies to ensure that this technique can quantify the isomers in human urine matrix, using deuterated internal standards for both the d- and l-amphetamines and methamphetamines. The internal standards were derivatized simultaneously with the analytes in urine matrix over a 50–5000 ng/mL calibration range, along with the QC standards. Because this sample derivatization is simple, fast, and reproducible with maximum derivatization efficiency, it was easy to prepare a large number of samples at a time for the validation studies.

The validation experiments were performed as described below:

Linearity: For the linearity experiments, I injected calibrators in the range of 50–5000 ng/mL followed by 3 sets of QC standards. Using 1/x weighted linear regression, all four analytes showed acceptable linearity with r2 values of 0.998 or greater (Figure 1). In addition, the %deviation from nominal concentration was <15% in all 3 accuracy and precision experiments (Table I).

Figure 1: Standard Curves
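As an aside, the 1/x-weighted linear fit used for these curves is easy to reproduce. The sketch below uses NumPy on synthetic, perfectly linear data (not the study’s actual values) purely to show the mechanics of the weighting and the back-calculated %deviation check:

```python
import numpy as np

# Synthetic calibration data (ng/mL vs. peak-area ratio) -- illustrative
# only, not the study's actual values.
x = np.array([50, 100, 250, 500, 1000, 2500, 5000], dtype=float)
y = 0.004 * x + 0.1  # a perfectly linear response for demonstration

# np.polyfit minimizes sum((w_i * r_i)**2), so a 1/x chi-square
# weighting corresponds to w = 1/sqrt(x).
slope, intercept = np.polyfit(x, y, 1, w=1.0 / np.sqrt(x))

# Back-calculate each calibrator and its % deviation from nominal.
back_calc = (y - intercept) / slope
pct_dev = 100.0 * (back_calc - x) / x

r = np.corrcoef(x, y)[0, 1]
print(f"slope={slope:.4f}, intercept={intercept:.3f}, r2={r**2:.4f}")
print(f"max |%dev| = {np.max(np.abs(pct_dev)):.2f}")  # acceptance: <15%
```

The 1/x weighting keeps the low calibrators from being swamped by the high end of the curve, which is why it is common for wide bioanalytical ranges like 50–5000 ng/mL.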

 

Accuracy and Precision: Once suitable linearity was achieved, precision and accuracy analyses were performed on three different days by injecting 3 sets of calibration standards followed by 3 sets of QC standards at 4 levels (LLOQ, LQC, MQC, and HQC) prepared in pooled urine in 3 different batches. Method accuracy was demonstrated by average recovery values within 10% of the nominal concentrations for the low, mid, and high QC levels and within 15% for the LLOQ. The %RSD was 1–8% for intraday and 0.6–8% for interday results, indicating acceptable method precision (Table I). All calculations were performed by averaging the data from the 3 days, except for the intraday studies. Because deuterated internal standards were used for each enantiomer, the standards and target analytes experienced similar enhancements, which ensured accurate and reliable quantitative results.

Table I: Interday Accuracy and Precision studies for the analysis of amphetamines by LC-MS/MS in urine

 

Selectivity and Specificity: When an analyte at high concentration elutes closely with low levels of another compound, the chromatographic resolution between them is compromised, making true identification of illicit methamphetamine consumption difficult. This is especially important for the separation of isobaric chiral compounds at extreme ratios of each other (very high d-isomer with very low l-isomer, and vice versa). We were very curious to see what a real urine sample looks like after consumption of illegal methamphetamine, with its intense d-isomer peak! Similarly, consumption of over-the-counter (OTC) drugs that contain l-methamphetamine can result in a high-intensity l-enantiomer peak in urine, which may make it difficult to identify very low concentrations of the illegal enantiomer (d-methamphetamine), leading to false negative results.

To address our curiosity and test the method’s specificity, we performed an experiment by spiking very high levels of the d-isomer with LLOQ levels of the l-isomer, and vice versa, and analyzing the samples. The method developed here was found to be highly specific and selective, with good chiral resolution even at extreme concentrations, such as high l-enantiomer (5000 ng/mL) with low d-enantiomer (50 ng/mL), for both amphetamines and methamphetamines in urine (Figure 2). The intense signal from the d-isomer did not negatively impact the chromatographic resolution or peak shape of the low-level l-isomer.

Figure 2: Highly selective chromatographic results were obtained even at extreme concentrations for both the d- and l-enantiomers.

 

I hope this blog was helpful for your sample prep and method development for the separation of chiral amphetamines using an achiral Raptor C18 column. If you would like more information on this work, click the following link for the full application note and references: Analysis of Amphetamines by LC-MS/MS for High-Throughput Urine Drug Testing Labs

References:

1. Newmeyer, M. N.; Concheiro, M.; Huestis, M. A. J. Chromatogr. A 2014, 1358, 68–74.
2. Foster, S. B.; Gilbert, D. D. J. Anal. Toxicol. 1998, 22, 265–269.

Links to blogs in this series:

  1. Chiral separation on a C18 column? Separation of d- and l- Amphetamines, Part I
  2. Chiral separation on a C18 column? Separation of d- and l-amphetamines, Part II
  3. Chiral Separation on a C18 Column? Separation of d- and l- Amphetamines Part III

 

 

Optimizing Splitless Injections: Splitless Purge Valve Time

In my previous blog, I discussed optimizing inlet temperature for splitless injections.  Today I would like to discuss another critical parameter for splitless injections: splitless purge valve time.  The key feature of a splitless injection is that all the carrier gas flow is directed to the column and the splitless valve is closed during injection.  This allows for maximum recovery of the injected sample, making splitless injections ideal for trace level analyses.

The split valve remains closed during and after sample injection for a predetermined amount of time to allow for full volatilization and transfer of the analytes to the column.  While this time is essential for achieving the best recoveries of trace analytes, keep in mind you are also injecting a very large amount of solvent in the case of liquid injections.  Because of this, at some point, you must open the split vent to rid the inlet of excess solvent vapor.  Without doing this, the large solvent peak can potentially interfere with your compounds of interest.

So just as in selecting an ideal inlet temperature, selecting an ideal splitless hold time can also involve compromise.  You must select a hold time that is long enough to ensure complete vaporization and transfer of your analytes to the column, but not so long as to introduce excess solvent into your column, which can result in excessive tailing of the solvent peak, leading to an elevated baseline and potential interference with analytes.  The figure below provides an example showing peak areas over a wide molecular weight range of hydrocarbons vs splitless hold time.  Note that the liner is a single taper with wool and the carrier gas flow is 1.5 mL/min.  As a general rule, you should allow the liner to be swept with 1.5–2 full volumes of carrier gas prior to opening the split vent.  Notice that after a certain amount of time, gains start to become insignificant.  Eventually C8 is lost in the solvent and cannot be detected.

Figure 1: Experiment to test response of early, middle and late eluting hydrocarbons, as well as solvent peak area, vs splitless valve time. The “ideal” split time is between 60 and 75 seconds, which correlates to a 1.5 to 2x sweep of the inlet liner. The solvent peak eventually interferes with C8, completely masking it. Column flow was set at 1.5 mL/min and liner was a single taper with wool.
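The 1.5–2 liner-volume rule of thumb can be turned into a rough estimate with simple arithmetic. This sketch ignores the gas compression and temperature corrections that the EZGC Flow Calculator applies, so treat it as a ballpark figure, not the tool’s exact answer:

```python
def splitless_hold_time_s(liner_volume_ml, column_flow_ml_min,
                          sweeps=(1.5, 2.0)):
    """Rough splitless valve time window (seconds): the time for the
    carrier gas to sweep the liner 1.5-2x at the set column flow.

    This simple estimate ignores the pressure/temperature corrections
    a proper flow calculator applies, so use it only as a starting point.
    """
    return tuple(60.0 * n * liner_volume_ml / column_flow_ml_min
                 for n in sweeps)

# 0.9 mL liner at 1.5 mL/min column flow:
print([round(t) for t in splitless_hold_time_s(0.9, 1.5)])  # [54, 72] seconds
```

Note the simple estimate lands close to, but slightly below, the 60–75 s window shown in Figure 1; the difference comes from the inlet-condition corrections the calculator makes, which is why the calculator (or an experiment like Figure 1) is the better final check.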

To make your life easy, Restek has a free tool that calculates this ideal range for you: The EZGC Method Translator and Flow Calculator.  Check out the short webinar below to see how to use it:

Here are a few key points from the video to remember when filling in the EZGC Flow Calculator to calculate splitless valve time:

  1. For “Temperature”, under the Column section, enter your initial oven temperature.
  2. In the “Control Parameters” section, only enter one of the values, based upon your control mode. For instance, if you are using constant flow, fill in your column flow, whereas if you are using constant pressure, only fill in pressure.  The other values will be automatically calculated.  Be sure to select the correct outlet pressure.  For an MS detector select “vacuum” and for detectors such as FID or TCD, select “atmospheric”.
  3. In the “Inlet” section, you must enter inlet temperature and liner volume. An exact liner volume is not necessary; something relatively close should result in an acceptable recommendation.  The simplest way to obtain an estimate is to assume that your liner is a cylinder and apply the formula V = πr²h, where V is volume, r is the radius (1/2 of the internal diameter), and h is the height (length) of the liner.  Some common liner configuration volumes are listed in Figure 2.  Note that this table is in µL, whereas the Flow Calculator uses mL, so divide the µL value by 1,000 to convert (i.e., 900 µL = 0.900 mL).  Use the values listed under “Physical” volume.
  4. Note that after entering a value into a field, you must click outside of the field or into another field in order for the calculation to update.

Figure 2: Liner volumes for some common configurations. Use the value under “Physical” for calculating splitless valve time.
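If you prefer to script the cylinder approximation from step 3 rather than work it out by hand, it looks like the sketch below. The 4 mm i.d. × 78.5 mm dimensions are hypothetical, and tapers or wool packing reduce the true physical volume, so treat the result as an upper-bound estimate:

```python
import math

def liner_volume_ml(id_mm, length_mm):
    """Approximate GC inlet liner volume (mL) as a cylinder: V = pi*r^2*h.

    Dimensions in mm give a volume in mm^3 (= uL); dividing by 1,000
    converts to mL. Tapers and packing reduce the true physical volume,
    so this is an upper-bound estimate.
    """
    r_mm = id_mm / 2.0
    return math.pi * r_mm**2 * length_mm / 1000.0

# Hypothetical 4 mm i.d. x 78.5 mm liner:
print(round(liner_volume_ml(4.0, 78.5), 2))  # 0.99 (mL)
```

When a published “Physical” volume is available, as in Figure 2, use that instead of the cylinder estimate.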

You now have an understanding of why optimizing splitless purge valve time is important.  Using Restek’s Flow Calculator makes this an easy task.  If you ever want to verify that you are in fact using the best hold time for your analysis, you can easily set up an experiment like the one shown in Figure 1, where you plot peak areas vs hold time.

For the next installment of this blog series I am going to discuss initial oven temperature.  Hope you enjoy!

 

The new U.S. EPA Method TO-15A blog series – Part 2: Use air when analyzing air!

Last time, we left you with a teaser: how to take your canister blanks from the red trace below down to the blue trace:

Before we get to that, let us back up to 2015, when both Wayne Whipple (retired US EPA) and I coincidentally presented on canister cleaning and canister blank levels at the National Environmental Monitoring Conference (NEMC). Neither of us knew of the other’s presentation, but apparently we both saw the same thing… No, it was not the new canister cleanliness requirements of the pending Method TO-15A on the horizon. Rather, we both saw that canister cleanliness was the rate-limiting step in achieving lower detection limits with evacuated stainless-steel canisters. It was not the instrumentation then, nor is it the instrumentation now. With reasonably up-to-date equipment, I know of several laboratories (including ours) that routinely achieve 5 to 10 pptv detection limits for most of the VOCs targeted by TO-15A. For the record, Wayne Whipple routinely had single-digit pptv detection limits with his Leco Pegasus V TOF-MS. Obviously, most of us are not as lucky as Wayne; regardless, it does not matter how sensitive your instrumentation is if your canister blanks remain higher.

Because I like a challenge and I am a geek who enjoys researching the minutiae of a subject like canister cleaning, we proceeded to experiment with canister cleaning variables and presented these results at Air and Waste Management (A&WMA), NEMC, and other global conferences from 2015 to 2018. We are going to take the next several blogs to break out some of these experiments and results to shine a light on what will move the needle for your canister cleanliness, as you strive to achieve the new Method TO-15A guidelines. Although obvious for most, I feel compelled to point out just how many different variables come into play when looking at canister cleanliness. The following is a screenshot from one of our canister cleaning presentations:

As you may see above, we have a list (which may not be exhaustive) of a dozen or so variables that offer the opportunity to increase canister blank concentrations via any one, or a combination, of the following: a leak, contamination (intrinsic to the material), carryover, etc. For example, use contaminated water to humidify your canisters and it simply does not matter how clean the canister was prior to that point. Long story short, there will be a prevailing theme in the following blogs: “garbage in = garbage out.” With that said, the first thing I want to start off with is fill gas. As you will notice in section 9.4.2 of TO-15A:

“Canister zero-air challenges are performed by pressurizing clean evacuated canisters with humidified (40% to 50% RH) HCF zero air. Note that performing this qualification with ultrapure nitrogen does not adequately test the canister as the inert nitrogen atmosphere does not permit reactions within the canister that may occur when ambient air is sampled.”

So why does the EPA make a point to call out air over nitrogen? Well, that question was rhetorical, as the EPA tells us point blank that nitrogen is an inert gas that does not permit reactions which may otherwise take place during and/or after field sampling of AIR. Okay, so I added the last part, but it is true. You know why… because we sample air, which contains oxygen, and that means there is an oxidative potential that is otherwise absent in inert nitrogen. This is just sound logic, folks! So then the question becomes: why is anyone using nitrogen in the first place? The answer is steeped in history. Most laboratories were set up back in the day with preconcentrators that required liquid nitrogen to cryogenically cool the traps. The resultant presence of liquid nitrogen dewars throughout air laboratories meant everybody had access to a very clean source of fill gas, which was literally just going to be wasted anyway. And since nitrogen represents roughly 78% of air, somewhere along the line there was a leap of faith to the suitability of nitrogen as a fill gas. I know the vendor that installed my first preconcentrator successfully convinced me of this, and I ran nitrogen as a fill gas for several years, until I generated the data to show this was not the best practice. So yes, the need for air over nitrogen as a fill gas is not speculation or theory on the part of the EPA. We presented the results to support this back in 2017 to the U.S. EPA and scientific community.

In the following table, we evaluated canister cleanliness with helium and air as the fill gas. All canisters were humidified to 50% RH (more on the importance of this in future blogs) and aged for 7 days (more on this as well in future blogs); everything else was kept equivalent for an apples-to-apples comparison:

As you may see in the table above, acrolein (which was a hot topic for quite some time) grows more in canisters filled with air than in canisters filled with an inert gas like helium. Even for a more mundane compound like benzene, we make the same observation. In both cases, the trend was statistically significant. Now I know you are probably thinking one or more of the following:

  1. This is the product of the gases coming from different sources and/or lines and thereby contributing to the blank levels.
  2. We used helium and not nitrogen.
  3. These results are not orders of magnitude different.
  4. These results would not meet the new Method TO-15A cleanliness requirement of 20 pptv.

My responses to those thoughts:

  1. Both gases were run through the same lines, and the blank results above were background corrected for each gas (i.e., each gas was first plumbed directly to the preconcentrator and analyzed independent of any canisters).
  2. Yes, but an inert gas is an inert gas in this scenario, so long as it does not contain the oxygen that air contains. I encourage you to try it for yourself with nitrogen and air. I have yet to see any results that contradict what we are saying. Oh… and we recently had a customer with ethylene oxide growth (the next acrolein, in my opinion) in their canister blanks. They never saw the EtO in their blanks (oddly enough, filled with nitrogen), but it grew in their field samples (filled with air). I do not want to give too much away on this, as I know my colleague Jason Hoisington will be blogging on it in the very near future.
  3. You are correct; at best we see a 2x difference for acrolein. However, we are not talking about orders-of-magnitude improvements anyway. We are talking about moving the needle in the correct direction with incremental improvements to the dozen or so variables originally identified as blank contributors, all in the name of trying to consistently achieve the 20 pptv cleanliness levels.
  4. Remember that teaser from the beginning? Well, it looks like I have rambled on long enough, so that will have to wait until next time. I know, I said that last time, but I promise this time.

Final thought: if you fill your blank canisters with an inert gas like nitrogen, you risk getting artificially low-biased blank concentrations for some of your target analytes, which may look fantastic for meeting cleanliness requirements. However, these blank results are not consistent with what those canisters will experience when filled with humid air in the field. So, now you have some background on why TO-15A calls out the use of air as the fill gas for the blank challenge. Stay tuned to see how we make sure that air is clean…

 

Modifying QuEChERS for complicated matrices- Dry Samples

Before getting into this discussion, I recommend first reading my previous blog post regarding classical applications of QuEChERS-based methods: https://blog.restek.com/quechers-dspe-selection-which-one-is-best/

QuEChERS methods were originally written to analyze pesticides in fruit and vegetable matrices, most of which have high water content and low fat content. More recently, the technique has been used for a greater variety of food and agricultural products, as well as environmental matrices. It has also been adapted for analytes other than pesticides in some cases. We will discuss some of the possible modifications you may need to make for complicated matrices. For all of the sample types we will discuss, the best general references I can give are these specific documents from the official QuEChERS website (quechers.com), maintained by CVUA Stuttgart:

For Extraction (Stage 1): https://www.quechers.com/pdf/reality.pdf

For dSPE Cleanup (Stage 2): https://www.quechers.com/pdf/cleanup.pdf

For greater detail, I found this reference useful: https://www.degruyter.com/view/journals/chem/open-issue/article-10.1515-chem-2015-0109/article-10.1515-chem-2015-0109.xml

For many samples, particularly food products, you may find it helpful to use the USDA database at https://fdc.nal.usda.gov/ to obtain a listing of content for water (as well as protein, total lipids, fatty acids, carbohydrates, sugars, and cholesterol).  We will discuss techniques for dry sample matrices in this blog post. Other types of sample matrices will be discussed in subsequent posts.

For samples with little or no water content, water must be added before using QuEChERS extraction salts. For these samples, reduce the sample weight from 10 g to 5 g or less and add 10 mL of water prior to adding acetonitrile and QuEChERS extraction salts. (For those using the AOAC QuEChERS method, the sample size is reduced from 15 g to 7.5 g or less and 15 mL of water is added.)  The sample weight should be adjusted according to the amount of chromatographic interference anticipated (or how “dirty” the matrix is perceived to be). Here are several examples of this technique. In some cases, such as cannabis, you might see total water volumes less than or greater than 10 mL, but the sample weights are adjusted accordingly as well.

For samples that contain some water, but less than 80%, the amount of water added can be adjusted so that the combined water content (intrinsic plus added) approximates 10 mL.  Some good examples of this technique are shown below.
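The water-adjustment arithmetic described above can be sketched as follows (the function and variable names are my own, not part of any official method, and the moisture fractions in the example are illustrative; the sketch assumes 1 g of water occupies about 1 mL):

```python
def water_to_add(sample_g, moisture_fraction, target_water_ml=10.0):
    """Estimate the mL of water to add so that the sample's intrinsic
    water plus added water totals roughly target_water_ml."""
    intrinsic_ml = sample_g * moisture_fraction  # water already in the sample
    return max(target_water_ml - intrinsic_ml, 0.0)

# Dry sample (e.g. a flour at ~10% moisture), 5 g reduced sample weight:
print(round(water_to_add(5.0, 0.10), 1))   # 9.5 mL to add

# Moister sample (e.g. ~60% moisture), 10 g sample weight:
print(round(water_to_add(10.0, 0.60), 1))  # 4.0 mL to add
```

The USDA database linked above is a convenient source for the moisture fraction when you have not measured it yourself.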


This concludes discussion of dry sample matrices. Please look for the next post on samples containing high amounts of lipids and waxes.

Terpene Analysis Approaches – Part IV

We are back at it with another blog about terpene analysis! If you did not catch my previous post, be sure to check it out here before moving on. Last time, I finished up by discussing how we would move away from analyzing terpenes in standards and dive into in-matrix analysis. Well…I lied. I'm sorry, but I PROMISE we will get there, because we did get there. First, I need to cover a couple of important things.

I had the opportunity to visit a cannabis lab in Santa Rosa, CA, and they let me run their GCs for a week. Since Restek is located in PA, we are currently unable to bring cannabis into our Innovations Lab, so in order to get our hands on this sticky material we look for collaborations. Having done the preliminary work at Restek and figured out which sampling approach I wanted to test, I could really get cracking on further method development with their team. The first thing I wanted to complete was a calibration using DI-SPME, which initially started out as a pain, but once we optimized some of the sampling parameters, we were able to gather some great data!

Under the sampling conditions that we used in the previous blog post (see Table 1 below), we obtained the results shown in Figure 1.

Table 1. DI-SPME Parameters

It would be cumbersome to show you the results for all of the targeted terpenes, so Figure 1 gives a representation of some of the terpenes of interest over the volatility range of the entire list.

Figure 1. DI-SPME Calibration Curves

The calibration range was 20 – 1280 ng/mL (ppb) and, as you can see, we did not get the best results. I should note that we were using naphthalene-d8 as an internal standard and the results were generated from the compounds' response factors. So, what is going on here? It looks like we're saturating our detector, right? Guess again! We are not saturating the detector; it's the fiber! When we analyzed the data, the peaks did not have flat tops, and peaks with flat tops are indicative of saturating your mass spectrometer. So, pressed for time in the lab, we made a couple of quick changes to our sampling parameters, shown in bold below (Table 2).

Table 2. Optimized DI-SPME Parameters

Under these new conditions and shifting the calibration range from 20 – 1280 ng/mL (ppb) down to 10 – 320 ng/mL (ppb), we obtained the following results (see Figure 2 below) for those same four compounds displayed previously in Figure 1.

Figure 2. DI-SPME Calibration with Optimized Parameters

Dang! Look at that improvement! We went from some embarrassing r2 values in Figure 1 to some r2 values that we can actually work with in Figure 2. Optimizing your sampling parameters and selecting an appropriate calibration range is critical for developing SPME methods. Fiber saturation is a common occurrence when doing SPME, but if you understand the parameters, you are able to overcome this issue. By changing our extraction time from 4 min to 1 min and increasing the split ratio to 250:1, we were able to go from calibration curves that were plateauing to linear curves. To see the rest of the calibration data for our terpenes of interest, refer to Table 3 below!

Linear calibrations were achieved for all compounds with an average r2 value of 0.983 in an average range of 10 – 320 ng/mL (ppb). This is a great improvement over what I have seen in published literature, where an average r2 value of 0.942 was achieved for 31 terpenes in a calibration range of 50 – 1000 ng/mL (ppb) using liquid injection.[1] While our results were an improvement, we did experience some difficulties with the higher molecular weight terpenes, as well as with terpenes containing an alcohol functional group. The lower values for the heavier terpenes may be due to the SPME phase, while the terpenes containing alcohols most likely favored the water solution, making it more difficult to get them onto the phase.
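As a sketch of the kind of linearity check described above, one can fit internal-standard-normalized responses against concentration and compute r2. The data points below are made up for illustration (they are not values from this study), and the function names are my own:

```python
import numpy as np

def calibration_r2(conc, analyte_area, is_area):
    """Fit relative response (analyte area / internal standard area)
    vs. concentration with a straight line and return r^2."""
    rel = np.asarray(analyte_area, dtype=float) / np.asarray(is_area, dtype=float)
    conc = np.asarray(conc, dtype=float)
    slope, intercept = np.polyfit(conc, rel, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((rel - pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((rel - rel.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical 10-320 ng/mL curve with a constant IS response:
conc = [10, 20, 40, 80, 160, 320]
analyte = [0.9, 2.1, 4.0, 8.3, 16.1, 31.8]
is_resp = [1.0] * 6
print(round(calibration_r2(conc, analyte, is_resp), 3))
```

A plateauing curve (fiber saturation at the top of the range) will show up here as a depressed r2, which is exactly why narrowing the range from 20 – 1280 to 10 – 320 ng/mL helped.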

 

Overall, things are starting to look pretty good with the DI-SPME method. We still have more to come though, so stay tuned for our next blog in this series!

 

Reference

  1. Brown, A. K., Xia, Z., Bulloch, P., Idowu, I., Francisco, O., Stetefeld, J., Tomy, G. (2019). Validated quantitative cannabis profiling for Canadian regulatory compliance – Cannabinoids, aflatoxins, and terpenes. Analytica Chimica Acta. https://doi.org/10.1016/j.aca.2019.08.042

ProEZGC Chromatogram Modeler – there is much more to the program than just the Welcome Screen

When you log into the ProEZGC Chromatogram Modeler software, you will see the screen below, which is what I refer to as the Welcome Screen. This is the starting point for everyone and, unfortunately, all that some customers will think there is to the ProEZGC software.

 

For those of you who have used ProEZGC, you know that all you need to do is to add a compound name or CAS# into the box titled Search by Name or CAS# to get started (remember, only one compound name or CAS# per line).

For this post, I would like you all to see that there is more to ProEZGC than just the screen above and the screen below. In this example, I typed in the names of four common compounds/solvents and clicked the blue Solve button.

 

After reviewing the search results for all stationary phases, I selected the results for the Rtx-502.2 column.

 

I then clicked on the Conditions tab (the dark blue tab between Compounds and My EZGC) and was presented with a new page (see below).

 

From this page you can choose different dimensions of the same column (in this case, the Rtx-502.2), modify the GC oven temperature program, and if you select Custom at the bottom of the page in the Results section, you can change carrier gas flow rates (see screen below) and a few other parameters.

 

Notice the blue arrow under Control Parameters, to the right of Column Flow, in the screen above?  What does the arrow represent? It marks the parameter that will be held constant: you can change the remaining top parameters in the Control Parameters section (Column Flow, Average Velocity, Holdup Time, and/or Inlet Pressure, whichever parameters the arrow is not pointing to) without affecting the parameter the arrow is pointing to.  See the example below.  Notice that I moved the arrow from Column Flow to Average Velocity by double-clicking in the empty space to the left of 39.70 cm/sec.


By keeping the Average Velocity the same and changing the length of the capillary column, notice how the Column Flow (carrier gas flow rate value) increases, as do the Holdup Time* and Inlet Pressure.

  • Holdup Time is the time it takes for a non-retained compound to travel through the column.
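To make the holdup-time relationship concrete: for an open tubular column, holdup time is simply the column length divided by the average linear velocity. A quick sketch (the 39.70 cm/sec value comes from the screen above; the 30 m column length is an assumed example, and the function name is my own):

```python
def holdup_time_min(length_m, avg_velocity_cm_per_s):
    """Holdup time (min) = column length / average linear velocity."""
    length_cm = length_m * 100.0
    return length_cm / avg_velocity_cm_per_s / 60.0

# 30 m column at an average velocity of 39.70 cm/sec:
print(round(holdup_time_min(30, 39.70), 2))  # ~1.26 min
```

This is also why, at a fixed average velocity, a longer column necessarily shows a longer holdup time, as seen in the screen above.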

 

One last comment: under Results you have three selections to pick from.  The default is Speed, and for most analyses this is fine.  As the name implies, when this button is selected, analysis times are minimized.  I showed you how to select the Custom button for the most control over carrier gas parameters.  My favorite button, however, is Efficiency.  Analysis times may be a little longer, but separations are usually improved, and that is my goal: to show a customer maximum compound separation with reasonable analysis times.


If you have never seen or tried changing conditions via the Conditions tab, I encourage you to do so. To learn more about ProEZGC, visit the Help Section and the Restek Video Library. Let us know if you have any questions.


Tips on the analysis of pesticides and mycotoxins in cannabis products: Matrix matters! (Part II)

In part I of this blog series, we learned about matrix effects, how to assess them, and how to address them (you can check the blog here). One of the main conclusions of part I was the importance of using matrix-matched calibration as the means to account for possible ionization effects in LC-MS/MS, and to minimize bias in GC-MS/MS due to chromatographic response enhancement. In part II, I want to talk about important points to keep in mind when evaluating recoveries and performing a matrix-matched calibration. The first aspect, which was discussed previously in part I, is to ensure that your surrogate matrix reflects the composition of your sample matrix. Additionally, this surrogate matrix should be a blank (i.e., free of your target analytes, in our case pesticides and mycotoxins). Once you source your surrogate matrix and decide on the sample prep conditions, it is very important to investigate your analytes' recoveries.

  1. Spiking procedure and recoveries

Assessment of recoveries (% of analyte extracted/total amount of analyte spiked) is critical to ensure reliable quantitative data in typical pesticide analysis methodologies. This is important because matrix-matched calibrations are commonly prepared by post-spiking blank extracts with target analytes and internal standards at different concentration levels, using the same dilution factor as used in real samples/extracts. A key assumption when preparing the calibration curve this way is that the extraction efficiency is 100%. Unfortunately, as we already showed in our technical article (here), there are some cases (e.g., daminozide in brownies) where getting close to exhaustive recoveries (close to 100%) is very difficult. Hence, the use of the right internal standard can be critical to account for those variations. Alternatively, calibrators can be prepared by spiking matrix blanks at different concentration levels and running them through the entire extraction process to construct the calibration curve. I know that this can be tedious, especially when dealing with multiple matrices and many different pesticides, but if you are having difficulties with a particular cannabis matrix, this is an option that you may want to consider. Undoubtedly, this approach in combination with the use of isotopically labeled standards will give you the best results in terms of accuracy and precision.

Typically, recoveries are tested by spiking the surrogate matrix with the analytes of interest and then performing your extraction procedure. You then compare the amount of analyte recovered to the amount of analyte spiked into a post-extraction blank, assuming 100% extraction efficiency. How do we effectively use this approach? First, you need to pick the concentrations at which you want to test your method. Testing your method at a low (close to LOQ), medium, and high concentration is a good idea. In our brownies workflow, we chose a concentration of 100 ng/g to test our methodology. This concentration was chosen because it is the lowest action level regulated for some of the California pesticides. Then, you need to spike your analytes into the surrogate matrix. Here I would like to bring a very interesting finding to your attention. A chemist from a cannabis testing lab shared with me that he first adds the extraction solvent to his sample, and subsequently spikes the samples with target analytes. In my case, I normally spike my analytes into the blank matrix, wait for the analytes to equilibrate with the sample, and then proceed to perform the extraction. In order to check whether the order of addition impacts the results, we evaluated the effect of spiking our target analytes (California list of pesticides and mycotoxins) before and after adding the extraction solvent. I know that this experiment may sound trivial, but please don't forget that the devil is in the details. This experiment was performed in two matrices: brownies and dark chocolate. Although both matrices are delicious and have chocolate as one of their main ingredients, their compositions are very different. Figure 1 summarizes the results of the comparison for the three most impacted analytes.

Figure 1. Relative responses obtained in A) brownies and B) dark chocolate after comparing the effect of spiking our target analytes before and after adding the extraction solvent. Responses were normalized by the results obtained when spiking first our analytes and then adding solvent (n=3). Brownies samples were prepared as described in our technical article (here). Dark chocolate samples (0.5 g) were extracted by using isopropyl alcohol (0.5 mL) and acetonitrile acidified with acetic acid at 1% (2.5 mL); subsequently, 2 mL of the extract were passed through a Restek Resprep C18 cartridge (Restek Cat.#26030).

 

To better visualize our data, the responses (area counts) were normalized against the response obtained for the samples where analytes were spiked first and solvent was added later. Interestingly, the effect depends on the compound and on the matrix type (and of course, it also depends on your extraction conditions). Daminozide (I guess that at this point you may think that it is our favorite analyte) displays 3-fold better recoveries when it is spiked into brownies after adding the extraction solvent vs. when it is spiked into the dry matrix. Conversely, in the case of dark chocolate, there is only a 10% difference between spiking daminozide before vs. after the extraction solvent. The main reason for this difference is that daminozide, a highly polar pesticide (logP = -1.5), displays a much higher affinity for a matrix like brownie compared to a fattier sample such as dark chocolate. Such affinity can be modified when solvent is added to the matrix, meaning that the recovery from a surrogate matrix spiked by adding solvent first and then the analytes won't always be representative of a real sample. In the case of acephate, another polar pesticide (logP = -0.85), the spiking order results in a 17% difference in brownie, whereas in dark chocolate the difference is negligible. Ochratoxin A, on the other hand, showed differences of 39% and 24% in brownies and chocolate, respectively. Based on these results, it is clear that the safest way to assess recoveries (also method accuracy and precision, which will be discussed in part III) is to spike your analytes and isotopically labeled analogues directly into the matrix before adding the extraction solvent. If you intend to use your isotopically labeled analogues to correct only for matrix effects, instrumental response drift, or injection variations, you can add them to your final extract. However, if you need to account for extraction variations, internal standards should be added directly to the sample matrix (before the extraction process).
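The normalization used for Figure 1 can be sketched as follows. The area counts below are invented for illustration only (they are not the actual study data); "spike-first" refers to the reference condition where analytes are spiked before the extraction solvent:

```python
def normalized_response(area_condition, area_reference):
    """Mean response of a test condition relative to the mean of the
    reference (spike-first) condition, each from n replicates."""
    mean_ref = sum(area_reference) / len(area_reference)
    mean_cond = sum(area_condition) / len(area_condition)
    return mean_cond / mean_ref

# Hypothetical daminozide-in-brownie areas (n=3), where adding solvent
# first gives roughly 3-fold higher responses:
spike_first = [1000, 1050, 980]
solvent_first = [3050, 2990, 3100]
print(round(normalized_response(solvent_first, spike_first), 2))
```

A normalized response near 1.0 (as seen for daminozide in dark chocolate) means the spiking order barely matters for that compound/matrix pair; values far from 1.0 flag a compound whose apparent recovery depends strongly on how the spike was performed.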

After selecting the concentrations at which you want to test your recoveries and spiking analytes into your surrogate matrix, you need to prepare your post-spiked extract, which is the solution you will compare against your pre-spiked extracts. As we learned in part I, it is very important to have the same matrix components present in the solutions used to assess recoveries in order to account for matrix effects. To prepare a post-spiked extract, you simply perform your whole extraction procedure using blank matrix and then spike your post-extraction blanks with analytes, assuming 100% recovery. Here it is worth emphasizing that it is critical to ensure that your dilution factors are correct. For example, when using SPE cartridges to clean up your extract, your final extract volume is going to be lower (~2.6 mL) than the original volume of extraction solvent used (3 mL). In that case, you may need to pool extracts collected from at least two replicates and then measure out the volume you need to keep the same dilution factor. For instance, in our case, to prepare a post-spiked extract we added 50 µL of a 1 ppm analyte mix (the same volume added to 0.5 g of surrogate matrix to attain 100 ng/g) to 3 mL of matrix blank extract (3 mL was the total volume of extraction solvent used for our brownies workflow). Recoveries are then estimated using the following equation:

% Recovery = (analyte response in pre-spiked extract/analyte response in post-spiked extract)*100
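A minimal sketch of this recovery calculation, together with the spike-level arithmetic from the brownies example above (50 µL of a 1 ppm mix into 0.5 g of matrix gives 100 ng/g). The function names and the example response values are mine, for illustration only:

```python
def spike_level_ng_per_g(spike_volume_ul, spike_conc_ug_per_ml, sample_g):
    """ng of analyte spiked per g of sample.
    Note 1 ug/mL equals 1 ng/uL, so uL * (ug/mL) gives ng directly."""
    ng_spiked = spike_volume_ul * spike_conc_ug_per_ml
    return ng_spiked / sample_g

def percent_recovery(pre_spiked_response, post_spiked_response):
    """% Recovery = (pre-spiked response / post-spiked response) * 100."""
    return pre_spiked_response / post_spiked_response * 100.0

# 50 uL of a 1 ppm (1 ug/mL) mix into 0.5 g of surrogate matrix:
print(spike_level_ng_per_g(50, 1.0, 0.5))   # 100.0 ng/g

# Hypothetical area counts for a pre- vs. post-spiked extract pair:
print(round(percent_recovery(8200, 9650), 1))
```

Keeping the dilution factors identical between the pre- and post-spiked extracts (as emphasized above) is what makes this ratio a valid recovery estimate.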

It is very important to emphasize that you can have amazing accuracy and precision results without having exhaustive recoveries. An example of this is the quantitation approach we used for daminozide in our technical article (here). In other words, recoveries and accuracy are two different things. We will talk more about this in Part III, so please stay tuned!