
Issue085 line shapes #88

Open · wants to merge 30 commits into base: master
Conversation

@cjperks7 (Collaborator) commented Aug 8, 2023

What's New

  • New module line_broaden to manage the various line broadening mechanisms a user can toggle on/off
    • At the lowest level, there are generalized Gaussian and Lorentzian line-shape calculators that take as input the frequency FWHM (in units of Hz) from some model and output the line shape (in units of 1/AA) on a wavelength mesh centered on the transition wavelength stored in the ADF15 file (a minimal sketch of this step follows this list)
    • At a higher level, there are models for the FWHM due to Doppler, Natural, and Instrumental broadening
      • This leaves flexibility for other broadening models (Pressure, Stark, etc.) to be added later for other applications
    • There is also a model to include suprathermal ion wings per a Bi-Maxwellian model
      • NOTE: It is assumed that the user won't turn on the Doppler and Suprathermal_Ions options at the same time
      • Maybe in the future this function could be generalized to arbitrary distribution functions, but the philosophy employed right now is simple scoping tools
    • At the highest level, aurora.line_broaden.get_line_broaden() accepts from aurora.radiation.get_local_spectrum() an array of transition wavelengths and a dictionary, dbroad, whose keys are the various broadening mechanisms of interest and whose entries are the necessary physics inputs for the associated model
    • Given many broadening mechanisms at once, the profiles are then convolved together
  • NOTE: Care was taken to adjust the wavelength mesh to ensure the output normalized line shape is resolved well enough to integrate to 1 while ensuring the mesh isn't unreasonably large (important for, e.g., Lorentzians and suprathermal ions)
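For illustration, here is a minimal sketch of that lowest-level step (the helper name and constants are mine for illustration, not aurora's internals): converting a frequency-space FWHM [Hz] into a normalized Gaussian line shape [1/AA] on a mesh centered on the transition wavelength.

import numpy as np

C_AA_S = 2.998e18  # speed of light [AA/s]

def gaussian_line_shape(lam0_A, fwhm_Hz, npts=401, nsig=5):
    # Convert the frequency FWHM to a wavelength FWHM: dlam = lam0^2 / c * dnu
    fwhm_A = lam0_A**2 / C_AA_S * fwhm_Hz
    sigma = fwhm_A / (2 * np.sqrt(2 * np.log(2)))
    # Mesh wide enough (nsig sigmas) and fine enough to integrate to ~1
    lam = np.linspace(lam0_A - nsig * sigma, lam0_A + nsig * sigma, npts)
    shape = np.exp(-0.5 * ((lam - lam0_A) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return lam, shape  # [AA], [1/AA]

lam, shape = gaussian_line_shape(lam0_A=0.9454, fwhm_Hz=1e15)
print(shape.sum() * (lam[1] - lam[0]))  # ~1.0, per the normalization note above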

Using Doppler Broadening

  • So as not to perturb other people's workflows, when aurora.radiation.get_local_spectrum(dbroad=None) is called, it is assumed you just want Doppler broadening, so everything runs the same as before (see the snippet below)
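In other words, a sketch of the backward-compatible call (reusing the input names from the MWE further down):

out = aurora.get_local_spectrum(
    adf15_file, ne_cm3, Te_eV,
    Ti_eV=Ti_eV,   # sets the Doppler width, as before
    dbroad=None,   # default; or simply omit the keyword
    )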

Using Instrumental Broadening

  • What's employed here is a very simplified Gaussian model for instrumental broadening. Ray-tracing is necessary to do it right, but this is meant to be a quick scoping tool
    - The user supplies the Gaussian FWHM in units of [AA]
    - NOTE: The inputs of this option can readily be leveraged to include any Gaussian broadening from some external model
  • The model assumes one has a detector surface with an associated wavelength mesh and knows, from a monochromatic light source, the characteristic pixel spread in the spectral direction
    • i.e., if one is using a crystal spectrometer, even with a highly collimated light source there's an approximately Gaussian distribution about the Bragg angle over which light can be reflected (the rocking curve)
    • So one can characterize the resultant pixel spread as the characteristic arc length of the rocking curve projected onto the detector surface (rocking curve FWHM * crystal-to-detector distance) divided by the detector width
    • To convert this into [AA], multiply by the spectral range
    • In general, what is needed is the spectral range times the characteristic fraction of pixels illuminated by the Gaussian spread, so in units of [AA]
  • NOTE: Since the models for Doppler and Instrumental broadening are both Gaussian, the convolution of the two has been benchmarked against the analytic expression to good agreement (see the sketch below)
    - [Figure: benchmark of the computed convolution (red) against the analytic profile (blue); the blue curve lies directly behind the red one]
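For reference, a minimal numpy sketch of that benchmark (parameter values are illustrative; the instrumental width reuses the arithmetic from the MWE below): the convolution of two Gaussians is analytically a Gaussian whose FWHM is the quadrature sum of the two.

import numpy as np

def gauss(x, fwhm):
    s = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-0.05, 0.05, 10001)       # [AA] mesh about the line center
dx = x[1] - x[0]
fwhm_D = 5e-3                             # [AA] illustrative Doppler FWHM
fwhm_I = 11e-3 * (2.05e-4 * 290 / 3.84)   # [AA] instrumental width, as in the MWE below

numeric = np.convolve(gauss(x, fwhm_D), gauss(x, fwhm_I), mode='same') * dx
analytic = gauss(x, np.sqrt(fwhm_D**2 + fwhm_I**2))
print(np.max(np.abs(numeric - analytic)))  # residual down to mesh resolution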

Minimum working code (simulating He-like Kr):

import aurora
import os

# Inputs
adf15_file = os.path.join(
    '/home/cjperks/tofu_sparc/atomic_data/ADAS_PEC_files',
    'Krypton',
    'fs#0.50A_8.50A#kr34.dat'
    )
Te_eV = Ti_eV = 3e3
ne_cm3 = 1e14

dbroad = {
    'Doppler':{
        'Ti_eV': Ti_eV, # [eV]
        'ion_A': 83.8, # [amu]; Kr
        },
    'Instrumental':{
        'width_A': 11e-3 * (2.05e-4 * 290/3.84), # [AA]; spec_rng * (rc_FWHM * D_cd / w_d)
        },
    }

out = aurora.get_local_spectrum(
    adf15_file,
    ne_cm3,
    Te_eV,
    ion_exc_rec_dens=[0,1,0],
    Ti_eV=Ti_eV,
    n0_cm3=0.0,
    dlam_A=0.0,
    plot_spec_tot=False,
    dbroad=dbroad,
    )

Using Natural Broadening

  • To include this, you need the Einstein coefficient for every transition of interest (it sets the Lorentzian width; see the sketch after this list), which can be hard to come by
    • So the philosophy employed here is that each key in the dbroad['Natural'] dictionary is a transition wavelength (hopefully you've read your ADF15 file) with its associated Einstein coefficient as the entry
    • NOTE: The inputs of this option can readily be leveraged to include any Lorentzian broadening from some external model
    • There's a tolerance of 1e-4 AA in case a user gets their transition wavelengths from some other resource (calculated, NIST, etc.); hopefully that's small enough that you don't jump to a different line
    • If one has an ADF15 file with multiple entries for the same wavelength but different population mechanisms (ion, exc, rec), then all are captured here
  • The convolution calculator is able to handle the difference between lines that weren't naturally broadened and ones that were
    • NOTE: If Natural broadening is the only line mechanism turned on, then all other lines are zeroed out, as the line-shape data is initialized as all zeros
  • This workflow has been benchmarked by convolving with Doppler broadening in the limit where the Einstein coefficient is very small, so that the Lorentzian becomes a delta function
    • [Figure: benchmark of the Doppler-Lorentzian convolution in the delta-function limit]
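The standard relation behind this option (a sketch of the physics, not necessarily aurora's exact internals): the Einstein coefficient A [1/s] sets the Lorentzian frequency FWHM, dnu = A / (2*pi), which maps to wavelength via dlam = lam0^2 / c * dnu. The values below are the w-line inputs from the MWE that follows.

import numpy as np

C_AA_S = 2.998e18   # speed of light [AA/s]
lam0_A = 0.9454     # [AA], w line
A_Hz = 1.529e15     # [1/s], its Einstein coefficient

fwhm_A = lam0_A**2 / C_AA_S * A_Hz / (2 * np.pi)  # Lorentzian FWHM [AA]

def lorentzian(lam_A, lam0_A, fwhm_A):
    # Normalized Lorentzian [1/AA]; its slowly decaying wings are why the
    # wavelength mesh needs the care noted in the What's New section
    hwhm = fwhm_A / 2.0
    return hwhm / np.pi / ((lam_A - lam0_A)**2 + hwhm**2)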

Minimum working code (simulating He-like Kr):

import aurora
import os

# Inputs
adf15_file = os.path.join(
    '/home/cjperks/tofu_sparc/atomic_data/ADAS_PEC_files',
    'Krypton',
    'fs#0.50A_8.50A#kr34.dat'
    )
Te_eV = Ti_eV = 3e3
ne_cm3 = 1e14

dbroad = {
    'Doppler':{
        'Ti_eV': Ti_eV, # [eV]
        'ion_A': 83.8, # [amu]; Kr
        },
    'Natural':{
        '0.9454': 1.529e15, # [1/s]; w-line so the largest Einstein coefficient
        },
    }

out = aurora.get_local_spectrum(
    adf15_file,
    ne_cm3,
    Te_eV,
    ion_exc_rec_dens=[0,1,0],
    Ti_eV=Ti_eV,
    n0_cm3=0.0,
    dlam_A=0.0,
    plot_spec_tot=False,
    dbroad=dbroad,
    )

Using Suprathermal Ions

  • Right now there's just a Bi-Maxwellian model, so it runs Doppler broadening twice: once for a slow population and again for a fast one. The user then defines what fraction of the ions are in the fast population (see the sketch after this list)
    • [Figure: example Bi-Maxwellian line shape with suprathermal ion wings]
  • Placeholders are in place in case this ever gets improved upon in the future
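A minimal sketch of that weighting (helper names are mine for illustration, not aurora's API; values match the MWE below):

import numpy as np

AMU_EV = 931.494e6  # atomic mass unit rest energy [eV]

def doppler_fwhm_A(lam0_A, Ti_eV, ion_A):
    # Standard Doppler FWHM: dlam/lam0 = sqrt(8 ln2 * Ti / (m c^2))
    return lam0_A * np.sqrt(8 * np.log(2) * Ti_eV / (ion_A * AMU_EV))

def gauss(lam, lam0, fwhm):
    s = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-0.5 * ((lam - lam0) / s) ** 2) / (s * np.sqrt(2 * np.pi))

lam0, ion_A, f_fast = 0.9454, 83.8, 0.5       # [AA], [amu], fast fraction
fw_slow = doppler_fwhm_A(lam0, 3e3, ion_A)    # [AA], slow population
fw_fast = doppler_fwhm_A(lam0, 100e3, ion_A)  # [AA], fast population
lam = np.linspace(lam0 - 5 * fw_fast, lam0 + 5 * fw_fast, 2001)  # mesh sized to the fast wings
shape = (1 - f_fast) * gauss(lam, lam0, fw_slow) + f_fast * gauss(lam, lam0, fw_fast)  # [1/AA]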

Minimum working code (simulating He-like Kr):

import aurora
import os

# Inputs
adf15_file = os.path.join(
    '/home/cjperks/tofu_sparc/atomic_data/ADAS_PEC_files',
    'Krypton',
    'fs#0.50A_8.50A#kr34.dat'
    )
Te_eV = Ti_eV = 3e3
ne_cm3 = 1e14 

dbroad = {
    'Suprathermal_Ions':{
        'model': 'Bi-Maxwellian',
        'Ti_eV': Ti_eV, # slow population ion temp
        'ion_A':  83.8,
        'Ti_fast_eV': 100e3, # fast population ion temp
        'f_fast': 1/2, # fraction of ions in fast population
        }
    }
out = aurora.get_local_spectrum(
    adf15_file,
    ne_cm3,
    Te_eV,
    ion_exc_rec_dens=[0,1,0],
    Ti_eV=Ti_eV,
    n0_cm3=0.0,
    dlam_A=0.0,
    plot_spec_tot=False,
    dbroad=dbroad,
    )

@cjperks7 (Collaborator, Author) commented Aug 8, 2023

@fsciortino Let me know if you observe any bugs or ways to improve

@cjperks7 (Collaborator, Author) commented Aug 8, 2023

Checks are failing because they can't find numpy?

@odstrcilt (Collaborator)

@cjperks7 I have the same issue with the regression tests :( It is likely caused by a different version of Python being used to install numpy than the actual Python. It might be possible to fix it by changing the meta.yaml file, but I have not succeeded yet. So if you have any ideas, please try them.

@odstrcilt (Collaborator)

One idea for your pull request: if the instrumental function is not Gaussian, it is usually described as a sum of Gaussians. Then you can use the same trick, because the convolution of Gaussians is again just a Gaussian.

The convolution of Gaussian and Lorentzian distributions is the Voigt distribution
https://en.wikipedia.org/wiki/Voigt_profile
This can be well approximated by the pseudo-Voigt distribution, which is just a weighted sum of Gaussian and Lorentzian profiles.

In conclusion, if this is used for synthetic diagnostics in Bayesian inference, where computation time matters, all convolutions can be calculated analytically.

@cjperks7 (Collaborator, Author)

> One idea for your pull request: if the instrumental function is not Gaussian, it is usually described as a sum of Gaussians. Then you can use the same trick, because the convolution of Gaussians is again just a Gaussian.

@odstrcilt Thanks for the feedback. It should be easy to give a user the choice to model instrumental broadening with a number of different Gaussians, per whatever they find best fits high-quality ray-tracing.

To get to your point on Bayesian optimization, that wasn't actually my intention. I've documented my reasoning for these various extensions under my initial Issue description #85

I specifically want to use this for inferring ion temperature from high-resolution spectroscopy, since a trick you can use in your tomographic inversions can very naturally handle the convolution of an arbitrary number of Gaussians representing your instrumental broadening, given you know their characteristic widths, alongside the typical Doppler profile (ref: Section 3 in M.L. Reinke et al., RSI 83, 113504 (2012)).

You have touched on a sort of philosophic debate that's been going on in my head as I work on ImpRad. I've been mainly working with a high-resolution spectrometer (XICS), but with lines integrated over the broadening profile to ignore the specific details of how the photons are redistributed. I once had a conversation with Francesco where he said he initially was doing this too, though later got pressed into doing Bayesian inference on the whole spectrum. I recall he said that it was a worthwhile endeavor, but now that I know better I feel as though I need to question the merit of that.

To me, it seems like doing full-spectrum Bayesian inference only compounds more uncertainties from 1) your ion temperature profile, 2) your knowledge of the instrumental broadening (and therefore the diagnostic geometry), and 3) the quality of your atomic data for a larger number of exotic transitions. Sure, you get constraints on other charge states via satellite lines, but I feel I can get by just fine using low-resolution VUV spectroscopy. So it seems like more effort than it's worth to me.

Hence why I didn't worry about speed and instead checked that my numerical method matches simple cases that can be derived analytically.

What do you think? Do you or @fsciortino find utility in doing full spectrum Bayesian inferences? Do you personally use it?

Review comment from @cjperks7 (Collaborator, Author) on _get_Instru() ("# Calculates Instrumental broadening"):
Generalize this to give the user a choice in the number of Gaussian/Lorentzian profiles that they've determined best model their instrumental broadening

@odstrcilt (Collaborator)

If you care only about the number of photons, you can sum the intensity of all pixels measuring your line. The line shape does not matter. The reason why we fit the peak rather than summing the pixels is mainly to get a better signal-to-noise ratio. Also, even if the fitting function is not perfect, it will be somewhere higher and somewhere lower, so on average it will give you a very good estimate of the total integral.
But if you also care about the velocity and temperature, proper fitting is essential. Matt Reinke used tomography of the first three moments of the peak. Although this is simple, robust, and fast, it is not the best possible solution in a maximum-posterior sense. Francesco and Norman developed a Bayesian algorithm to calculate the moments more accurately.
But the ideal approach is to describe these profiles by splines (or Gaussian processes...), develop a synthetic diagnostic which calculates the spectra, and fit the measured spectra with the model. I was asked to develop a synthetic diagnostic for integrated data analysis from the ITER XICS spectrometers, so I will try to use this approach.

@cjperks7 (Collaborator, Author) commented Sep 21, 2023

> But the ideal approach is to describe these profiles by splines (or Gaussian processes...), develop a synthetic diagnostic which calculates the spectra, and fit the measured spectra with the model. I was asked to develop a synthetic diagnostic for integrated data analysis from the ITER XICS spectrometers, so I will try to use this approach.

Interesting, we're having similar conversations about analyzing real data for SPARC. Admittedly I've elected to go with the moments method because 1) like you said, it's easy to do and 2) I'm used to it from making CMOD Ti profiles. It's been working fairly well for a bunch of performance scoping I've been doing with fake density/temp profiles. But we do expect to explore the Bayesian approach once we start dealing with real data. I have Francesco/Norman's BSFC code, but haven't really tried using it yet.

I'm still curious whether you still need to include instrumental broadening via this model in Aurora as part of your Bayesian analysis. I would imagine you could create a high-quality synthetic diagnostic and calculate a transfer matrix (local emissivity to counts) via volume-of-sight integration, hence automatically convolving instrumental broadening, that you only have to compute once and can then save? Or are you just sticking with line-of-sight integration that you run during the analysis?

@odstrcilt (Collaborator)

> It's been working fairly well for a bunch of performance scoping I've been doing with fake density/temp profiles.

It is possible that the moment approach will be better, but I would like to try the direct approach.

To make it fast, I plan to calculate a transfer matrix between the 2D pixel grid and the local spectra on the outer-midplane grid.
The advantage is that this matrix will not depend on the plasma shape, so it can be calculated before the inference.
So the steps are:

  1. from the impurity density, calculate the local emissivity (or skip this step and reconstruct the emissivity directly)
  2. map all profiles (emissivity, Ti, omega) from rho to Rmidplane
  3. evaluate local spectra, including only the Doppler broadening (and Zeeman splitting?). Stark and natural broadening should be completely negligible.
  4. either calculate the convolution with the instrumental broadening function analytically now, or numerically later
  5. multiply the spectra by the transfer matrix (see the sketch after this list)
  6. apply instrumental broadening if it was not done yet
  7. in the ideal case, compare with the measured spectra. In the real case, apply wavelength, intensity, and dispersion calibrations. Although these could already be included in the transfer matrix... The convolution with the instrumental function can also be in the transfer matrix.

The transfer matrix should describe how a narrow line at a single R location contributes to a single pixel in the 2D array of the detector. This is a linear map; it can be described as a matrix.
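A hedged sketch of steps 3-5 (all names and array shapes are hypothetical): since the map from local spectra to detector counts is linear, it can be precomputed as a matrix, and the forward model reduces to a single product.

import numpy as np

n_pix, n_R, n_lam = 1024, 20, 200   # detector pixels, midplane radii, wavelengths
T = np.zeros((n_pix, n_R * n_lam))  # transfer matrix, computed once per geometry
spectra = np.zeros((n_R, n_lam))    # local spectra on the midplane grid (step 3)
counts = T @ spectra.ravel()        # predicted detector signal (step 5)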

What do you mean by "volume-of-sight integration, hence automatically convoluting instrumental broadening"?
What dominates instrumental broadening in XICS? Can you measure the instrumental broadening with some calibration source?
For example, the instrumental broadening in a visible-light CER system is dominated by the optics and slit width. It must be measured.

@cjperks7 (Collaborator, Author)

By "volume-of-sight integration, hence automatically convoluting instrumental broadening", I mean pretty much what you mean by what you described for calculating your transfer matrix. Particularly, I've done ray-tracing to account for finite beam width as well as the distribution of wavelengths a single pixel is sensitive to. So the latter effect builds in instrumental broadening. My transfer matrix is basically in units [m^3*AA] to yield a photon flux per pixel.

For Natural broadening, it may or may not matter depending on 1) the impurity you're looking at, 2) the transition you're looking at, and 3) your ion temperature. For instance, on SPARC we're relying on three sets of von Hamos spectrometers: one to image Ne-like Xe (particularly the 3D line), one to image He-like Kr (particularly either the w or z line), and another focused on just tracking some metals (W, Fe, Ni, Cu). The Einstein coefficient (which sets the Lorentzian width) scales ~Z^4, so it's already important to pay attention to Natural broadening in a fusion power plant where you can't rely on the typical Ar.

A quick scoping exercise comparing just the FWHM of a Doppler Gaussian and a Natural Lorentzian showed me that for low-temperature operation they can be comparable for the Ne-like Xe 3D line and the He-like Kr w line, and completely negligible for the He-like Kr z line (but that's about half as bright as the w line...). Hence why I started this Issue: so I could include it and quantify the error in my reconstructions.
[Figure: FWHM comparison of Doppler vs. Natural broadening for the lines above]

For Zeeman splitting, I haven't really thought about it much. I asked John Rice once and he said you can completely ignore it for x-rays so haven't moved forward. If you have data to the contrary then I'd love to know.

For instrumental broadening in XICS, it's like you said: optics and slit width. But also, importantly, the rocking curve of the crystal, which you want wide to increase etendue, but not too wide. Particularly for SPARC, we've played some games over space-reservation conflicts, basically defocusing the von Hamos spectrometers, so that has made our instrumental broadening non-uniform.

@cjperks7 (Collaborator, Author) commented Sep 21, 2023

Therefore my transfer matrix is really a discretized version of $\int_V dV \int_\lambda d\lambda$ for a local emissivity spectrum in units $[ph/s/cm^3/\AA]$

@cjperks7 (Collaborator, Author)

And oh yeah, we plan to have at least one calibration source (money fights...). We'll probably then assume it applies the same to all the spectrometers. Another theoretical method we'll try is comparing the broadening of bright lines from different impurities (hence different masses).

The details haven't been fleshed out at this stage. We're just post-PDR now.

@odstrcilt (Collaborator)

This is very interesting, I didn't know that the Einstein coefficients have such a steep Z dependence.

@fsciortino (Owner)

hey guys, interesting discussion -- sorry for not being so engaged. To reply to @cjperks7: I think that if you want to really do the highest-quality assessment of impurity transport coefficients, you need to avoid model inadequacies in all their possible forms. This is practically impossible, but one must try. Doing full-spectrum analysis helped me to understand what the systematic errors were and where my analysis was actually at fault. I found it educational, but actually a lot of work. Depending on your objective, I would stay away from this level of detail... but it's not a trivial choice. It may even vary for different devices, where other lines may pop up and you may be underestimating them at first. For example, I remember that the K-alpha spectrum of Ar had some annoying Mo lines in the middle - I did extensive analysis to convince myself that it was all fine in my case. You cannot come to such conclusions if you don't look at the details. It's not as bad in the VUV, because there you have many more lines and full fitting is anyway a waste of time (you can only hope that no other lines are overlapping, based on a lot of experiments with different non-intrinsic species).

In ITER and SPARC, lots of W will be flying around. IIRC, the W spectrum for XICS is quite dangerous because of potential overlaps... I remember discussing this with John. Since @cjperks7 is working with him, I'm sure you discussed this extensively. Worth looping in @odstrcilt on these challenges, since he's working on the ITER XICS. The folks at WEST have seen some of the issues in experiment, I believe. W7-X also has some W experiments (W injections).

@cjperks7 (Collaborator, Author)

@fsciortino that's a very good observation. Indeed, with the Ne-like Xe spectrum (useful for low-temp regions) we are finding that there's a line emitted from Al-like W that is nearly degenerate with the 3D line (the brightest), which we'd like to use for Ti measurements. At least the Te ranges where these charge states are significantly populated only slightly overlap, so it's part of the decision-making about when to trust those channels over the He-like Kr ones (useful for high-temp regions) per the moments approach.

The He-like Kr spectrum has so far appeared clean of pollutants. The Li-like satellites are shifted around differently than in the K-alpha spectrum of Ar you mention, so they may or may not cause strife depending on whether we rely on the w line (which may or may not already carry a lot of natural-broadening error) or the z line (which is a little less bright, so there are signal-to-noise concerns). So that's another sort of moments-vs.-Bayesian decision we have to make once we have real data. It's so convenient when you have fake data and can turn off the troubling bits, haha! Hence I've enjoyed the moments method for quickly scoping performance per diagnostic design.

I do hope to bring ImpRad to WEST one day (once they give me shot time....) to run new experiments for my ICRH-impurity pumpout project I have on CMOD data. ImpRad has been showing some very exciting results! So you are reminding me that I probably need to worry about full spectrum modeling there.... Especially because I want to focus on some of those polluting W lines.

@odstrcilt and yeah, I would find it very beneficial to keep a collaboration on this stuff open. I've had a few very interesting conversations with other people working on the ITER XICS that I've reflected on and applied to my own work. A particular conversation I enjoyed was with Oleksandr Marchuk, where they pointed out an important population mechanism, in atomic-data modeling we both were independently doing for a few Ne-like W lines, that I had been missing. Which is helpful because we both want to use that spectrum.

@cjperks7 (Collaborator, Author)

To anyone interested, I swear I'll get to closing this out in a week or two, but I'm adding to my to-do list:

  • pseudo-Voigt profile
  • 2-photon emission profile (only if PECs from the relevant H-, He-like states are found)

cjperks7 added 6 commits February 13, 2024

…efficient is related to: either central wavelength or adf15 isel
…aussians each with a known variance. NOTE: Assumes no systematic shift
@odstrcilt (Collaborator)

Finally, I have time to work on the synthetic CIXS diagnostic for ITER. @cjperks7, would you recommend some literature for me to read?

So far, I have found these:
The best publication about the CIXS diagnostic:
Beiersdorfer, P., et al. "The ITER core imaging x-ray spectrometer." Journal of Physics B: Atomic, Molecular and Optical Physics 43.14 (2010): 144008.
About the W atomic data:
Beiersdorfer, P., J. Clementson, and U. I. Safronova. "Tungsten data for current and future uses in fusion and plasma science." Atoms 3.2 (2015): 260-272.
About a microcalorimeter as an alternative to an X-ray spectrometer:
Beiersdorfer, P., et al. "The ITER core imaging x-ray spectrometer: X-ray calorimeter performance." Review of Scientific Instruments 81.10 (2010).

Matt Reinke's RSI 2012 paper about the Alcator C-mod system was also very interesting.

@cjperks7 (Collaborator, Author)

@odstrcilt yeah, sure I can. How about I follow up in an email, since I imagine we could have quite a long conversation? Any other interested party with visibility on this PR, speak now if you want to be CC'd.

As a first go, what you list are pretty big ones. I have some more on CIXS and microcalorimeters that I can share. More theory on the former as for SPARC we were far in the design process before considering a microcalorimeter and ended up ruling it out based on available space.

For W atomic data, I should advertise I have EBIT experiments at LLNL coming up in the Spring to look at this for polluting a Ne-like Xe spectrometer that you'd probably want to take advantage of. I know ITER people (I think Novimir was PI) have other EBIT data I want as well.

Anyway, as a heads up, I'll be posting a report either this evening or tomorrow morning implementing your feedback and opening this PR back up for review; just the last bits of debugging remain. I'll tag you and Francesco when I do.

@cjperks7 (Collaborator, Author) commented Feb 19, 2024

@odstrcilt @fsciortino Okay, this branch should be ready for another round of reviews. What's new:

Instrumental Broadening

  • There was a comment during the last review that modeling Instrumental broadening as just a single Gaussian, where the user gives a characteristic variance, precludes it being more complicated, such as a sum of Gaussians
  • The convolution of a bunch of unshifted Gaussians is just another Gaussian whose variance is the quadrature sum, so I put that in there for now (see the sketch below)
  • This could be expanded to a weighted sum of Gaussians and Lorentzians, or even a user-input profile to be convolved with the other broadening mechanisms
  • I haven't continued just because I'm not sure what would be useful to do
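The quadrature-sum rule in question, as a one-line sketch (component widths are illustrative):

import numpy as np

fwhms_A = np.array([1.7e-4, 9.0e-5])       # illustrative component Gaussian FWHMs [AA]
fwhm_tot_A = np.sqrt(np.sum(fwhms_A**2))   # combined Gaussian FWHM [AA]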

Voigt profiles

  • It was commented that I should implement a pseudo-Voigt function to circumvent the convolution step for a potential speed-up
  • https://en.wikipedia.org/wiki/Voigt_profile#Pseudo-Voigt_approximation
  • I implemented the option to use the weighted-sum method in the linked Wikipedia article, but I'm honestly not sure whether I haven't implemented it right or whether I objectively disagree with the method
  • Has anyone used the weighted-sum pseudo-Voigt method before and can see if I made a mistake?
  • I'm not even sure I can agree with the method, because the point of convolving a Gaussian with a Lorentzian is that it takes mass out of the center of the Gaussian and redistributes it into the wings, whereas a weighted sum can only do the opposite
  • To benchmark my convolution method, I added another method that calculates the Voigt profile using the scipy built-in function
  • Differences between the scipy function and my convolution are down to numerics; if I expand and refine the wavelength meshes, the residual converges
  • [Figure: residual between the scipy Voigt profile and the convolution method]
  • Minimum working example:
# Modules
import numpy as np
import aurora.line_broaden as lb

# Fake ADF15 transition wavelength mesh
wave_A = np.r_[2.72, 2.73]

# Fake plasma conditions
Ti_eV = 4e3
mi = 131.29
AA = 1e15

# Dictionary of broadening physics
dbrd_V = {
    'Voigt':{
        'Ti_eV': Ti_eV,        # [eV], ion temperature
        'ion_A': mi,           # [amu], ion mass
        #'key_options': 'wavelength',  # keys are ADF15 transition wavelengths [AA]
        #'2.72': AA,           # [1/s], fictitious Einstein coefficient
        #'2.73': AA,           # [1/s], fictitious Einstein coefficient
        'key_options': 'isel', # keys are ADF15 isel indices
        '1': AA,               # [1/s], fictitious Einstein coefficient
        '2': AA,               # [1/s], fictitious Einstein coefficient
        },
    }

# Calculates profiles
out_pV = lb.get_line_broaden(
    dbroad = dbrd_V,
    wave_A = wave_A,
    use_scipy=False,
    use_pseudo=True,     # Use weighted-sum method
    )
out_V = lb.get_line_broaden(
    dbroad = dbrd_V,
    wave_A = wave_A,
    use_scipy=True,       # Use scipy built-in function
    use_pseudo=False,
    )
  • Note that I added the option to relate the Einstein coefficient to the ADF15 transition either by its wavelength or by its isel index
  • Also note that if the issue with the weighted-sum method is not resolved, I may or may not delete it, but I'll definitely bury the control options you see above for switching to it
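For reference, a minimal sketch of the weighted-sum pseudo-Voigt from the linked Wikipedia article, written with both components sharing the combined FWHM f (which turned out to be the fix; see the exchange further down) and checked against scipy.special.voigt_profile (the exact Voigt). Parameter values are illustrative.

import numpy as np
from scipy.special import voigt_profile

def pseudo_voigt(x, fG, fL):
    # Effective FWHM f and mixing parameter eta, per the Wikipedia recipe
    f = (fG**5 + 2.69269*fG**4*fL + 2.42843*fG**3*fL**2
         + 4.47163*fG**2*fL**3 + 0.07842*fG*fL**4 + fL**5) ** 0.2
    eta = 1.36603*(fL/f) - 0.47719*(fL/f)**2 + 0.11116*(fL/f)**3
    s = f / (2*np.sqrt(2*np.log(2)))                 # Gaussian part, FWHM f
    G = np.exp(-0.5*(x/s)**2) / (s*np.sqrt(2*np.pi))
    L = (f/2) / np.pi / (x**2 + (f/2)**2)            # Lorentzian part, FWHM f
    return eta*L + (1 - eta)*G

x = np.linspace(-0.02, 0.02, 2001)  # [AA] about the line center
fG, fL = 4e-3, 1e-3                 # [AA], illustrative Gaussian/Lorentzian FWHMs
exact = voigt_profile(x, fG/(2*np.sqrt(2*np.log(2))), fL/2)
print(np.max(np.abs(pseudo_voigt(x, fG, fL) - exact)))  # ~1% accuracy, per Wikipedia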

2-photon emission

  • Since I have collisional-radiative modeling going that calculates the population density of states that decay by 2-photon emission, I figured I might as well deal with that by including it in the spectrum modeling
  • I have implemented two methods to do this: 1) using a published data table that ChiantiPy provides tools to interface with, and 2) using a published analytic formula like they do in PyAtomDB
  • Note that the published data table ChiantiPy uses gives distributions up to nuclear charge Z=92, but they arbitrarily cut the downloaded data table at Z=30, so I filled my own copy with the missing data up to Z=92
  • Also note that the analytic fit used by PyAtomDB is independent of nuclear charge and does not differentiate between H-like and He-like. This is because as Z increases, the difference between the H-like and He-like curves disappears. But as Z increases, the distribution (\Psi_0) becomes more strongly peaked at y=0.5
  • [Figure: 2-photon emission distributions versus nuclear charge]
  • Minimum working example:
# Modules
import numpy as np
import aurora.line_broaden as lb

# Fake ADF15 transition wavelength mesh
wave_A = np.r_[2.72, 2.73]

# Fake plasma conditions
Ti_eV = 4e3
mi = 131.29
AA = 1e15

# Dictionary of broadening physics
dbrd_V = {
    'Voigt':{
        'Ti_eV': Ti_eV,        # [eV], ion temperature
        'ion_A': mi,           # [amu], ion mass
        #'key_options': 'wavelength',  # keys are ADF15 transition wavelengths [AA]
        #'2.72': AA,           # [1/s], fictitious Einstein coefficient
        #'2.73': AA,           # [1/s], fictitious Einstein coefficient
        'key_options': 'isel', # keys are ADF15 isel indices
        '1': AA,               # [1/s], fictitious Einstein coefficient
        '2': AA,               # [1/s], fictitious Einstein coefficient
        },
    '2-photon':{
        #'isel': [1],
        'wavelength': [2.72],  # [AA], transition (minimum photon) wavelength
        'Znuc': 30,            # nuclear charge
        'nele': 1,             # number of electrons (only H-like or He-like)
        },
    }

# Calculates profiles
out_V = lb.get_line_broaden(
    dbroad = dbrd_V,
    wave_A = wave_A,
    use_scipy=True,
    use_pseudo=False,
    )

  • Note that, again, you can demark the ADF15 transition that's 2-photon by either its wavelength or its isel index
  • Also note it is assumed that the physics of this transition is only 2-photon emission, as you would expect for the H-like sequence and for 1S0 in He-like. 3S1 in He-like decays much more strongly by magnetic-dipole emission than by 2-photon, consistently by a factor of ~1e4 even across nuclear charge, so it is ignored here. 3S1 also has a different trend with nuclear charge than is captured here. Because of this, though, the 2-photon distribution will override other physics options applied to the flagged transition
  • A hanging thread I hadn't resolved per my references is that I never saw any thermal broadening included; I didn't know why, or whether there should be (I know actual atomic physicists I planned on asking)
    • Not a hanging thread anymore; there's a simple scale-separation argument. 2-photon emission happens via two single-photon emission events mediated by a virtual 2p state (hence 2E1 and such). That means each photon is independently isotropic, so each experiences a probability distribution of Doppler shifts on the order of \Delta\omega = k v_th,i. The 2-photon emission curve is centered around \omega_0/2, so its FWHM is on the order of the photon frequency, \Delta\omega = k c. So as long as you have non-relativistic ions, the Doppler broadening is basically a delta function by comparison and can be ignored (in symbols, see below)
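In symbols, with $k=\omega/c$ the photon wavenumber and $v_{\rm th,i}$ the ion thermal speed, the scale-separation argument reads:

$$\frac{\Delta\omega_{\rm Doppler}}{\Delta\omega_{2\gamma}} \sim \frac{k\,v_{\rm th,i}}{k\,c} = \frac{v_{\rm th,i}}{c} \ll 1$$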

@odstrcilt (Collaborator)

I have fixed the pseudo-Voigt profile; now it agrees pretty well:
[Figure: fixed pseudo-Voigt profile overlaid on the exact Voigt]

@cjperks7 (Collaborator, Author) commented Feb 29, 2024

> I have fixed the pseudo-Voigt profile; now it agrees pretty well:

Oh sick, thanks! What was the issue? From what I understand reading your changes, I was supposed to compute the Gaussian/Lorentzian parts using the same, combined FWHM?

@odstrcilt (Collaborator)

Exactly, both need to use the same FWHM, called "f" on Wikipedia

@cjperks7 (Collaborator, Author)

Okay, let me know if you see any more problems. If we're good, then before merging I'll make the pseudo-Voigt the default

@odstrcilt (Collaborator)

You don't have to make the pseudo-Voigt the default. It is just an approximation, which is useful only if you want to evaluate the line shape really fast, for example for a fast synthetic diagnostic.
