Pipeline to create X-ray temperature maps, pressure maps, surface brightness maps, and density maps of galaxy clusters from the Chandra Data Archive.
This pipeline is based on a pipeline created by Jean-Paul Breuer.
Special thanks to:
Dr. Khan M B Asad
Assistant Professor
Independent University, Bangladesh (IUB), Dhaka, Bangladesh.
Jean-Paul Breuer
Masaryk University, Brno, Czech Republic
Dr. Jeremy Sanders
Max Planck Institute for Extraterrestrial Physics, Germany
System requirements:
Platform Support: Tested on Ubuntu 20.04.5 LTS.
Any multicore CPU, minimum 9 GB RAM, 50 GB storage for all software packages. 75 GB recommended for testing with a cluster system.
1. Install the Anaconda distribution.
Follow the Anaconda installation page for installation.
2. Installing CIAO with conda.
- Run the following command in the terminal to install CIAO, CALDB, and some associated software in a conda environment named "ciao-4.15" (or anything you like):
conda create -n ciao-4.15 -c https://cxc.cfa.harvard.edu/conda/ciao -c conda-forge ciao sherpa ds9 ciao-contrib caldb marx python=3.10 jupyter jupyterlab numpy matplotlib astropy scipy scikit-learn pandas seaborn
- The CALDB, acis_bkgrnd, and hrc_bkgrnd file downloads might fail with a
CondaHTTPError: HTTP 000 CONNECTION FAILED for url
error or because of a slow internet connection. If this happens, remove caldb from the CIAO installation command and follow the alternative download instructions. There are multiple approaches under CALDB alternatives.
Recommended and tested alternative: Install individual conda tar files.
Reference: Installing CIAO with conda page
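To verify the installation, you can activate the new environment and print the CIAO version (a minimal check; ciaover is part of CIAO):
```bash
conda activate ciao-4.15
ciaover   # reports the installed CIAO and CALDB versions
```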
3. Download and install HEASOFT Software.
- Go to the HEASOFT installation page.
- Select "Source Code" and check "PC - Linux - Ubuntu" in STEP 1.
- Select all in STEP 2 and click submit. Alternative (drive download): heasoft-6.31.1src.tar.gz
- Unzip or extract the .tar.gz file (e.g. using "tar zxf [tar file]") and follow the INSTALLATION process to install HEASOFT. For users without sudo access, see the [OPTIONAL] steps at the end.
- To make HEASOFT initialization easy, I have created the following alias:
Get the PLATFORM name:
Go inside the heasoft directory and run:
cd /home/[user_name]/[heasoft_saved_directory]/heasoft-6.31.1/BUILD_DIR
nano config.txt
See line number 4, which will look something like this:
modified Linux system type is x86_64-pc-linux-gnu-libc2.31
So the machine's architecture/PLATFORM is x86_64-pc-linux-gnu-libc2.31
In the terminal, type and run the following:
nano ~/.bashrc
Paste the following command (replace "heasoft-6.31.1" with your downloaded HEASOFT version and "PLATFORM" with your machine's architecture):
alias heainit='export HEADAS=/path/to/your/installed/heasoft-6.31.1/(PLATFORM); . $HEADAS/headas-init.sh'
For example, I created the following alias in the .bashrc script of my system:
alias heainit='export HEADAS=/home/zareef/software/heasoft-6.31.1/x86_64-pc-linux-gnu-libc2.31; . $HEADAS/headas-init.sh'
Save ~/.bashrc and run:
source ~/.bashrc
Type heainit to initialize HEASOFT whenever needed.
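To confirm that HEASOFT initializes correctly, you can run the alias and then query the version (a minimal check; fversion is a standard HEASOFT utility):
```bash
heainit
fversion   # prints the installed HEASOFT release
```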
[OPTIONAL] For users without sudo access
During the INSTALLATION process, check gcc, g++, gfortran, perl, and python3 by running which gcc, which python3, etc. in the terminal. Use these locations in the export commands of the "Building the software" step, as sketched below.
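For illustration, a sketch of those exports, assuming the HEASOFT build picks up the standard CC/CXX/FC/PERL/PYTHON variables and using whatever paths which reports on your system:
```bash
# point the HEASOFT configure step at the tools found with `which`
export CC=$(which gcc)
export CXX=$(which g++)
export FC=$(which gfortran)
export PERL=$(which perl)
export PYTHON=$(which python3)
```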
4. Install CFITSIO
[OPTIONAL] For users without sudo access
In the ./configure part of the instructions, replace --prefix=/usr/local with --prefix=[home]/[usr_name]/local/bin. Run mkdir -p [home]/[usr_name]/local/bin first if that directory does not already exist.
In my case I used /home/zareef/anaconda3/bin (automatically created by the Anaconda installation) as the location of [home]/[usr_name]/local/bin on the server user account.
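As a sketch, the usual CFITSIO source build with a non-root prefix then looks like this (the prefix below is illustrative; use your own writable location):
```bash
./configure --prefix=$HOME/local   # instead of the default /usr/local
make
make install                       # no sudo needed for a home-directory prefix
```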
5. Download and install Contour binning and accumulative smoothing software.
- Open terminal and run the following:
git clone https://github.com/jeremysanders/contbin
- Go to the downloaded folder directory.
cd /home/[user_name]/[download_location]/contbin
- Build:
make
- Copy the built program:
sudo make install
[OPTIONAL] For users without sudo access
- Open terminal and run the following:
git clone https://github.com/jeremysanders/contbin
- Go to the contbin folder.
- Open the Makefile:
nano Makefile
- Edit the following parameters:
You need to change the line in the Makefile that says linkflags=... to have -L/path/of/library at the start, and change the line that says CXXFLAGS=... to have -I/path/of/include/directory at the start.
e.g. (for /usr/local/lib and /usr/local/include):
...
# add -lsocket to below for Solaris
# add -Ldirname to add directory to link path to look for cfitsio
linkflags=-L/usr/local/lib -lcfitsio -Lparammm -lparammm -lpthread
# where to install (not very well tested)
bindir=/usr/local/bin
# sensible compiler flags
export CXXFLAGS=-I/usr/local/include -O2 -g -Wall -std=c++11
export CXX=g++
...
This is the corresponding part of the Makefile for my server account. I used my anaconda3 folder location (created by the Anaconda installation) as the lib, include, and bin path:
...
# add -lsocket to below for Solaris
# add -Ldirname to add directory to link path to look for cfitsio
linkflags=-L/home/zareef/anaconda3/lib -lcfitsio -Lparammm -lparammm -lpthread
# where to install (not very well tested)
bindir=/home/zareef/anaconda3/bin
# sensible compiler flags
export CXXFLAGS=-I/home/zareef/anaconda3/include -O2 -g -Wall -std=c++11
export CXX=g++
...
- Build:
make
- Copy the built program:
make install
To learn more about contbin, see the contbin GitHub repository.
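A quick sanity check that the binary was installed somewhere on your PATH (if you used a custom bindir, make sure that directory is in PATH first):
```bash
which contbin   # should print the location `make install` copied the binary to
```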
6. OPTIONAL: X Windows Virtual Frame Buffer(Xvfb) for Headless Linux Server
Xvfb allows you to run graphical applications without a display.
This is useful for running any application that requires a GUI on a remote server with no display hardware and no physical input devices, like our processing in step 7, where the pipeline automatically opens DS9 to process the region files (see the usage sketch below).
Install:
sudo apt install xvfb
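For example, the xvfb-run wrapper installed with the package starts a throwaway virtual display for a single command; opening DS9 this way is a sketch, assuming ds9 is on your PATH:
```bash
# -a automatically picks a free virtual display number
xvfb-run -a ds9 ~/[data_dir]/[cluster_name]/merged/broad_thresh.img
```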
The tools in steps 7 and 8 below are not used in this pipeline yet. They may be added as an option in future updates.
7. OPTIONAL: Installing the GNU parallel shell tool. (Not used)
Run the following:
conda install -c conda-forge parallel
Reference:
Anaconda parallel package link
8. OPTIONAL: Install the SPEX software package. (Not used)
Follow the SPEX installation guide from here.
To download ChandraCluster_mapPipeline, simply run:
git clone https://github.com/ZareefJafar/ChandraCluster_mapPipeline.git
Go to the folder:
cd ~/ChandraCluster_mapPipeline
There are several Python scripts (.py files). Running each script generates a bash script (.sh file).
Let's start!!!
Step 0: Creating directories
- Everything will run in the conda environment, so activate it first:
conda activate ciao-4.15
- Run directory.py and enter the requested information:
python directory.py
Step 1: Run PreProcessing_download_data.py
python PreProcessing_download_data.py
bash preprocessing.sh
Step 2: Run PreProcessing_reprocess_data.py
python PreProcessing_reprocess_data.py
bash preprocessing.sh
Bug list and solutions:
- pget_error. Solution: go to the link.
- cannot import name 'object' from 'numpy' (problems with NumPy 1.24). Solution: enter the ciao environment and run conda install -c anaconda numpy=1.23.5
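A minimal check of which NumPy version the ciao environment currently has:
```bash
python -c "import numpy; print(numpy.__version__)"   # should print 1.23.5 after the fix
```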
Step 3: Run PreProcessing_flare_filter.py
python PreProcessing_flare_filter.py
bash preprocessing.sh
Step 4: Run PreProcessing_merge_data.py
python PreProcessing_merge_data.py
bash preprocessing.sh
Step 5: Run PreProcessing_merge_data_flux.py
python PreProcessing_merge_data_flux.py
Step 6: Removing point source from merged image
This step requires the DS9 application, which has a graphical interface. If you are running the pipeline on a remote server without any graphical interface, refer to the [OPTIONAL] part below.
- Open broad_thresh.img with ds9. This file should be located inside the merged folder inside the cluster data folder:
ds9 ~/[data_dir]/[cluster_name]/merged/broad_thresh.img
- We need to create 3 region files from the broad_thresh.img file:
src_0.5-7-nps-noem.reg:
A region file that contains all the cluster emission (e.g. a large circle around the cluster that includes the extended emission), which will be removed and used for the deflaring/high-energy rescaling. This should include areas such as the peak of the cluster emission, as these regions may contain high-energy events you want to consider in this analysis.
broad_src_0.5-7-pointsources.reg:
A region file that contains all of the point sources. These are typically foreground point sources one does not want to consider when analyzing the cluster.
square.reg:
This will eventually crop out everything outside the region of interest.
Region file format: Region - ciao, Coordinate System - wcs
Save location: ~/[data_dir]/[cluster_name]/regionfiles
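For reference, a region file saved from ds9 in CIAO format with wcs coordinates looks roughly like this (the shape and coordinates are purely illustrative):
```
# Region file format: CIAO version 1.0
circle(16:32:46.9,+5:34:32.5,2.5')
```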
- Run PreProcessing_source_crop.py:
python PreProcessing_source_crop.py
- Run the generated preprocessing.sh:
bash preprocessing.sh
- [OPTIONAL] For remote systems without any graphical interface
Copy the ~/[data_dir]/[cluster_name]/merged/broad_thresh.img file to your local machine and follow the above steps for creating the 3 region files. Finally, copy the region files to the ~/[data_dir]/[cluster_name]/regionfiles directory of the remote machine and continue with the pipeline there.
Step 7: Run Preliminary_Products_contourbin.py
For systems without sudo access, go to the base environment (conda deactivate) before running the following commands.
python Preliminary_Products_contourbin.py
bash preliminary_products.sh
Go back to the ciao environment (conda activate ciao-4.15) and continue from step 8.
Step 8: Converting region file coordinate system syntax
- Convert the region file coordinate system syntax.
Input: enter 'n' if you are running the pipeline on a machine with no access to the graphical interface of the operating system, 'y' otherwise.
python RegCoordChange.py
bash regcoordchange.sh
Step 9: Pre fitting
Running this step will take a long time depending on the data. To run it on a remote server or another computer system, follow the FOR REMOTE MACHINE instructions below. Future work includes adding CPU/GPU parallel processing and a resume option.
- [OPTIONAL]: To run only step 9 on a remote server
Transfer all the data files and script files to the remote server. Make sure the remote server has CIAO and HEASOFT installed. Then run change_machine.py:
python change_machine.py
input: /....../[new_data_dir]\
- [OPTIONAL] For users without sudo access
First, go to the script/code directory:
cd ~/.../ChandraCluster_mapPipeline
Make a tmp folder:
mkdir tmp
chmod 777 tmp
Set the ASCDS_WORK_PATH environment variable. See "Bugs: wavdetect and specextract tmpdir" on the CIAO website for detailed information.
Check the current ASCDS_WORK_PATH value:
printenv ASCDS_WORK_PATH
Now change it to your created tmp folder path:
export ASCDS_WORK_PATH=$PWD/tmp
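Note that export only affects the current shell session; to make the setting persist across logins, you could append it to ~/.bashrc (a sketch, using the tmp path created above):
```bash
echo "export ASCDS_WORK_PATH=$PWD/tmp" >> ~/.bashrc
```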
Continue running the remaining steps from the remote server.
- Run Processing_pre_fitting.py
python Processing_pre_fitting.py
- Initialize HEASOFT using the alias we created previously:
heainit
- Run the generated pre-fitting.sh:
bash pre-fitting.sh
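Since this step can run for hours, on a remote server you may want it to survive a dropped SSH session; a sketch using nohup (the log file name is illustrative):
```bash
# keep running after logout, logging all output to a file
nohup bash pre-fitting.sh > pre-fitting.log 2>&1 &
tail -f pre-fitting.log   # watch progress; Ctrl+C stops the tail, not the job
```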
- While running pre-fitting, you may see warnings related to OBS_ID and background files. Ignore them.
- You may encounter a specextract zero count error. Ignore it for now.
Step 10: Run Processing_xspecfitting.py
- Run Processing_xspecfitting.py
python Processing_xspecfitting.py
- Go to the specfile_output folder. You will find a file named xspecfitting.sh:
cd ~/[data_dir]/[cluster_name]/specfile_output
- Run xspecfitting.sh:
bash xspecfitting.sh
- If you face the following error, just type exit and press Enter.
- This particular error will repeat endlessly; just press Ctrl+C to stop it and type exit to continue.
- Return to the Python script folder and continue from step 11:
cd ~/ChandraCluster_mapPipeline
Step 11: Run ParseOutput_xspec.py
python ParseOutput_xspec.py
Step 12: Run cleanup.py
python cleanup.py
Step 13: Final step, creating maps
- Run pipeline_maps.py:
python pipeline_maps.py
All the maps will be saved in the maps folder of the data folder: ~/[data_dir]/[cluster_name]/maps
DONE!!!!!!!!
- All the generated data products, including the generated bash scripts and maps of some galaxy clusters produced with the pipeline: drive
- This paper by J. P. Breuer discusses the image analysis of the A2256 cluster.
Some resources that helped me work with this pipeline and with my ongoing work on detecting cold fronts in galaxy clusters with potential minihalos:
- Galaxy Clusters, ARGI
- Contour binning: a new technique for spatially resolved X-ray spectroscopy applied to Cassiopeia A
- The Galaxy Cluster 'Pypeline' for X-ray Temperature Maps: ClusterPyXT, arXiv
- Study of the formation of Cold Fronts and Radio Mini-halos induced by the Intergalactic Gas Sloshing in the Cores of Galaxy Clusters
- The Mergers in Abell 2256: Displaced Gas and its Connection to the Radio-emitting Plasma
- X-ray spectroscopy of galaxy clusters: studying astrophysical processes in the largest celestial laboratories
- A Brief Intro to the Chandra Mission by Jonathan McDowell
- An X-ray Data Primer
- Occurrence of Radio Minihalos in a Mass-limited Sample of Galaxy Clusters
- Expanding the Sample of Radio Minihalos in Galaxy Clusters
- Diffuse Radio Emission from Galaxy Clusters
- Different binning approaches