
Prepare open version #76

Merged: 6 commits on Nov 2, 2024

2 changes: 1 addition & 1 deletion .github/workflows/ci.yaml
@@ -19,7 +19,7 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.11", "3.12"]
        python-version: ["3.11", "3.12", "3.13"]

    steps:
      - uses: actions/checkout@v3
38 changes: 0 additions & 38 deletions .github/workflows/docs.yaml

This file was deleted.

36 changes: 36 additions & 0 deletions .github/workflows/python-publish.yaml
@@ -0,0 +1,36 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python#publishing-to-package-registries

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

name: Upload Python Package

on:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # IMPORTANT: this permission is mandatory for trusted publishing
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build
      - name: Build package
        run: python -m build -s
      - name: Publish package distributions to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
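
For reference, this workflow fires only when a GitHub release is published. A minimal sketch of triggering it from the command line (assuming the GitHub CLI `gh` and a hypothetical tag name; PyPI trusted publishing must also be configured for this repository):

```shell
# Create a release for an existing commit; the resulting "published" release event
# starts the "Upload Python Package" workflow above.
gh release create v0.0.1 --generate-notes
```
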
33 changes: 33 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,33 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"


# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
formats:
  - pdf
  - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
  install:
    - method: pip
      extra_requirements:
        - all
        - docs
      path: .
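
A rough local equivalent of the install and build that Read the Docs performs with this configuration (a sketch, not the exact RTD environment; it assumes the `all` and `docs` extras referenced above are defined in the package metadata):

```shell
pip install ".[all,docs]"                           # install punchpipe plus documentation dependencies
sphinx-build -b html docs/source docs/_build/html   # build HTML from docs/source/conf.py
```
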
43 changes: 18 additions & 25 deletions README.md
@@ -1,43 +1,36 @@
# punchpipe

`punchpipe` is the data processing pipeline for [the PUNCH mission](https://punch.space.swri.edu/).
All the science code and actual calibration functionality lives in `punchbowl`. This package
only automates the control segment for the Science Operations Center.

> [!CAUTION]
> This package is still being developed. There will be breaking code changes until v1.
> We advise you to wait until then to use it.

The `punchpipe` is organized into segments, i.e. levels of processing to produce specific
data products. Segments are referred to in code by their ending level,
e.g. `level1` means the Level 0 to Level 1 segment.

## Accessing the data

Coming soon.

## First-time setup
1. Create a clean virtual environment. You can do this with conda using `conda create --name ENVIRONMENT-NAME`
2. Install `punchbowl` using `pip install .` in the `punchbowl` directory.
3. Install `punchpipe` using `pip install .` while in this directory
4. Set up the database credentials Prefect block by running `python scripts/credentials.py`.
- If this file does not exist for you, determine your MySQL credentials and then create a block in Python:
```py
from punchpipe.controlsegment.db import MySQLCredentials
cred = MySQLCredentials(user="username", password="password")
cred.save('mysql-cred')
```
5. Set up databases by running `scripts/create_db.py`.
6. Build all the necessary deployments for Prefect by following [these instructions](https://docs.prefect.io/concepts/deployments/).
- See below for an example:
```shell
./deploy.sh
```
7. Create a work queue in the Prefect UI for the deployments (you will need to run `prefect orion start` to get the UI)
8. Create an agent for the work queue by following instructions in the UI

Coming soon.

## Running
1. Make sure first-time setup is complete
2. Launch Prefect using `prefect orion start`
3. Create agents for the work queues by following the instructions in the UI

## Resetting
1. Reset the Prefect Orion database using `prefect orion database reset`.
2. Remove all the `punchpipe` databases by running `erase_db.sql`
Coming soon.

## Getting help

Please open an issue or discussion on this repo.

## Contributing

We encourage all contributions.
If you have a problem with the code or would like to see a new feature, please open an issue.
Or you can submit a pull request.

## Licensing

16 changes: 11 additions & 5 deletions docs/source/conf.py
@@ -1,3 +1,6 @@
from importlib.metadata import version as get_version
from packaging.version import Version

# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
@@ -16,13 +19,16 @@


# -- Project information -----------------------------------------------------

project = 'punchpipe'
copyright = '2023, PUNCH Science Operations Center'
author = 'PUNCH Science Operations Center'
project = "punchpipe"
copyright = "2024, PUNCH Science Operations Center"
author = "PUNCH Science Operations Center"

# The full version, including alpha/beta/rc tags
release = '0.0.1'
release: str = get_version("punchpipe")
version: str = release
_version = Version(release)
if _version.is_devrelease:
    version = release = f"{_version.base_version}.dev{_version.dev}"


# -- General configuration ---------------------------------------------------
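For reference, a minimal sketch of how the dev-release normalization above behaves, using a hypothetical setuptools-scm style version string:

```py
from packaging.version import Version

v = Version("0.1.dev5+gabc1234")       # hypothetical development version
assert v.is_devrelease
print(f"{v.base_version}.dev{v.dev}")  # "0.1.dev5" -- the local "+g..." part is dropped
```
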
2 changes: 1 addition & 1 deletion punchpipe/controlsegment/db.py
@@ -1,6 +1,6 @@
import os

from sqlalchemy import TEXT, Column, DateTime, Integer, String, Boolean, JSON
from sqlalchemy import TEXT, Column, DateTime, Integer, String, Boolean
from sqlalchemy.orm import declarative_base

Base = declarative_base()
2 changes: 1 addition & 1 deletion punchpipe/controlsegment/util.py
@@ -8,7 +8,7 @@
from prefect_sqlalchemy import SqlAlchemyConnector
from ndcube import NDCube
from punchbowl.data import write_ndcube_to_fits, get_base_file_name
from sqlalchemy import and_, or_
from sqlalchemy import or_

from punchpipe.controlsegment.db import File

56 changes: 0 additions & 56 deletions punchpipe/deliver.py

This file was deleted.

1 change: 0 additions & 1 deletion punchpipe/flows/level2.py
@@ -5,7 +5,6 @@

from prefect import flow, task, get_run_logger
from punchbowl.level2.flow import level2_core_flow
from sqlalchemy import and_

from punchpipe import __version__
from punchpipe.controlsegment.db import File, Flow
3 changes: 1 addition & 2 deletions punchpipe/flows/levelq.py
@@ -4,8 +4,7 @@
from datetime import datetime

from prefect import flow, task, get_run_logger
from punchbowl.level2.flow import level2_core_flow, levelq_core_flow
from sqlalchemy import and_
from punchbowl.level2.flow import levelq_core_flow

from punchpipe import __version__
from punchpipe.controlsegment.db import File, Flow
78 changes: 74 additions & 4 deletions punchpipe/level0/ccsds.py
@@ -2,7 +2,10 @@
import os

import ccsdspy
import numpy as np
from ccsdspy.utils import split_by_apid
from matplotlib import pyplot as plt
import pylibjpeg

PACKET_NAME2APID = {
    "ENG_LZ": 0x60,
@@ -50,12 +53,79 @@ def process_telemetry_file(telemetry_file_path):
    apid_separated_tlm = open_and_split_packet_file(telemetry_file_path)
    parsed_data = {}
    for apid, stream in apid_separated_tlm.items():
        definition = load_packet_def(PACKET_APID2NAME[apid])
        parsed_data[apid] = definition.load(stream, include_primary_header=True)
        if apid not in PACKET_APID2NAME or apid in [96]:
            print(f"skipping {apid}")
        else:
            print(apid, PACKET_APID2NAME[apid])
            definition = load_packet_def(PACKET_APID2NAME[apid])
            parsed_data[apid] = definition.load(stream, include_primary_header=True)
    return parsed_data


def parse_compression_settings(values):
    # return [{'test': bool(v & 1), 'jpeg': bool(v & 2), 'sqrt': bool(v & 4)} for v in values]
    return [{'test': bool(v & 0b1000000000000000),
             'jpeg': bool(v & 0b0100000000000000),
             'sqrt': bool(v & 0b0010000000000000)} for v in values]


def unpack_compression_settings(com_set_val: "bytes|int"):
    """Unpack image compression control register value.

    See `SciPacket.COMPRESSION_REG` for details."""

    if isinstance(com_set_val, bytes):
        assert len(com_set_val) == 2, f"Compression settings should be a 2-byte field, got {len(com_set_val)} bytes"
        compress_config = int.from_bytes(com_set_val, "big")
    elif isinstance(com_set_val, (int, np.integer)):
        assert com_set_val <= 0xFFFF, f"Compression settings should fit within 2 bytes, got \\x{com_set_val:X}"
        compress_config = int(com_set_val)
    else:
        raise TypeError
    settings_dict = {"SCALE": compress_config >> 8,
                     "RSVD": (compress_config >> 7) & 0b1,
                     "PMB_INIT": (compress_config >> 6) & 0b1,
                     "CMP_BYP": (compress_config >> 5) & 0b1,
                     "BSEL": (compress_config >> 3) & 0b11,
                     "SQRT": (compress_config >> 2) & 0b1,
                     "JPEG": (compress_config >> 1) & 0b1,
                     "TEST": compress_config & 0b1}
    return settings_dict


def unpack_acquisition_settings(acq_set_val: "bytes|int"):
    """Unpack CEB image acquisition register value.

    See `SciPacket.ACQUISITION_REG` for details."""

    if isinstance(acq_set_val, bytes):
        assert len(acq_set_val) == 4, f"Acquisition settings should be a 4-byte field, got {len(acq_set_val)} bytes"
        acquire_config = int.from_bytes(acq_set_val, "big")
    elif isinstance(acq_set_val, (int, np.integer)):
        assert acq_set_val <= 0xFFFFFFFF, f"Acquisition settings should fit within 4 bytes, got \\x{acq_set_val:X}"
        acquire_config = int(acq_set_val)
    else:
        raise TypeError
    settings_dict = {"DELAY": acquire_config >> 24,
                     "IMG_NUM": (acquire_config >> 21) & 0b111,
                     "EXPOSURE": (acquire_config >> 8) & 0x1FFF,
                     "TABLE1": (acquire_config >> 4) & 0b1111,
                     "TABLE2": acquire_config & 0b1111}
    return settings_dict


if __name__ == "__main__":
    path = "/Users/jhughes/Desktop/sdf/punchbowl/Level0/packets/2024-02-09/PUNCH_NFI00_RAW_2024_040_21_32_V01.tlm"
    path = "/Users/jhughes/Desktop/data/PUNCH_CCSDS/RAW_CCSDS_DATA/PUNCH_NFI00_RAW_2024_160_19_37_V01.tlm"
    # path = "/Users/jhughes/Desktop/data/PUNCH_CCSDS/RAW_CCSDS_DATA/PUNCH_WFI01_RAW_2024_117_22_00_V01.tlm"
    parsed = process_telemetry_file(path)
    print(parsed[0x20])
    print(parse_compression_settings(parsed[0x20]['SCI_XFI_COM_SET'])[22:44])

    fig, ax = plt.subplots()
    ax.plot(parsed[0x20]['CCSDS_PACKET_LENGTH'])
    plt.show()

    print(parsed[0x20]['CCSDS_PACKET_LENGTH'][22:44])

    img = np.concatenate(parsed[0x20]['SCI_XFI_IMG_DATA'][22:44])
    img = pylibjpeg.decode(img.tobytes())
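
A usage sketch of `unpack_compression_settings` above, with a hypothetical 2-byte register value (the same value may be passed as `bytes` or as an `int`):

```py
from punchpipe.level0.ccsds import unpack_compression_settings

settings = unpack_compression_settings(0x0116)  # equivalently: b"\x01\x16"
# -> {'SCALE': 1, 'RSVD': 0, 'PMB_INIT': 0, 'CMP_BYP': 0, 'BSEL': 2, 'SQRT': 1, 'JPEG': 1, 'TEST': 0}
print(settings["JPEG"], settings["SQRT"])       # 1 1: JPEG-compressed, square-root encoded
```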
