Processes alerts and applies corrections to convert difference magnitudes into apparent magnitudes. This includes the previous detections coming with the ZTF alerts.
For ZTF, the correction combines the reference source and the difference detection in flux space. An error is propagated for the corrected magnitude, and an additional error, for extended sources, is also calculated.
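As a rough sketch of the idea (not the exact implementation), the correction can be written in flux space. The function name, the sign convention for `isdiffpos`, and the quadrature error propagation below are assumptions for illustration:

```python
import numpy as np

def correct_ztf(mag, e_mag, magnr, sigmagnr, isdiffpos):
    """Sketch: combine reference and difference fluxes into an apparent magnitude.

    mag, e_mag: difference magnitude and its error.
    magnr, sigmagnr: reference magnitude and its error.
    isdiffpos: +1 or -1, the sign of the difference flux (assumed convention).
    """
    flux_ref = 10 ** (-0.4 * magnr)   # reference flux
    flux_diff = 10 ** (-0.4 * mag)    # difference flux
    flux_tot = flux_ref + isdiffpos * flux_diff
    mag_corr = -2.5 * np.log10(flux_tot)
    # Error on the corrected magnitude (assumed: quadrature sum in flux space)
    e_mag_corr = np.sqrt((flux_diff * e_mag) ** 2 + (flux_ref * sigmagnr) ** 2) / flux_tot
    # Additional error for extended sources (assumed: reference error ignored)
    e_mag_corr_ext = flux_diff * e_mag / flux_tot
    return mag_corr, e_mag_corr, e_mag_corr_ext
```

For example, a positive difference exactly as bright as the reference doubles the flux, brightening the corrected magnitude by 2.5 log10(2) ≈ 0.75 mag relative to the reference.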
Presently, there are no correction strategies for any other survey. Any other survey will get the correction fields added, but filled with null values.
To install the repository without support for running the step (only including tools for correction), run:

```shell
pip install .
```

To include the step itself:

```shell
pip install .[apf]
```
```python
from correction import Corrector

detections = [{"aid": "AID1", "tid": "ZTF", ...: ..., "extra_fields": {"distnr": 1, ...: ...}}, ...]

corr = Corrector(detections)
corr.corrected  # Boolean pandas series: whether the detection can be corrected
corr.dubious  # Boolean pandas series: whether the correction/non-correction is dubious
corr.stellar  # Boolean pandas series: whether the source is star-like
corr.corrected_magnitudes()  # Pandas dataframe with corrected magnitudes and errors
```
Including the development dependencies is only possible using poetry:

```shell
poetry install -E apf
```

Run tests using:

```shell
poetry run pytest
```
New strategies (assumed to be survey based) can be added directly inside the module `core.strategy` as a new Python file. The name of the file must coincide with the survey name (lowercase), i.e., a file `atlas` will work on detections with `sid`s such as `ATLAS`, `AtLAs`, etc., but not `ATLAS-01`.
Strategy modules are required to have 4 functions:

- `is_corrected`: returns a boolean pandas series showing whether the detection can be corrected
- `is_dubious`: returns a boolean pandas series showing whether the detection correction status is dubious
- `is_stellar`: returns a boolean pandas series showing whether the detection is likely to be stellar
- `correct`: returns a pandas data frame with 3 columns (`mag_corr`, `e_mag_corr` and `e_mag_corr_ext`)
If detections with no survey strategy defined are part of the messages, these will be quietly filled with default values (`False` for the boolean fields and `NaN` for the corrected magnitudes).
**Important:** Remember to import the new module in `__init__.py` inside `core.strategy`, or it won't be available.
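For illustration, a minimal, hypothetical strategy module for a survey with no real correction might look as follows. Only the function names and return types follow the requirements above; everything else (the module name `atlas`, the always-`False` logic) is a placeholder:

```python
# core/strategy/atlas.py -- hypothetical example strategy module
import numpy as np
import pandas as pd

def is_corrected(detections: pd.DataFrame) -> pd.Series:
    # Placeholder: assume no detection can be corrected for this survey
    return pd.Series(False, index=detections.index)

def is_dubious(detections: pd.DataFrame) -> pd.Series:
    # Placeholder: nothing is flagged as dubious
    return pd.Series(False, index=detections.index)

def is_stellar(detections: pd.DataFrame) -> pd.Series:
    # Placeholder: no source is considered star-like
    return pd.Series(False, index=detections.index)

def correct(detections: pd.DataFrame) -> pd.DataFrame:
    # No correction available: fill the three required columns with NaN
    return pd.DataFrame(
        np.nan,
        index=detections.index,
        columns=["mag_corr", "e_mag_corr", "e_mag_corr_ext"],
    )
```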
These are required only when running the scripts, as is the case for the Docker images.
- `CONSUMER_SERVER`: Kafka host with port, e.g., `localhost:9092`
- `CONSUMER_TOPICS`: topics to consume, as a comma-separated string, e.g., `topic_one` or `topic_two,topic_three`
- `CONSUMER_GROUP_ID`: name for the consumer group, e.g., `correction`
- `CONSUME_TIMEOUT`: (optional) timeout for the consumer
- `CONSUME_MESSAGES`: (optional) number of messages consumed in a batch
- `PRODUCER_SERVER`: Kafka host with port, e.g., `localhost:9092`
- `PRODUCER_TOPIC`: topic to write into for the next step

The scribe will write results in the database.

- `SCRIBE_TOPIC`: topic name, e.g., `topic_one`
- `SCRIBE_SERVER`: Kafka host with port, e.g., `localhost:9092`
- `METRICS_TOPIC`: (optional) topic name, e.g., `topic_one`
- `METRICS_SERVER`: Kafka host with port, e.g., `localhost:9092`
For each release, an image is uploaded to GitHub packages. To download:

```shell
docker pull ghcr.io/alercebroker/correction_step:latest
```
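As a sketch of wiring the environment variables above into a container run (the topic names and group id below are placeholders, not values used by the project):

```shell
# Hypothetical invocation; replace hosts and topic names with real ones
docker run --rm \
  -e CONSUMER_SERVER=localhost:9092 \
  -e CONSUMER_TOPICS=input_topic \
  -e CONSUMER_GROUP_ID=correction \
  -e PRODUCER_SERVER=localhost:9092 \
  -e PRODUCER_TOPIC=output_topic \
  -e SCRIBE_SERVER=localhost:9092 \
  -e SCRIBE_TOPIC=scribe_topic \
  ghcr.io/alercebroker/correction_step:latest
```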