

LUKE image calibration process

During the ground activities of the integration and test phases of LICIACube, several sessions of calibration measurements were carried out to fully characterize the performance of the instruments. Measurements were taken both with and without external calibrated light sources.

The acquisition of images in dark conditions enabled the characterization of the electrical parameters of the detector. The dark current, fixed-pattern noise and readout noise of the detector, and their dependence on temperature, were measured and characterized for each pixel.

The calibration curves relating radiance to digital counts (DN) of the instruments were obtained from measurements with a calibrated integrating sphere:

$$R\left({\rm{W}}\,{{\rm{m}}}^{-2}\,{{\rm{sr}}}^{-1}\,{{\rm{nm}}}^{-1}\right)=F\left({\rm{DN}}\right)$$

The analysis of the acquired calibration data shows that a B-spline model for the calibration curve provides the best fit to the experimental data.

The characterization was performed at the pixel level, giving for LUKE 3 × 2,048 × 1,088 calibration curves (one curve per pixel for each RGB Bayer filter).
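As an illustration of this step, the following minimal sketch fits a cubic B-spline calibration curve R = F(DN) for a single pixel; the radiance levels and DN values are invented placeholders, not the actual calibration products:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Illustrative calibration points for one pixel: mean DN measured at several
# known radiance levels of the integrating sphere (placeholder values).
dn_levels = np.array([120.0, 480.0, 1050.0, 1900.0, 2800.0, 3600.0])
radiance = np.array([0.002, 0.010, 0.024, 0.047, 0.071, 0.093])  # W m^-2 sr^-1 nm^-1

# Fit a cubic B-spline R = F(DN); one such curve is stored per pixel
# and per RGB Bayer channel.
tck = splrep(dn_levels, radiance, k=3, s=0)

# Evaluate the curve to convert an arbitrary DN value to radiance.
print(splev(2500.0, tck))
```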

The calibration of the acquired scientific images starts from the raw data (acquired frames); the detector temperature (from housekeeping data) and the integration time of the image are used together to calculate the bias frame. This bias frame, composed of the sum of the dark signal and the fixed-pattern noise, is subtracted from the raw image.

The three colour frames given by the Bayer filter are then retrieved after applying the debayering algorithm.

The pixel values in DN of the obtained frames are then converted to radiance (W m⁻² sr⁻¹ nm⁻¹) by applying the calibration curves obtained from the on-ground calibration and confirmed by an in-flight check before the fly-by of the Didymos system. The final calibrated images include three separate planes associated with the three RGB filters produced by the debayering process.
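The full pipeline described above can be sketched as follows. The bias model, the RGGB Bayer pattern, the half-resolution debayering and the use of a single calibration curve per channel (instead of one per pixel) are simplifying assumptions made here for illustration; this is not the actual LICIACube ground-segment code:

```python
from scipy.interpolate import splev

def calibrate_frame(raw, t_det, t_int, bias_model, tck_rgb):
    """Convert a raw LUKE frame (DN) into three calibrated radiance planes.

    raw        : 2,048 x 1,088 array of raw counts
    t_det      : detector temperature (from housekeeping data)
    t_int      : integration time of the image
    bias_model : callable returning the per-pixel bias frame
                 (dark signal + fixed-pattern noise) for (t_det, t_int)
    tck_rgb    : B-spline calibration curves for the R, G and B channels
    """
    # 1. Subtract the bias frame computed from temperature and integration time.
    debiased = raw - bias_model(t_det, t_int)

    # 2. Debayer: split the colour mosaic into three planes (an RGGB pattern
    #    and a simple half-resolution split are assumed here).
    red = debiased[0::2, 0::2]
    green = 0.5 * (debiased[0::2, 1::2] + debiased[1::2, 0::2])
    blue = debiased[1::2, 1::2]

    # 3. Convert DN to radiance (W m^-2 sr^-1 nm^-1) with the calibration curves.
    planes = []
    for plane, tck in zip((red, green, blue), tck_rgb):
        planes.append(splev(plane.ravel(), tck).reshape(plane.shape))
    return planes
```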

Dimorphos shape constraints

The overall size of Dimorphos, as viewed by LICIACube, can be retrieved by combining images in which the lit side of the moonlet is visible with a subsequent subset of images, obtained just after the CA, which show the outline of the dark side of Dimorphos (Extended Data Fig. 4).

Two pairs of images, in which the illuminated and non-illuminated hemispheres can be seen independently, are used to perform this analysis. Each pair of images is acquired within the same acquisition triplet and therefore has very similar observation geometry.

In the short-exposure images (exposure time 0.7 ms), the illuminated hemisphere is clearly visible, whereas in the long-exposure ones (exposure time 35 ms) the non-illuminated part of the asteroid appears as a shadow in the saturated part of the plume.

Knowing the distance between the spacecraft and the target (accurate to about 2 km at CA), the pixel scale in metres is determined for all the images used. After choosing a signal threshold such that the plume and Dimorphos are seen as distinct objects, a classical computer vision algorithm enables the determination of the object sizes. Considering the Dimorphos axis values computed using the DART measurements (that is, x = 177 m, y = 174 m and z = 116 m) (ref. 1), and taking into account that roughly half of the hemisphere area can be visible in each of the selected images, one object with a size between 3,000 m² and 6,000 m² is selected in each image. Furthermore, in one image it is also possible to extract the orientation of the objects and, hence, the axis sizes.
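The object-extraction step can be sketched with a standard thresholding and connected-component labelling approach; the threshold and pixel scale below are placeholders rather than the values used in the analysis:

```python
import numpy as np
from scipy import ndimage

def measure_object_areas(image, threshold, pixel_scale_m):
    """Label connected regions above a signal threshold and return their areas in m^2."""
    mask = image > threshold                     # separate Dimorphos/plume from background
    labels, n_objects = ndimage.label(mask)      # connected-component labelling
    areas_px = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    return np.asarray(areas_px) * pixel_scale_m**2

# Keep only objects whose projected area is compatible with roughly half of a
# Dimorphos hemisphere (between 3,000 m^2 and 6,000 m^2, as described above).
# areas = measure_object_areas(frame, threshold=200.0, pixel_scale_m=4.0)
# candidates = areas[(areas > 3000.0) & (areas < 6000.0)]
```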

In particular, looking at Extended Data Fig. 4, the values of the semi-axis A1 = 80 m and of the axis A2 = 100 m are determined with an uncertainty of 14 m, in good agreement with the DART results, bearing in mind that the entire shape cannot be determined from this single analysis.

Cone geometry methods

Equation (1) gives the geometric relation between a perfectly axisymmetric cone and its projection onto a plane in Euclidean space, where α is the half aperture angle of the original cone, δ is the half angle of the projected cone and θ is the angle between the axis of the original cone and the plane onto which it is projected (Extended Data Fig. 2).

$$\tan \delta =\frac{\tan \alpha }{\sqrt{{\cos }^{2}\theta -{\tan }^{2}\alpha {\sin }^{2}\theta }}$$

(1)

The projected aperture angles (2δ) are measured using LUKE images, and the SPICE data enable the calculation of the camera planes in inertial space, that is, the planes onto which the images are projected at each image acquisition time. Extended Data Table 1 details the image parameters used, and Extended Data Fig. 1 shows cropped portions of the respective images, which were used for the measurement of the projected aperture angle 2δ. The uncertainty of the measurements is the smallest increment of the protractor used, which is 1°.

Deriving an upper limit for the aperture angle

Equation (1) is rewritten as equation (2) for clarity. Equation (2) implies that, given a measured projected half angle δ of a cone, the highest possible half angle α of the original cone is obtained when the angle between the cone axis and the projection plane is 0°. A static cone is assumed over all six observations. The lowest projected aperture angle measured therefore gives the highest possible value of the original cone aperture angle. As such, the upper limit for the aperture angle of the ejecta cone is 140° with an uncertainty of 1°.

$$\tan \alpha =\frac{\tan \delta \cos \theta }{\sqrt{1+{\tan }^{2}\delta {\sin }^{2}\theta }}$$

(2)
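As a numerical illustration of equation (2), for a fixed measured half angle δ the recovered α is largest when θ = 0° (where α = δ) and decreases as the cone axis tilts away from the projection plane; the value of 70° used below corresponds to the smallest measured projected aperture of 140°:

```python
import numpy as np

def alpha_from_delta(delta_deg, theta_deg):
    """Half aperture angle alpha of the original cone (equation (2)), given the
    projected half angle delta and the angle theta between axis and plane."""
    d = np.radians(delta_deg)
    t = np.radians(theta_deg)
    tan_alpha = np.tan(d) * np.cos(t) / np.sqrt(1.0 + np.tan(d) ** 2 * np.sin(t) ** 2)
    return np.degrees(np.arctan(tan_alpha))

theta = np.linspace(0.0, 80.0, 9)
print(alpha_from_delta(70.0, theta))  # the maximum (70 deg, i.e. alpha = delta) occurs at theta = 0
```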

Constraining the axis and the aperture angle of the ejecta cone

Using these measured data and SPICE data, a nonlinear equation is constructed for each observation of the cone. A projected plane is defined by the equation ax + by + cz + d = 0, where a, b, c and d are the coefficients describing the plane and x, y and z are the coordinates. The unit vector of the cone axis is defined as (p, q, r). As these geometric constraints yield θ, the angle θ in equation (1) can be replaced with the quantities defined above and the equation rewritten in the following way:

$$f=-{\tan }^{2}\alpha +{\tan }^{2}\delta \left(1-\frac{{(a\times p+b\times q+c\times r)}^{2}}{{k}^{2}}(1+{\tan }^{2}\alpha )\right)=0$$

(3)

where \(k=\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}\). This equation is the constraint that the cone geometry must satisfy.

In equation (3), four quantities are known from measurements (δ, a, b and c), whereas the others (α, p, q and r) are unknown. Note that α can be constrained on the basis of the above discussion. Thus, four equations are needed to solve for p, q, r and tan²α, from which α is eventually calculated. Five equations of the above form and the equation of the unit vector components lead to six equations in total. As four terms must be solved for, all 15 combinations of choosing four from six equations are tried. The following equations are a possible combination that includes the unit vector equation.

$${f}_{1}=-{\tan }^{2}\alpha +{\tan }^{2}{\delta }_{1}\left(1-\frac{{(ab{c}_{10}\times p+ab{c}_{11}\times q+ab{c}_{12}\times r)}^{2}}{{k}_{1}^{2}}(1+{\tan }^{2}\alpha )\right)=0$$

$${f}_{2}=-{\tan }^{2}\alpha +{\tan }^{2}{\delta }_{2}\left(1-\frac{{(ab{c}_{20}\times p+ab{c}_{21}\times q+ab{c}_{22}\times r)}^{2}}{{k}_{2}^{2}}(1+{\tan }^{2}\alpha )\right)=0$$

$${f}_{0}=-{\tan }^{2}\alpha +{\tan }^{2}{\delta }_{0}\left(1-\frac{{(ab{c}_{00}\times p+ab{c}_{01}\times q+ab{c}_{02}\times r)}^{2}}{{k}_{0}^{2}}(1+{\tan }^{2}\alpha )\right)=0$$

$${f}_{4}={p}^{2}{+q}^{2}+{r}^{2}-1=0$$

$${k}_{0}^{2}={abc}_{00}^{2}+{abc}_{01}^{2}+{abc}_{02}^{2}$$

As an additional check, synthetic cones with known random axes and an aperture angle of 140° are generated and observed from different camera positions such that they are viewed in a side-on profile, similar to the LUKE images. The plane geometry coefficients (a, b, c) that define the camera plane in inertial space are used to compute the projected aperture angles (2δ) for three camera positions. The three nonlinear equations produced by the synthetic cone generation and the unit vector equation are then solved numerically to find the four unknowns. The optimize.root routine of the Python library scipy (ref. 19), which is initiated with guesses of the cone axis and of the aperture angle 2α, is used to solve this system of nonlinear equations. Given the nonlinear nature of the equations, the guessed angle is converted to tan²α before initiating the solving routine. A series of starting guesses is computed by combining different directions for the axis solution with an angle guess for the aperture. The vectorial part of the guess is based on systematically sampling, with sufficient resolution, all possible directions over a unit hemisphere using a spherical coordinate system. The angle guess is appended to each sampled direction, and the routine is iterated over all the guess combinations. Visualizing the results for the solved axis and aperture angle with several plots, the original axis of the synthetic cone is recovered to an accuracy of better than 0.1° in angular separation, and the aperture angle to better than 0.2°.
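A minimal sketch of this solving step is given below, assuming camera-plane coefficients (a, b, c) and measured projected half angles δ for three images; all numerical values are placeholders (the real ones come from SPICE and the LUKE measurements), and the unknown vector is (p, q, r, tan²α):

```python
import numpy as np
from scipy.optimize import root

# Placeholder camera-plane coefficients (a, b, c) and projected half angles delta
# for three images; the actual values come from SPICE data and LUKE measurements.
planes = np.array([[0.31, -0.84, 0.44],
                   [0.28, -0.87, 0.40],
                   [0.25, -0.90, 0.36]])
tan2_delta = np.tan(np.radians([71.0, 72.5, 74.0])) ** 2

def equations(x):
    p, q, r, tan2_alpha = x
    k2 = np.sum(planes ** 2, axis=1)                   # k^2 = a^2 + b^2 + c^2
    dot2 = (planes @ np.array([p, q, r])) ** 2         # (a*p + b*q + c*r)^2
    f = -tan2_alpha + tan2_delta * (1.0 - dot2 / k2 * (1.0 + tan2_alpha))  # equation (3)
    f_unit = p ** 2 + q ** 2 + r ** 2 - 1.0            # unit-vector constraint
    return np.append(f, f_unit)

# One starting guess: a direction sampled on the unit hemisphere plus an
# aperture-angle guess converted to tan^2(alpha).
guess = np.array([0.5, 0.5, np.sqrt(0.5), np.tan(np.radians(70.0)) ** 2])
sol = root(equations, guess)
p, q, r, tan2_alpha = sol.x
print(sol.success, np.degrees(2.0 * np.arctan(np.sqrt(tan2_alpha))))  # aperture angle 2*alpha
```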

As there are several ways of choosing a combination of equations to be solved, a unique solution is not obtained for the cone axis. Therefore, the axis solution needs to be rotated in three-dimensional space so that the rotated cone axis matches the position angle (the angle measured from the projected north pole of the celestial sphere towards the east in the LUKE plane) of the observed ejecta cone axis in the images. It is noteworthy in this context that a twist angle of 15° has to be applied to the image planes before proceeding to a geometrical analysis of the position angle, because of imprecisions in the currently available LICIACube SPICE data. Following this twist-angle correction, the rotation required in the LUKE plane for the projection of the solved cone axis to match the position angle of the ejecta cone axis in the images is first found. Next, the solved cone axis is rotated about the LUKE boresight in three-dimensional space in small angular increments of 0.18°, up to 360°. At each increment, the new axis is projected onto the LUKE plane to find its angular separation from the position angle of the ejecta cone axis in the images. The resulting solution is the new axis with the smallest angular separation from the position angle of the ejecta cone axis in the images, when projected onto the LUKE plane. The position angle of the ejecta cone was measured using the image reported with ID 1 in Extended Data Fig. 1.
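The rotation search described above can be sketched as follows; the boresight vector, the projected north and east directions and the target position angle are placeholders to be taken from the SPICE geometry:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def match_position_angle(axis, boresight, north_proj, east_proj, pa_target_deg, step_deg=0.18):
    """Rotate `axis` about the (unit) LUKE boresight in small steps and return the
    orientation whose projection onto the LUKE plane is closest to the target
    position angle (measured from projected north towards east)."""
    best_axis, best_err = None, np.inf
    for ang in np.arange(0.0, 360.0, step_deg):
        rot = Rotation.from_rotvec(np.radians(ang) * boresight)
        candidate = rot.apply(axis)
        pa = np.degrees(np.arctan2(np.dot(candidate, east_proj),
                                   np.dot(candidate, north_proj)))
        err = abs((pa - pa_target_deg + 180.0) % 360.0 - 180.0)  # wrapped angular offset
        if err < best_err:
            best_axis, best_err = candidate, err
    return best_axis, best_err
```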

Once a candidate solution axis is obtained that matches the position angle of the ejecta cone in the images, the ejecta cone is simulated at the timestamps of the five images used for this analysis, at the observation geometries in which the images were initially acquired (Extended Data Fig. 1). Image ID (6) in Extended Data Fig. 1 is used to reject or accept candidate solutions because of its very different observing geometry compared with the other images. Going through all 15 combinations of the equations, all the candidate solutions obtained after matching the position angle of the ejecta cone in image ID 1 in Extended Data Fig. 1 are explored. An approach similar to that in ref. 20 is applied to show the range of solutions for the cone axis direction that are mathematically possible and the derived solution constrained by the different viewing geometries (Extended Data Fig. 3). The solution is a cone with an aperture angle of 144° and an axis pointing to (RA, DEC) = (137°, +19°). This solution is obtained by solving the combination of the three nonlinear equations formed by images ID (2), (4) and (5) in Extended Data Fig. 1 and the unit vector equation. The obtained aperture angle of 144° exceeds the upper limit of 140° placed above because image ID 1 in Extended Data Fig. 1 does not enter this specific combination of equations. Accordingly, the aperture angle of the ejecta cone is established as 140 ± 4°. The position angle of the axis solution in image ID 1 in Extended Data Fig. 1 is 72° once the twist angle of 15°, needed to account for the imprecisions in the SPICE data, is considered. The angular separation between the cone axis and the incoming DART direction is 10°.

Because of the 15° twist angle required to account for the SPICE imprecisions, the position angle of the ejecta cone in image ID 1 in Extended Data Fig. 1 ranges between 105° and 75°. Consequently, the cone axis ranges between RA = 128° and 145° and DEC = +29° and +7°. This results in an axis solution of (RA, DEC) \(=\,{{137}_{-9}^{+8}}^\circ ,\,+{{19}_{-12}^{+10}}^\circ \).

Filamentary streams

To understand the morphology of the ejecta and to provide a spatial reference, filamentary streams are labelled in the most highly spatially resolved image acquired just before the CA (Fig. 2). Filamentary streams are defined as rectilinear structures extending from the surface of Dimorphos. They are connected to ray crater systems (see ref. 21 and references therein) and may, in future impact and ejecta modelling, constrain the boulder-rich surface morphology, internal structure and shape of the target (refs. 8,22,23).

Using the DART, LICIACube and Dimorphos reference positions calculated through reconstructed SPICE data, 18 filaments can be distinguished extending across the image up to 4 km at an exposure time of 10 ms (Fig. 2). The streams arise nearly radially from the photocentre of the ejecta.

Upper limits on ejection velocities from early structures

Ejecta velocities are determined from a pair of sequential frames, indexed k − 1 and k and separated in time by ∆t, starting from the angular projection measured in the field of view of the instrument. For each observation, the spacecraft position S, the ejecta origin position O, the distance from the spacecraft to the ejecta origin position D, the angular separation of the ejecta structure from the origin θ and the projected ejecta structure extension Pj are defined (see Extended Data Fig. 5a for the labelling). These projected ejecta velocities can be used to estimate the magnitudes of the ejecta velocities when the observations fulfil certain conditions. Assuming that the angle ω is virtually unchanged between the sequential frames, it is possible to postulate

$$\frac{{\sigma }_{k}}{{\sigma }_{k-1}}=\frac{{Pj}_{k}}{{Pj}_{k-1}}=\frac{\Delta {t}_{k}}{\Delta {t}_{k-1}}\,$$

(4)

The projected ejecta structure extension is given as

$${Pj}_{k}=2({D}_{k}\pm {\sigma }_{k})\tan \left(\frac{{\theta }_{k}}{2}\right)$$

(5)

Thus, σk is solved for as a function of the known quantities, with σk−1 eliminated through equation (4):

$${\sigma }_{k}=\left|\frac{\left(\frac{\Delta {t}_{k}}{\Delta {t}_{k-1}}\right){D}_{k-1}{{\rm{FOV}}}_{k-1}-{D}_{k}{{\rm{FOV}}}_{k}}{{{\rm{FOV}}}_{k-1}\pm {{\rm{FOV}}}_{k}}\right|$$

(6)

$${{\rm{FOV}}}_{k}=\tan \left(\frac{{\theta }_{k}}{2}\right)$$

(7)

Finally, substituting these quantities into the cosine law from the triangles defined in Extended Data Fig. 5a,

$${P}_{k}^{2}={V}^{2}\Delta {t}_{k}^{2}={D}_{k}^{2}+{({D}_{k}\pm {\sigma }_{k})}^{2}-2({D}_{k}\pm {\sigma }_{k}){D}_{k}\cos ({\theta }_{k})$$

(8)

where V is the true magnitude of the observed velocity. The projection angle ω is also solved for:

$$\cos (\omega )=\frac{{\sigma }_{k}^{2}-{P}_{k}^{2}-{{Pj}}_{k}^{2}}{-2{P}_{k}{{Pj}}_{k}}$$

(9)

Solving equations (8) and (9) yields two solutions. The solution that gives a coherent velocity across the different sequential frames (that is, the same order of magnitude and the smallest standard deviation) is kept and shown in Extended Data Table 2. Errors are propagated on the basis of an average manual error of 3 pixels when measuring the projected distances.
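A sketch of how equations (5)-(9) could be evaluated for one pair of frames is given below; the sign argument selects one of the two solutions, and the input values in the commented usage line are placeholders rather than the measured quantities of Extended Data Table 2:

```python
import numpy as np

def velocity_from_pair(D_prev, D_k, theta_prev, theta_k, dt_prev, dt_k, sign=+1):
    """Estimate the ejecta velocity magnitude V and projection angle omega from a
    pair of sequential frames, following equations (5)-(9). `sign` (+1 or -1)
    selects one of the two solutions."""
    fov_prev = np.tan(theta_prev / 2.0)  # equation (7)
    fov_k = np.tan(theta_k / 2.0)
    # Equation (6): the distance term sigma_k.
    sigma_k = abs((dt_k / dt_prev) * D_prev * fov_prev - D_k * fov_k) / abs(fov_prev + sign * fov_k)
    # Equation (5): projected extension of the ejecta structure.
    Pj_k = 2.0 * (D_k + sign * sigma_k) * fov_k
    # Equation (8): cosine law gives the true extension P_k, hence V = P_k / dt_k.
    P_k = np.sqrt(D_k**2 + (D_k + sign * sigma_k)**2
                  - 2.0 * (D_k + sign * sigma_k) * D_k * np.cos(theta_k))
    V = P_k / dt_k
    # Equation (9): projection angle omega.
    cos_omega = (sigma_k**2 - P_k**2 - Pj_k**2) / (-2.0 * P_k * Pj_k)
    return V, np.degrees(np.arccos(np.clip(cos_omega, -1.0, 1.0)))

# Placeholder usage (distances in km, angles in rad, times in s):
# V, omega = velocity_from_pair(D_prev=76.0, D_k=71.0, theta_prev=0.012,
#                               theta_k=0.015, dt_prev=165.0, dt_k=168.0)
```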

The orbital configuration of the Didymos system, the DART and LICIACube trajectories, their relative positioning and the instrument framing are calculated through reconstructed SPICE data.

Resolved morphological features and ejection velocities

The morphological features are tracked according to their visual distinctiveness between the frames taken 106 s (DDimo = 376 km) and 118 s (DDimo = 304 km) after the impact. The features are classified according to their apparent morphology: C, clumps; N, bright nodules; and B, filament breaking, merging, discontinuities and undulations (Fig. 3). Their orientation is tracked with respect to the filamentary streams, because many features are observed along the extension of the streams from the surface towards the solar system environment, or in between them.

Both solutions for the estimation of the velocity magnitudes are provided in Extended Data Table 2. As all features are studied in only two frames, it is impossible to identify a preferred solution.

RGB analysis methods

The RGB capabilities of the LUKE camera enable a colour investigation of the plume ejected by Dimorphos. Whereas on rocky surfaces colour differences are related mostly to composition and to alteration by space weathering (ref. 24), in diffuse ejecta plumes such as those observed by LICIACube other effects related to the physical properties of the particles, such as the presence of extremely small grain sizes (ref. 25), can lead to colour changes.

Triplets of images with different exposure times were acquired during the fly-by. The last triplet in which Dimorphos and the plume generated by the DART impact are still almost entirely visible is used for the colour investigation. The triplet is composed of images acquired at 2022-09-26 23:17:03.000 (0.5 ms exposure time), 2022-09-26 23:17:03.004 (4 ms exposure time) and 2022-09-26 23:17:03.024 (20 ms exposure time). For reference on the wavelength range covered by the RGB filters, see ref. 26. On the calibrated images, the background is first evaluated in order to remove all areas not characterized by the presence of the plume. An average value of the background is calculated in the area diametrically opposite to the position of the binary system. The signal-to-noise ratio is then computed for each channel of each image (Extended Data Fig. 6).

At the end of this process, the pixels in which the signal-to-noise ratio is less than 10 are masked. Before evaluating the channel ratios, the solar contribution is removed from the LUKE filters (R = 0.1320, G = 0.1706 and B = 0.1569). The maps resulting from the ratios of the three filters, together with the associated uncertainties, are shown in Extended Data Fig. 7.
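A sketch of the masking and ratio computation is given below, assuming calibrated radiance planes and per-channel background noise estimates as inputs, and assuming that the solar contribution is removed by dividing each channel by the value quoted above (the variable names are illustrative):

```python
import numpy as np

# Solar contribution over the LUKE filters (values quoted in the text).
SOLAR = {"R": 0.1320, "G": 0.1706, "B": 0.1569}

def channel_ratio_map(num, den, noise_num, noise_den, solar_num, solar_den, snr_min=10.0):
    """Mask pixels with SNR < snr_min in either channel, remove the solar
    contribution and return the channel-ratio map (NaN where masked)."""
    mask = (num / noise_num >= snr_min) & (den / noise_den >= snr_min)
    ratio = np.full(num.shape, np.nan)
    ratio[mask] = (num[mask] / solar_num) / (den[mask] / solar_den)
    return ratio

# Example: red/green ratio map for one calibrated image.
# rg = channel_ratio_map(red, green, noise_red, noise_green, SOLAR["R"], SOLAR["G"])
```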


