Even while maintaining a push-broom hyperspectral imager “architecture”, there are several permutations of the basic design that would result in slightly different performance values. These include, e.g., which focal length objective lens to use, whether to use a grism or a grating, a reflective vs. refractive relay, etc. On a qualitative level, each design has a unique set of advantages / disadvantages. Quantitatively, however, a definitive conclusion on which design is “best” may require completing high-fidelity simulations and possibly prototyping several promising candidates.
To better understand the options available at the early stages of our design process, it is instructive to consider several scenarios at once and summarize what the performance may look like. This analysis is carried out here.
1. Focal Length vs Instantaneous Spatial Resolution
Selecting a longer focal length objective lens allows us to see “distant objects up close”. For a simple optical imaging system, the angle subtended by the chief rays coming from the centers of adjacent plots of farmland on the Earth is fixed by the viewing geometry, independent of the lens. Stipulating that these rays are incident on the centers of two adjacent pixels, we can derive the pixel-limited spatial resolution:
$\theta =\frac{p}{f} = \frac{\Delta x}{H}$
where $\theta$ is the angle subtended, $p$ is the distance between pixels (pixel pitch), $f$ is the imaging system effective focal length, $H$ is the altitude of the satellite, and $\Delta x$ is the minimum resolvable distance between farm fields. Our sensor has 15µm pixels, and with a 50mm focal length lens at low-Earth orbit (500km), we will be able to resolve points on the Earth 150m apart.
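As a quick sanity check, the relation above is easy to evaluate numerically. The snippet below is a minimal sketch (variable names are our own) reproducing the worked example:

```python
# Pixel-limited resolution relation: theta = p / f = dx / H.
# Numbers match the worked example: 15 um pixels, 50 mm lens, 500 km altitude.
p = 15e-6   # pixel pitch [m]
f = 50e-3   # objective effective focal length [m]
H = 500e3   # satellite altitude [m]

theta = p / f   # angle subtended by one pixel [rad]
dx = theta * H  # instantaneous across-track resolution on the ground [m]
print(f"theta = {theta:.1e} rad, dx = {dx:.0f} m")  # -> theta = 3.0e-04 rad, dx = 150 m
```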
This quantity $\Delta x$ is an “instantaneous” spatial resolution, and applies perpendicular to the satellite’s direction of motion (across track). In # 2 we will discuss the along-track spatial resolution, which is affected by the satellite’s motion over the Earth.
The diffraction limit requires that, for a given focal length, the aperture diameter be no smaller than some value so that light can indeed be focused into a spot no larger than a pixel. This is more simply captured in the working f-number of the system, $F/\# := f/D$, with $D$ being the entrance pupil diameter of the optical system. The Rayleigh criterion gives the diffraction-limited spot size at the focal plane as $\delta = 1.22\,\lambda \cdot F/\#$; requiring $\delta \leq p$, we determine that a system faster than f/7 is required.
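This bound can be evaluated directly by requiring the Airy spot, $1.22\,\lambda \cdot F/\#$, to fit within one pixel. A sketch follows; note that the design wavelength is not stated above, so the value used here is an assumption, chosen only because it reproduces the quoted f/7 figure (it corresponds to the long end of a typical SWIR band):

```python
# Rayleigh-criterion bound on F/#: the Airy spot 1.22 * lam * F# must not
# exceed the pixel pitch p, i.e. F# <= p / (1.22 * lam).
p = 15e-6     # pixel pitch [m]
lam = 1.7e-6  # ASSUMED design wavelength [m]; not stated in the text

f_number_max = p / (1.22 * lam)
print(f"slowest allowable F/# = {f_number_max:.1f}")  # -> ~7.2, i.e. faster than f/7
```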
Based on other preliminary considerations, to obtain adequate signal brightness we require a speed of ~f/3. The last column of the table below estimates what the physical diameter of such a lens would be, including a realistic margin for the mechanical housing.
| Focal Length (w/ 15µm pixels) | Δx (across track) | Lens Diameter (f/3) |
|---|---|---|
| 25 mm | 300 m | 10 mm |
| 50 mm | 150 m | 20 mm |
| 75 mm | 100 m | 30 mm |
| 100 mm | 75 m | 35 mm |
| 150 mm | 50 m | 55 mm |
| 300 mm | 25 m | 110 mm |
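The table can be regenerated with a short loop. This is a sketch: the housing-margin rule below (≈10% over the clear aperture, rounded to the nearest 5mm) is our assumption, chosen to match the tabulated diameters, not a stated design rule:

```python
# Regenerate the table: across-track resolution dx and estimated f/3 lens diameter.
H = 500e3  # altitude [m]
p = 15e-6  # pixel pitch [m]

for f_mm in (25, 50, 75, 100, 150, 300):
    dx = p / (f_mm * 1e-3) * H                     # across-track resolution [m]
    aperture_mm = f_mm / 3.0                       # clear aperture at f/3 [mm]
    housing_mm = 5 * round(1.1 * aperture_mm / 5)  # ASSUMED housing-margin rule
    print(f"f = {f_mm:3d} mm | dx = {dx:3.0f} m | lens dia ~ {housing_mm:3d} mm")
```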
2. Along-Track Spatial Resolution
The satellite is flying over the Earth with a ground-projected velocity of 7.5km/s. Imagine the camera used photographic film to capture the image. A “short” shutter opens the film for exposure for just 0.1 second. In this time, however, the satellite would have swept 750 meters over the ground. Even if one used a very long focal length lens with e.g. 50m instantaneous spatial resolution, the result would be motion-blurred. When considering digital camera sensors, it is clear that fast frame-rate imaging is required. A fast frame rate (e.g. 100fps) allows one to limit the exposure to 0.01 seconds, assuming negligible “down time” during which the camera stops collecting light while its circuitry reads out the image and transfers it to memory storage. In this scenario, motion blur is limited to just 75 meters: each pixel collects light from a 50m extent across track, but from a 50m + 75m = 125m extent along track. In other words, spatial sampling along the direction of motion is limited to one point every 125 meters.
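A minimal sketch of this arithmetic (the function name is ours):

```python
# Along-track footprint = instantaneous resolution + ground distance swept
# during the exposure. Ground-projected velocity from the text: 7.5 km/s.
V_GROUND = 7.5e3  # [m/s]

def along_track_footprint(dx_inst_m: float, t_exp_s: float) -> float:
    blur_m = V_GROUND * t_exp_s   # motion blur accumulated during exposure
    return dx_inst_m + blur_m

print(along_track_footprint(50.0, 0.1))    # film example: 750 m of blur alone
print(along_track_footprint(50.0, 0.01))   # 100 fps case -> 125.0 m sampling
```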
Besides the camera frame rate, or exposure / integration time, another parameter affects the along-track spatial resolution: the slit width. In the push-broom hyperspectral imaging architecture, the slit “selects” one line on the ground. The objective lens projects an image of the Earth onto the plane of the slit, and the dispersive relay reimages this “line on the ground” onto the sensor, spreading the light into its spectrum along the direction perpendicular to the slit axis.
Moving “along” the slit axis on the camera sensor, one “walks” from one adjacent plot of land to the next in the across-track direction on the ground, the resolution being determined by the focal length and pixel pitch (see # 1, instantaneous across-track spatial resolution). Consequently, if the slit is 15µm wide and the relay is 1:1 (unit magnification), the image on the camera sensor will be exactly one row of pixels. By similar reasoning, the instantaneous along-track spatial resolution is the same as the instantaneous across-track spatial resolution, as both are limited by the width of one pixel. (This is before considering the effects of motion blur.)
Now, our camera sensor allows up to a 120fps frame rate (8ms integration time) for a 320×256px region of interest, and 60fps (16ms integration time) for the full 640×512px format. The resulting motion blur is at least 60m or 120m, respectively, assuming deadtime is minimized. Suppose that, for some application, one only cares to get an along-track spatial sampling of 450 meters using an f=50mm lens. We can relax the frame rate requirement to e.g. 25fps ( $\tau_{int}=40ms$ ), for which the motion blur would be 300 meters, and hence spatial sampling = 150m (instantaneous, with slit width = 15µm = 1 pixel) + 300m (motion blur) = 450m. Alternatively, one can set the frame rate to 50fps ( $\tau_{int}=20ms$ ) but widen the slit to two rows of pixels, so the instantaneous along-track field of view becomes 300m. As before, the along-track spatial sampling works out to 450m, however in this case the slit is twice as wide. What are the consequences?
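Both 450m scenarios follow from the same relation, $\Delta y = (w_{slit}/f)\cdot H + v \cdot \tau_{int}$. A quick check (a sketch, with H = 500km assumed as before):

```python
# dy = instantaneous along-track FOV (set by slit width) + motion blur.
H, f, v = 500e3, 50e-3, 7.5e3  # altitude [m], focal length [m], velocity [m/s]

def dy(w_slit_um: float, t_int_ms: float) -> float:
    inst = (w_slit_um * 1e-6 / f) * H   # slit-limited instantaneous FOV [m]
    blur = v * (t_int_ms * 1e-3)        # motion blur during integration [m]
    return inst + blur

print(dy(15, 40))  # 25 fps, 1-pixel slit: 150 + 300 = 450.0 m
print(dy(30, 20))  # 50 fps, 2-pixel slit: 300 + 150 = 450.0 m
```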
It would be best to design the system in anticipation of meeting the strictest requirements, i.e. focusing light through the 15µm slit. On the other hand, it would be remiss not to acknowledge potential room for compromise. The tables below show the along-track spatial sampling given several hardware parameters: lens focal length (f), integration time (t_int), and slit width (w_slit).
$\Delta y$: f=100mm Lens, 15µm pixels
| t_int → w_slit ↓ | 8ms 120fps | 10ms 100fps | 16ms 60fps |
| --- | --- | --- | --- |
| 15µm | 135m | 150m | 195m |
| 25µm | 185m | 200m | 245m |
| 50µm | 310m | 325m | 370m |
$\Delta y$: f=50mm Lens, 15µm pixels
| t_int → w_slit ↓ | 10ms 100fps | 16ms 60fps | 20ms 50fps |
| --- | --- | --- | --- |
| 15µm | 225m | 270m | 300m |
| 25µm | 325m | 370m | 400m |
| 50µm | 575m | 620m | 650m |
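The two tables can be regenerated from the same relation; the loop below is a sketch assuming H = 500km and v = 7.5km/s as above:

```python
# Regenerate the dy tables: dy = (w_slit / f) * H + v * t_int.
H, v = 500e3, 7.5e3  # altitude [m], ground-projected velocity [m/s]

for f_mm, t_ints_ms in ((100, (8, 10, 16)), (50, (10, 16, 20))):
    print(f"dy for f = {f_mm} mm:")
    for w_um in (15, 25, 50):
        row = [(w_um * 1e-6 / (f_mm * 1e-3)) * H + v * t * 1e-3 for t in t_ints_ms]
        print(f"  w_slit = {w_um} um: " + ", ".join(f"{r:.0f} m" for r in row))
```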
3. SNR (Signal-to-Noise Ratio): Impacts of F/# and Integration Time