G. Z. Angeli*a, B. Xina, C. Clavera, M. Chob, C. Dribuschb, D. Neilla, J. Petersonc, J. Sebaga, S. Thomasa

aAURA LSST Project, 950 N. Cherry Ave., Tucson, AZ 85719
bAURA NOAO, 950 N. Cherry Ave., Tucson, AZ 85719
cPurdue University, 610 Purdue Mall, West Lafayette, IN 47907

All of the components of the LSST subsystems (Telescope and Site, Camera, and Data Management) are in production.

The major systems engineering challenges in this early construction phase are establishing the final technical details of

the observatory, and properly evaluating potential deviations from requirements due to financial or technical constraints

emerging from the detailed design and manufacturing process. To meet these challenges, the LSST Project Systems

Engineering team established an Integrated Modeling (IM) framework including (i) a high fidelity optical model of the observatory, (ii) an atmospheric aberration model, and (iii) perturbation interfaces capable of accounting for quasi-static and dynamic variations of the optical train.

The model supports the evaluation of three key LSST Measures of

Performance: image quality, ellipticity, and their impact on image depth. The various feedback loops improving image

quality are also included. The paper presents application examples, such as an update to the estimated performance of the Active Optics System, the determination of deployment parameters for the wavefront sensors, the optical evaluation of

the final M1M3 surface quality, and the feasibility of satisfying the settling time requirement for the telescope structure.

Keywords: integrated model, optical performance, image quality, LSST

The Large Synoptic Survey Telescope (LSST) is in the second year of its construction [1]. As the various components of the observatory become available, the three major subsystems (Telescope and Site [2], Camera [3], and Data Management [4]) will each go through comprehensive, well planned integration and test procedures before they are assembled together. The commissioning phase of LSST starts in 2019, and it has three major components: (i) a

period of telescope commissioning and system tests using a commissioning camera, (ii) a period dominated by the

technical activities of integrating the Camera, Telescope, and Data Management subsystems and verifying the integrated observatory against system

requirements, and (iii) a period focused on characterizing the system with respect to the survey performance

specifications and science expectations.

One of the major systems engineering responsibilities during construction and commissioning is to maintain a thorough

performance estimate for the observatory, in support of continuous compliance evaluation. In particular, LSST Project

Systems Engineering is tracking performance metrics, as indicated in Figure 1. This paper describes the tools and

corresponding simulation framework to estimate some of the critical Measures of Performance: Image Quality,

Ellipticity, and Image Depth.

In its lifetime of 10 years, LSST will obtain a large number of images (with a median larger than 825) of any sky object it observes in the Wide-Fast-Deep survey fields. Due to the statistical evaluation of the data, the overall survey

performance of the observatory (image depth, ellipticity correlation) will significantly surpass the achievable single visit

metrics. The LSST Science Requirements Document [5] separately specifies single visit and survey requirements. This

paper focuses on tools developed for estimating single visit performance metrics.

*gangeli@lsst.org; phone 1-520-318-8413; lsst.org

In evaluating technical and programmatic choices during construction, the various options of a trade study need to be compared using the same metric. The impact of a particular trade option on the optical performance of the observatory is

usually a good measure of its criticality. Many times a technical change request or request for waiver precipitates such

comparisons, which in turn requires a reliable, validated set of simulation tools capable of linking technical

specifications and tolerances to critical Measures of (optical) Performance.

Figure 1 LSST performance metrics tracked by the project (not including Data Management Key Performance Metrics).

Integrated étendue is a derived metric characterizing the science capability of the observatory [6].

In subsystem and then system verification, the preference is always to measure the specified parameter directly. However, there are important system parameters that do not lend themselves to direct measurement. For these parameters, verification is

usually done by analysis. A prominent example is image quality degradation due to major components of the system.

While the overall image quality of the observatory is certainly measurable, the individual contributions of the camera,

telescope optics, dome seeing, or atmosphere can only be estimated through modeling and simulations.

Identifying these individual contributions is important for verifying the performance of the properly functioning

observatory. It is equally important to account for anomalies of the system during integration and commissioning, as

reproducing those malfunctions in the model helps tremendously in eliminating them. A simulation framework linking

physical (environmental, operational, and technical) parameters to the measurable optical effects is a critical tool for the

LSST commissioning team.

The LSST project previously developed an elaborate simulation system for predicting the science performance of the

observatory, including (i) a catalog simulator (CatSim) for providing representative sky targets in the LSST field of

view, (ii) an image simulator (PhoSim) for propagating those sky targets through the optical system, and (iii) an

operations simulator (OpSim) generating realistic observing cadences for the 10 year LSST experiment [7]. The toolset

presented in this paper capitalizes on these former developments. It uses PhoSim as its optical engine, OpSim to provide realistic operational parameter statistics, and CatSim to anchor sky statistics. On the other side of the synergy,

the image and operations simulators are absorbing our detailed system descriptions and results into higher fidelity

science simulations.

The simulation tools presented in the paper fit into a framework. There are well established and validated interfaces

between the various components, and those components can be run concurrently, as needed. However, there was no

effort invested into developing an application layer integrating the components into a single tool running uniform code in

a uniform environment. Such a development was deemed outside of the scope of the LSST construction project.

The objective of system performance modeling is to evaluate the optical effects of various implementation imperfections

and disturbances [8]. Performance simulations as represented here do not address the effects of post-processing of the

collected data (DM algorithms), but rather the "raw" performance of the system. For this paper, system means the aggregate of what the project builds and places on the summit, i.e. the dome, telescope, and camera, but does not include the natural environment, like atmosphere and wind. The environment is considered input to the system and its model. The system can also be defined as the hardware and "firmware", the latter meaning the active control loops acting on direct measurements of the state of the system.

Figure 2 Conceptual block diagram of the LSST system, as it is included in the integrated modeling framework

A graphically simplified, but comprehensive block diagram of the system is captured in Figure 2. It shows the delivered

“hardware” system in blue, while the environment is indicated in yellow and the various control systems operating on

the system in orange. For the sake of visual clarity, some secondary connections and data flows are omitted. For the

same reason, some functionalities and subsystems are combined into single blocks:

- The Controller A block includes Look-up-Table (LUT) feed-forward, as well as Force Balance, thermal, guider, and wavefront sensor (WFS) feedbacks. The architecture of the Active Optics System was reported in earlier publications [9, 10].
- Actuators operating on the structure include both mechanical (force and position) and thermal (heaters and blowers) effects; the arrow in the diagram indicates commands, as the actuators are part of the Structural System.
- Controller B represents the thermal control of the focal plane, critical for sensor performance.

- The Structural System includes both the telescope (M1M3 and M2 mirror glass with all the corresponding actuators, as well as the hexapods and rotator for M2 and the camera) and the camera (camera body, lenses and filter with their mounts, as well as the entrance window of the Dewar and the focal plane).
- The Structural System is impacted by the various perturbations (thermal environment inside the dome and inside the camera, actuator behavior and noise, fabrication and installation errors).
- The Detector includes all the solid state effects inside the CCD relevant to the optical image quality.
- Optics position stands for the rigid body positions of each optical element, while optics shape denotes optical surface figure errors, in both cases including the focal plane and its components, the rafts and sensors.

For conceptual clarity, the optical system is separated from the detector as well as from the atmosphere. In the physical

system, the boundaries are clear; in numerical models these interfaces are much harder to define.

Supervisory control, i.e. the coordination of the various controllers by setting and synchronizing their sampling rates, is included at a notional level. It is realized by the Telescope and Camera Control Systems (TCS and CCS), and while it is

clearly outside of the scope of this paper, actual timing should be well understood for correct simulations.

The core of the modeling framework, the optical engine, is PhoSim [11]. It is a Monte-Carlo simulator generating

photons above the atmosphere and then propagating them through the atmosphere and the observatory optical system,

into the silicon of the sensor, and converting them to electrons and eventually ADU (Analog-to-Digital Unit) counts for

each pixel.

PhoSim is uniquely suited for our modeling purposes, thanks to its features tailored to generate focal plane images of high numbers of sky objects.

- It is highly optimized for computational speed, enabling large scale Monte-Carlo simulations;
- It delivers very high resolution images of the sky targets;
- It provides very fine resolution chromatic (wavelength) sampling of the system behavior, due to its extensive Monte-Carlo approach;
- It accounts for sensor effects.

PhoSim has well-defined interfaces that enable it to accept perturbations such as alignment errors, fabrication errors,

mirror bending, and thermal and gravitational deformations. The rigid-body degrees of freedom of the individual optical

elements can be easily controlled. The perturbations to the shape of the various optical surfaces can be accounted for in

the form of Zernike polynomials, mirror bending modes, or just an arbitrary grid surface. To define a grid surface, the

user can provide either a ZEMAX grid sag file or raw FEA output of the grid coordinates and their displacements.

The fidelity of the optical model, together with the implementation of the perturbations, is validated against the official

LSST ZEMAX model by means of the optical sensitivity matrix [12]. For small perturbations, the telescope can be

described by a linear optical model with proper fidelity. The sensitivity matrix describes how the exit pupil wavefront, i.e. the Optical Path Difference (OPD), of the optical system responds to the perturbations. For LSST, the degrees of freedom that

need to be controlled include those in the two actively supported mirror systems and the positioning of the two mirror

systems by the two hexapods. The current design is that the 20 lowest-order bending modes on each mirror substrate,

including M1M3 and M2, will be actively controlled. A total of 50 degrees of freedom will be controlled by the Active

Optics System (AOS). At the same time, the control of this large number of degrees of freedom requires the ability to measure high-order aberrations in the wavefront. Annular Zernike polynomials Z4-Z22, in the Noll/Mahajan definition [13, 14], will be measured and used. This makes the sensitivity matrix at any field position a 19 by 50 matrix. For the validation, we perturb each degree of freedom by incremental magnitudes, in both positive and negative directions, and let PhoSim calculate the OPD. We then fit each OPD to annular Zernikes. System linearity is verified, and the sensitivity matrix element A_ij = dz_i/dx_j is calculated at each field point, where x_j is the j-th perturbation, and z_i is the coefficient of the i-th annular Zernike term. Both the OPD map and the sensitivity matrix elements are compared against ZEMAX.
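The finite-difference estimation of a sensitivity matrix element described above can be sketched in a few lines of Python. The stand-in "system" and its numbers below are purely illustrative, not actual LSST values or PhoSim calls; the point is the symmetric perturbation and slope fit.

```python
# Sketch: estimating one sensitivity matrix element A_ij = dz_i/dx_j by
# finite differences, as in the PhoSim vs. ZEMAX validation described above.
# The "system" here is a stand-in linear model with illustrative numbers,
# not a real raytrace.

def zernike_coefficient(perturbation):
    # Stand-in for "perturb the system, raytrace, fit the OPD to annular
    # Zernikes, return the coefficient z_i": slope 0.42 (nm per micron)
    # plus a tiny quadratic term mimicking residual non-linearity.
    return 0.42 * perturbation + 1e-4 * perturbation**2

def sensitivity_element(system, magnitudes):
    # Perturb in both positive and negative directions and fit a common
    # slope through the origin (least squares): A = sum(x*z) / sum(x*x).
    xs = list(magnitudes) + [-m for m in magnitudes]
    zs = [system(x) for x in xs]
    return sum(x * z for x, z in zip(xs, zs)) / sum(x * x for x in xs)

A_ij = sensitivity_element(zernike_coefficient, [0.1, 0.25, 0.5])
```

Note that the symmetric (positive and negative) perturbations cancel the even-order non-linear residual, so the fitted slope recovers the linear sensitivity.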

As an example, the top row of Figure 3 shows the on-axis OPD maps determined by ZEMAX and PhoSim with 0.5 microns of bending mode #5 on the M1M3 surface. The difference of the two is shown on the bottom left of Figure 3. The peak-to-valley difference is no more than a few nanometers. The RMS of the difference is only 0.1 nm. It is obvious from the OPD maps that this bending mode mostly affects Z10, the trefoil that is symmetric about the x-axis. The bottom right of Figure 3 shows Z10 determined by ZEMAX and PhoSim, with -0.5, 0, and +0.5 micron of M1M3 bending mode #5. It is seen that Z10 varies linearly with M1M3 bending mode #5. The sensitivity measurements made using the PhoSim and ZEMAX OPD maps provide almost identical results. The sensitivity matrix element in question here is A_10,15. The perturbation index is 15 because there are a total of 10 rigid-body degrees of freedom on the M2 and the camera hexapods that precede the bending modes in the matrix column indexing.

Figure 3 Top row: on-axis OPD maps determined by ZEMAX (top left) and PhoSim (top right) with 0.5 microns of bending mode #5 on the M1M3 surface. Bottom row: the difference between the OPD maps by PhoSim and ZEMAX (bottom left) and the determination of the sensitivity matrix element.

The LSST AOS control optimizes performance metrics across the focal plane. This is done using sensitivity matrices at 30 Gaussian Quadrature field points to link the optics state to the OPD and optical performance at these field points [15]. The sensitivity matrices at the centers of the four wavefront sensors are routinely used to estimate the system state. These, together with the center of the field, constitute a total of 35 field points. The validation of all the 35 x 19 x 50 = 33,250 sensitivity matrix elements is now part of the PhoSim automated validation pipeline, which runs every few days. The differences between the PhoSim and ZEMAX results are compared to the tolerance on our image quality metric, PSSN, which we will discuss in Section 3.1. The tolerance on PSSN is set to be 0.001. We are below this tolerance for all 33,250 elements.

The comparisons described in this section validate almost all the optics related algorithms and parameters in PhoSim.

The OPD calculations are shown to be accurate to the nanometer level. Other PhoSim components that are also tested include the raytrace components, the optical design implementation, the perturbation file interface, the interpolation methodologies, and the coordinate systems and sign conventions. These validations establish PhoSim as a high-fidelity optical model with a tested perturbation interface.

The atmospheric model used in our simulations is based on LSST site testing data. A DIMM (Differential Image Motion Monitor) instrument was directly measuring the variance of the differential atmospheric image motion in two small aperture telescopes pointed to the same stars. This variance is directly related to the atmospheric Fried parameter, r0, which is the primary output of the instrument. Tokovinin [16] suggested to always characterize atmospheric seeing as measured by a DIMM with the corresponding FWHM.

FWHM_DIMM = 0.976 λ / r0 (1)

The DIMM acts as a spatial filter, as it is insensitive to wavefront perturbations that are smaller than its apertures or larger than the separation of its two telescopes. Its exposure time is very short (5-20 ms) to "freeze" atmospheric image motion. Consequently, the DIMM is not sensitive to the low frequency part of the turbulence spectrum affected by the outer scale; it is measuring r0 in the inertial (Kolmogorov) range.

On the other hand, the long exposure PSF, i.e. the image quality detected by a telescope, is sensitive to the outer scale (L0) of the atmosphere. Assuming 10 m/s wind speed and 30 m outer scale, any exposure longer than 3 s is affected by the break-down of correlation in the atmosphere. According to the experience with various telescopes, image quality can be defined by a von Kármán type atmosphere and the corresponding PSF shape [16, 17].

FWHM = FWHM_DIMM √(1 - 2.183 (r0/L0)^0.356) (2)

Figure 4 shows the various PSF shapes used in the literature to approximate the effect of atmospheric aberrations. All the PSF shapes shown have FWHM of 0.6" at 500 nm. For the double Gaussian, the standard deviation of the second Gaussian is twice that of the first Gaussian, and the ratio of the peak amplitudes is 0.1. The power of the Moffat profile is 4.765. These parameter values are chosen so that each shape is a good description of the typically-observed seeing profile within the core. It is obvious from the figure that the von Kármán shape has a more pronounced "tail" than the other approximations. As our image quality metrics are linked to the square integral of the PSF, its shape has a remarkable effect on the metrics.

The LSST Science Requirements Document [5] defines the fiducial atmosphere for the project with a median FWHM of 0.6" at the optical wavelength of 500 nm, corresponding to atmospheric seeing measurements at the prospective LSST site. The measured DIMM seeing (median r0 of 13.8 cm) was converted into achievable image quality by accounting for the outer scale of the atmosphere, assumed to be 30 m at the LSST site [18].
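As a numerical cross-check of Equations (1) and (2), a few lines of Python reproduce the fiducial 0.6" seeing from the quoted median Fried parameter and the assumed outer scale (function names are ours, for illustration):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206265

def fwhm_dimm(wavelength, r0):
    # Equation (1): Kolmogorov (DIMM) seeing FWHM, converted to arcseconds.
    return 0.976 * wavelength / r0 * ARCSEC_PER_RAD

def fwhm_von_karman(wavelength, r0, L0):
    # Equation (2): correction for a finite outer scale L0 (von Karman).
    return fwhm_dimm(wavelength, r0) * math.sqrt(1.0 - 2.183 * (r0 / L0) ** 0.356)

# Median LSST site conditions quoted in the text: r0 = 13.8 cm at 500 nm,
# outer scale L0 = 30 m.
seeing = fwhm_von_karman(500e-9, 0.138, 30.0)  # -> about 0.60 arcsec
```

The 0.73" Kolmogorov seeing implied by the DIMM measurement indeed shrinks to the fiducial 0.6" once the 30 m outer scale is taken into account.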


Figure 4 Atmospheric PSF shapes used in the literature, all with FWHM of 0.6" at 500 nm: (a) normalized to unit power; (b) normalized to unit maximum intensity; (c) is the same as (a), but with log scale; and (d) shows the encircled energy. We are using the von Kármán model in our integrated simulations. The additional parameters uniquely defining these profiles are discussed in the text.

The atmosphere used in our integrated simulations is embedded in PhoSim [11]. A common 7-layer frozen turbulence approach is included in the tool, where the layers drift with different but correlated wind directions and speeds. Each 5 km by 5 km layer is constructed by the linear superposition of four phase screens with pixel sizes of 1, 8, 64, and 512 cm. The validation of this model against the expected von Kármán atmospheric structure function was reported by Peterson [11].

We introduce here three different mathematical tools for representing the mechanical structure [19]. They constitute increasing levels of abstraction, from Newton's second law as applied to the real physical system to its state-space representation enabling meaningful system minimization and controls.

A nodal model of a structure is characterized by the mass (M), damping (D), and stiffness (K) matrices, the initial and boundary conditions for nodal displacements (q) and velocities (q'), as well as the sensor outputs (y). In the special case of localized masses connected with springs and dampers, the nodal model is the collection of the Newtonian equations of motion. In general, for a distributed parameter system the choice of the nodes is somewhat arbitrary, but limited by practical considerations.

M q'' + D q' + K q = B0 u,  y = C0 q (3)

B0 and C0 are the input and output matrices, relating the input forces (u) and the outputs to the nodal displacements. In the general case, the nodal differential equations are highly coupled, as any single node has numerous connections ("springs") to other nodes.

However, under some conditions there exists a special linear transformation that de-couples the second order differential equations through their eigenvalue decomposition. By introducing the matrix of eigenvectors (mode shapes, Φ) and the modal coordinates (q_m), the nodal displacements can be expressed as the linear combination of the mode shapes.

q = Φ q_m (4)

The equation of motion for the modal coordinates (modal participation vector) is decoupled, i.e. the coefficient matrices are diagonal. Expanding system behavior into an orthonormal basis set (the modes) provides great insight and facilitates systematic minimization of the system (modal reduction).

q_m'' + 2 Z Ω q_m' + Ω^2 q_m = B_m u,  y = C_m q_m (5)

Here B_m and C_m are the modal input and output matrices, while Ω and Z are diagonal matrices that can be derived from the M, D, and K matrices in the nodal model.

A linear time-invariant system can always be described by a constant coefficient, first-order matrix differential equation.

x' = A x + B u,  y = C x + D u (6)

Here u and y are the input and output vectors of the system, while B and C are the input and output gains, respectively. The D matrix represents feed-through, and in our case it is not used. The state of the system is characterized by the state variable x, and A reflects the dynamics of the system. Another way to look at a given system is to define its transfer function (G(s)) in the Laplace transform domain.

G(s) = C (sI - A)^-1 B (7)

To obtain a state-space representation of a mechanical structure already expressed in modal space, there is a straightforward choice for the state variable.

x = [x_1; x_2] = [Ω q_m; q_m'],  A = [0, Ω; -Ω, -2 Z Ω] (8)
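For a single structural mode, the modal state matrix of Equation (8) reduces to a 2x2 block whose eigenvalues are the familiar damped poles. A minimal sketch, with illustrative modal parameters (an 8 Hz mode with 2% damping, not actual LSST values), also shows how a settling-time estimate follows from the pole real part:

```python
import cmath

def modal_state_matrix(omega, zeta):
    # 2x2 state matrix for a single structural mode, following the modal
    # state variable choice of Equation (8): x = (omega*q_m, dq_m/dt).
    return [[0.0, omega], [-omega, -2.0 * zeta * omega]]

def poles(A):
    # Eigenvalues of a 2x2 matrix from its characteristic polynomial
    # s^2 - tr(A) s + det(A) = 0.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

omega = 2.0 * cmath.pi * 8.0   # illustrative: 8 Hz mode
zeta = 0.02                    # illustrative: 2% modal damping
p1, p2 = poles(modal_state_matrix(omega, zeta))

# The pole real part, -zeta*omega, sets the exponential decay rate; a common
# (2%-band) settling-time estimate is t_s ~ 4 / (zeta * omega).
t_settle = 4.0 / (zeta * omega)
```

This is the kind of reduced model behind the settling-time feasibility analysis mentioned in the abstract: the dominant low-damping modes set how quickly the structure rings down after a slew.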

Besides enabling a computationally efficient solution of the equation system, the state space representation also facilitates the application of linear system theory to structures, most conveniently directly coupling them to control systems operating from a subset of y (y_sensor) to a subset of u (u_control). In Figure 5, the force disturbance inputs are represented by u_dist, the performance output is another subset of y (y_perf), while y_set indicates the set point input.


Figure 5 General architecture to control structures, with the convenient transfer function representation of the structure

While the optical system is inherently non-linear in its response to mechanical perturbations (x), as described in Section 2.1, a linear small signal approximation can be derived around the operating point representing the well aligned system [12]. The exit pupil wavefront (OPD) at any field point can be expanded into an orthonormal annular Zernike basis set (z), which in turn can be approximated by an optical sensitivity (influence) matrix (A).

z = z_0 + A x (9)

The most important dynamic optical effect is the dynamic change in the telescope Line-of-Sight, i.e. image jitter. As it is

equivalent to exit pupil tip/tilt, it can also be calculated in this small signal framework.

This linear optical model is also useful to link thermal and other quasi-static perturbations to optical performance.
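In the optical feedback loop, Equation (9) is inverted to estimate the perturbation state from measured Zernike coefficients. A minimal noise-free sketch, with a made-up 2x2 matrix standing in for the real 19x50 per-field sensitivity matrices:

```python
# Sketch: using the linear optical model z = z0 + A x to estimate the
# perturbation state x from measured Zernike coefficients z.
# All numbers are illustrative stand-ins, not LSST sensitivities.

def solve_2x2(A, b):
    # Cramer's rule for a 2x2 linear system A x = b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

A = [[0.42, 0.05],    # dz_i/dx_j, illustrative values
     [-0.10, 0.30]]
z0 = [0.02, -0.01]    # intrinsic design aberrations
x_true = [0.5, -0.2]  # "actual" perturbation state

# Forward model: z = z0 + A x
z = [z0[i] + sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]

# Estimate: x_hat = A^-1 (z - z0); recovers x_true in this noise-free case.
x_hat = solve_2x2(A, [z[i] - z0[i] for i in range(2)])
```

In the real, over- or under-determined system the inversion is a regularized least-squares fit rather than an exact solve, but the structure of the estimate is the same.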

This section summarizes the image quality (size and shape) metrics used in the LSST Integrated Modeling Framework.

The basis of relevant and reliable image size performance allocation and estimate is a metric that

- Properly reflects the science capabilities and efficiency of the observatory,
- Can be calculated unambiguously for PSFs of any size and shape, while it
- Facilitates accurate combination of various performance components, i.e. a correct performance/error budget.

There is a metric, the normalized point source sensitivity (PSSN), which meets all three requirements [20]. Assuming PSF_a for the perfect system, where the source of image degradation is entirely the fiducial atmosphere, and PSF_as for the combined effect of the real system and the fiducial atmosphere, PSSN is the ratio of the square integral of these PSFs.

PSSN = ∫ PSF_as^2 / ∫ PSF_a^2 (10)

Equation (10) provides a unique and uniform algorithm to calculate the metric, independent of the actual shape of the PSF. The usual normalization of the PSF to unity ensures that the PSSN metric accounts for PSF shape effects only. Overall energy loss is accounted for in throughput.

By definition, PSSN is unity for the perfect system, and otherwise always smaller than 1; the larger the system contribution to image degradation, the smaller PSSN is.

0 < PSSN ≤ 1 (11)
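For the special case where both the atmosphere and the system contribute circular Gaussian PSFs, Equation (10) has a closed form, which makes a convenient sanity check. This is only a sketch with illustrative widths; the actual LSST fiducial atmosphere is von Kármán, not Gaussian.

```python
def pssn_gaussian(fwhm_atm, fwhm_sys):
    # For circular Gaussian PSFs the square integral is 1/(4*pi*sigma^2),
    # and convolution adds variances, so Equation (10) reduces to
    # PSSN = sigma_atm^2 / (sigma_atm^2 + sigma_sys^2).
    var_atm = (fwhm_atm / 2.355) ** 2  # FWHM = 2.355 sigma for a Gaussian
    var_sys = (fwhm_sys / 2.355) ** 2
    return var_atm / (var_atm + var_sys)

# Illustrative: a 0.6" atmosphere and a 0.4" system contribution.
value = pssn_gaussian(0.6, 0.4)  # -> 0.36 / 0.52 ~ 0.692
```

The closed form also makes the bounds of Equation (11) explicit: a vanishing system contribution gives PSSN = 1, and any broadening pushes it below 1.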

The PSSN metric is multiplicative with high fidelity for practical spatial frequency ranges and aberration strengths of the errors. Let's assume PSSN_t is the value characterizing the combined effect of a given number of errors with individual values of PSSN_i.

PSSN_t ≈ ∏ PSSN_i (12)

As reported in [20], Equation (12) is a strong and reliable basis for combining a large number of errors with small individual contributions to image quality degradation, i.e. for error budgeting.

The current LSST error budget assigns a PSSN of 0.8638 to the Telescope, 0.8146 to the Camera, and 0.9851 to the inherent aberration of the optical design, at the wavelength of 500 nm, assuming the fiducial atmosphere.
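Per Equation (12), these top-level allocations combine multiplicatively; a few lines of Python give the implied system-level value:

```python
# Combining the quoted top-level PSSN allocations per Equation (12).
budget = {"telescope": 0.8638, "camera": 0.8146, "design": 0.9851}

pssn_total = 1.0
for value in budget.values():
    pssn_total *= value
# pssn_total ~ 0.693: the expected system-level PSSN if every subsystem
# exactly consumes its allocation, under the fiducial atmosphere.
```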

Physical interpretation of the PSSN metric links it directly to the signal-to-noise ratio (SNR) of the background limited observation of an unresolved point source. Assuming n_eff is the effective number of pixels included in the observation to maximize the SNR, C is the total signal collected, and B is the sky background per pixel, the optimal SNR is expressed in Equation (13).

SNR = C / √(n_eff B) (13)

Here sensor noise is already omitted. The total signal corresponding to an SNR of 5 can be approximated from Equation (13) as C_5 = 5 √(n_eff B) for background limited observations. The corresponding star magnitude - limiting magnitude or image depth - in a given wavelength band depends on the integration time and the "instrumental zeropoint" (Zp) comprising the effects of system throughput and aperture size [21].

m_5 = Zp - 2.5 log10(C_5) = Zp - 2.5 log10(5) - 1.25 log10(n_eff) - 1.25 log10(B) (14)

As n_eff is directly related to the square integral of the PSF, n_eff = 1 / ∫ PSF^2 [22], the degradation of image depth can be approximated by the PSSN metric.

Δm_5 = 1.25 log10(PSSN) (15)

The relationship in Equation (15) enables approximate error budgeting directly in limiting depth. The current LSST error budget assigns limiting depth degradation (Δm_5) of 80 mmag to the Telescope, 111 mmag to the Camera, and 8 mmag to the inherent aberration of the optical design, at the wavelength of 500 nm, assuming the fiducial atmosphere.
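The PSSN and limiting depth allocations quoted above are mutually consistent, as a direct application of Equation (15) confirms (the function name is ours, for illustration):

```python
import math

def depth_degradation_mmag(pssn):
    # Equation (15): Delta m_5 = 1.25 log10(PSSN) in magnitudes (negative,
    # i.e. a depth loss); returned here as a positive degradation in mmag.
    return -1250.0 * math.log10(pssn)

# The PSSN allocations quoted earlier reproduce the quoted depth budget:
telescope = depth_degradation_mmag(0.8638)  # ~80 mmag
camera = depth_degradation_mmag(0.8146)     # ~111 mmag
design = depth_degradation_mmag(0.9851)     # ~8 mmag
```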

While FWHM (Full Width at Half Maximum) is an unambiguous term for a 1D curve, like a spectrum peak, it is not so for a 2D surface, like the PSF. At half maximum, the cross section of the PSF can be a complex curve with no obvious, unique "diameter". Consequently, there are many different ways to define and then estimate the FWHM of a PSF.

Generally accepted methods for estimating FWHM are based on defining a rotationally symmetric Gaussian PSF, which is equivalent to the given PSF according to some criterion.

- The criterion can be the "same RMS spot size" determined by geometric ray trace.
- Another criterion can be the "same encircled energy", in particular the same 80% encircled energy diameter (EE80). Using EE80 instead of the RMS spot size mitigates the effect of small but non-zero intensity values on the outskirts of the PSF, which would otherwise have an undesirable impact on the RMS spot size due to their large weights, proportional to r^2.

The criterion LSST is using is based on the "same image depth". The primary sensitivity metric in the LSST Science Requirements Document [5] is the single visit 5σ image depth, m_5, which in turn directly depends on n_eff. The image quality metric in the SRD is FWHM_eff, an "equivalent Gaussian width" defined directly in terms of n_eff.

FWHM_eff = 0.663 √n_eff arcsec (16)

Using FWHM_eff facilitates the approximation of the overall image quality through the quadrature sum of atmospheric and system FWHM values.

FWHM_eff^2 = FWHM_atm^2 + FWHM_sys^2 (17)

However, as described in Section 2.2, the LSST fiducial atmosphere assumes an outer scale, resulting in a von Kármán type PSF as opposed to a Gaussian. Its deviation from the Gaussian shape, as shown in Figure 4, can be accounted for by a correction coefficient in Equation (17), as demonstrated in Figure 6.

FWHM_eff = 1.086 √(FWHM_atm^2 + FWHM_sys^2) (18)

Equation (18) enables approximate error budgeting in FWHM_eff. The current LSST error budget assigns FWHM contributions of 0.25" to the Telescope, 0.3" to the Camera, and 0.08" to the inherent aberration of the optical design, at the wavelength of 500 nm.
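Assuming the budget terms combine in quadrature per Equation (17), a short calculation gives the implied totals; the von Kármán correction coefficient is omitted here, and the helper name is ours:

```python
import math

def quad_sum(*terms):
    # Quadrature combination of FWHM contributions, per Equation (17).
    return math.sqrt(sum(t * t for t in terms))

# Quoted budget terms (arcsec), combined with the 0.6" fiducial atmosphere.
fwhm_sys = quad_sum(0.25, 0.30, 0.08)  # ~0.40" total system contribution
fwhm_tot = quad_sum(0.6, fwhm_sys)     # ~0.72" with the fiducial atmosphere
```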

Figure 6 A numerical test showing the bias in calculating FWHM_eff using Equation (17). In this test, the atmosphere is represented by a von Kármán profile with FWHM = 0.6" at 500 nm, and the PSF due to the system is represented by a Gaussian.

PSF ellipticity is of great importance to the success of the LSST weak lensing science program. While image ellipticity

can be traced back to system perturbations, the overall system ellipticity cannot be allocated to those perturbations, i.e.

there is no simple way to aggregate “ellipticity components” into a resultant ellipticity. Consequently, our approach is to

calculate overall ellipticity, including the effects of the fiducial atmosphere, at numerous field points.

Our ellipticity is defined as e = |χ|, with χ being the complex ellipticity.

χ = (Q_11 - Q_22 + 2i Q_12) / (Q_11 + Q_22) (19)

Here Q_11, Q_22, and Q_12 are the second moments of the PSF shape. This definition is equivalent to the one based on the axis ratio (q).

e = (1 - q^2) / (1 + q^2) (20)
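The equivalence of the two definitions is easy to verify for an elliptical Gaussian PSF aligned with the axes, for which the second moments are Q_11 = σ_x^2, Q_22 = σ_y^2, Q_12 = 0 (the widths below are illustrative):

```python
# Sketch: the two ellipticity definitions above agree for an axis-aligned
# elliptical Gaussian PSF with standard deviations sigma_x > sigma_y.

def ellipticity_from_moments(q11, q22, q12):
    # Equation (19): magnitude of the complex ellipticity chi.
    chi = complex(q11 - q22, 2.0 * q12) / (q11 + q22)
    return abs(chi)

def ellipticity_from_axis_ratio(q):
    # Equation (20): ellipticity from the axis ratio q = b/a.
    return (1.0 - q * q) / (1.0 + q * q)

sigma_x, sigma_y = 0.30, 0.24  # illustrative axis widths (arcsec)
e_moments = ellipticity_from_moments(sigma_x**2, sigma_y**2, 0.0)
e_axis = ellipticity_from_axis_ratio(sigma_y / sigma_x)  # same value
```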

In order to suppress the effect of the far tails of the PSF, we use a circular Gaussian weighting function with FWHM of 0.6". The reason for this suppression is that in weak lensing analyses, the ellipticity is defined by the center part of the PSF. The far tails of the PSF are typically indistinguishable from the background noise.

When we estimate ellipticity for compliance analysis, the system PSF is always convolved with a circularly symmetric atmospheric PSF, representing the fiducial atmosphere.

The LSST Science Requirements Document [5] prescribes a median ellipticity of less than 4% across the field of view

for unresolved point sources.

To maintain consistently good image quality, LSST is implementing an Active Optics System (AOS). It controls the

rigid body positions of M2 and the Camera relative to M1M3, as well as the shape of M1M3 glass substrate and M2.

Optical feedback is provided by 4 wavefront sensors at the four corners of the focal plane. The hardware components of

AOS, and the general concepts of its operation are summarized in [9], while the details of the optical feedback loop,

together with the environmental and operational inputs are described in [10]. Since these publications, we have included several major improvements in the underlying model:

Control simulations now provide ellipticity estimates, together with the image quality output. As sensor height

is a critical contributor to ellipticity, in particular in the presence of astigmatism, the model now includes the

focal plane deviations from nominal. While these deviations will be deterministic for the as-built system, at this

point sensor height is considered a statistical variable. Its distributions at the 31 evaluation points on the focal

plane, including 30 Gaussian Quadrature points and the field center are shown in Figure 7.

The updated model calculates PSSN and FWHM as the relevant image quality metrics (see Section 3.1). The control algorithm optimizes the PSSN across the field of view. The output metric of performance is the mean FWHM (FWHMeff) across the field of view, as determined by the Gaussian Quadrature method, using 31 well defined field points with proper weights [15]. The overall performance of the AOS in these metrics is shown in Figure 8. Note that the blue horizontal line at FWHMeff = 250 mas is the combined error budget for all the errors included in this particular simulation [7].
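The Gaussian Quadrature field averaging can be sketched as a simple weighted mean over the sampling points. The point values and weights below are placeholders, not the actual LSST field points or weights from [15]:

```python
import numpy as np

def field_average(values, weights):
    """Weighted field average of a metric sampled at Gaussian
    Quadrature field points (illustrative; the real 31 LSST points
    and weights follow the scheme in [15])."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * values) / np.sum(weights)

# Toy example: 31 sampling points with uniform weights.
fwhm = np.full(31, 0.25)           # arcsec, hypothetical per-point FWHM
w = np.ones(31)
print(field_average(fwhm, w))      # 0.25
```

The same routine applies to any scalar metric (FWHM, PSSN, ellipticity) sampled at the 31 field points.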

Instead of the pre-calculated arroyo atmospheric phase screens, our simulations now use the atmospheric model

included in PhoSim, as described in Section 2.2. Besides properly accounting for the outer scale of the

atmosphere, it also realistically models the time evolution (correlation) of the atmosphere from one exposure to

the other.

Figure 7 Sensor (CCD chip) height probability distributions for the 31 sampling points on the focal plane used for

estimating optical performance [23]

Figure 8 Overall AOS response showing rapid convergence in both FWHM and ellipticity performance

LSST utilizes four wavefront sensors located at the four corners of its focal plane. As outlined above, corrective actions

determined from information derived from the four wavefront sensors are fed to the AOS to maintain alignment and

surface figure on the mirrors. The LSST wavefront sensing software is described in detail in [24]. In addition to the

algorithms used, the paper also describes a set of very extensive tests using simulated images. The validations using real

on-sky data were recently performed using both wavefront sensor images and out-of-focus focal plane images from the

Dark Energy Camera [25]. As we get close to the manufacturing of the corner rafts, it is important to optimize the focal

distance separation of the wavefront sensors.

The optimization of the separation between the extra and intra focal CCD chips for the curvature wavefront sensor needs

to take into account many factors, such as the caustic zone, the atmospheric smearing of the wavefront information in the

defocused images, the linearity of the algorithm, the availability of bright stars, the signal-to-noise ratio due to the

spreading of intensity over many pixels, and our ability to de-blend overlapping star images. Due to the complex

interplay between all these various factors, the best way to optimize the wavefront sensor separation is to perform a trade

study using our integrated model.

Ideally, the trade study would involve a wavefront sensor image pre-processing pipeline. The pipeline would take raw

wavefront sensor half-chip images as the input, and output the donut images that are ready to be used by the wavefront

sensing algorithms. The processing involved includes instrument signature removal, source identification, source filtering, de-blending, etc. The analysis would include the following steps:

(1) obtain catalogs with various stellar densities,
(2) with a wide range of operational parameters, raytrace through the atmosphere and the optical model to form half-chip images on the wavefront sensors,
(3) run the half-chip images through the wavefront sensor pre-processing pipeline,
(4) run the wavefront sensing software on the processed intra- and extra-focal donut images, and
(5) compare wavefront sensing performance with different sensor offsets.

Because the wavefront sensor image pre-processing software is still to be written, step (3) cannot be done. The trade

study we describe below involves steps (2), (4), and (5). A separate study on the source availability using a bright star

catalog is currently in progress, and not included in this paper.

Our analysis approach for this trade study is to run PhoSim in Monte Carlo mode with a wide range of operational parameters to create single-star wavefront images, and examine how the wavefront sensor offset affects the performance of the wavefront

sensing algorithm. These parameters include the atmospheric seeing, the state of the telescope optics, the optical band,

the exposure time, and the position of the wavefront sensor. For each combination of these parameters, we take 100

consecutive exposures, which give us 100 pairs of intra- and extra-focal images. We then run the wavefront sensing

software on the image pairs to get the wavefront solutions. We use the deviation of the mean of the 100 measurements

from the true optical wavefront and the variance of the measurements to quantify the performance of wavefront sensing.
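This bias/scatter bookkeeping can be sketched as follows, with synthetic Gaussian stand-ins for the 100 wavefront solutions (the real inputs are PhoSim image pairs processed by the wavefront sensing software):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 100 wavefront solutions: rows are
# exposures, columns are Zernike coefficients in nanometers.
truth = np.array([120.0, -40.0, 65.0])              # "Zemax truth", illustrative
measurements = truth + rng.normal(0.0, 50.0, (100, 3))

bias = measurements.mean(axis=0) - truth            # deviation of the mean from truth
scatter = measurements.std(axis=0, ddof=1)          # per-Zernike standard deviation

print(bias, scatter)
```

The mean over 100 exposures averages down the atmospheric scatter by roughly a factor of ten, which is why the bias can be assessed much more precisely than any single measurement.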

One example of 100 wavefront measurements at the center of the upper right wavefront sensor is shown in Figure 9. For

this example, the wavefront sensor is offset by ±2 mm. The atmospheric seeing is 0.60”. The telescope has its

secondary mirror decentered by 0.5 mm, with the rest of the telescope degrees of freedom unperturbed. The exposure

time is 15 seconds. The sources have a flat spectral energy distribution (SED), while the telescope is imaging in the r-band. It can be seen that the mean of the 100 measurements agrees with the truth very well, with the largest standard

deviation on individual Zernikes being about 50 nanometers. The error bars in Figure 9 are mostly due to the atmosphere

and detector charge diffusion. The source being multi-chromatic also contributes.

Figure 9 Wavefront simulation results (upper panel) and their mean and standard deviation (lower panel) from 100 image pairs obtained from 100 consecutive PhoSim exposures. The blue curves in both panels are the true wavefront of the optical system. See the text for the parameter values used in these simulations.

Figure 10 Large sensor offset enables better wavefront sensing performance (without taking into account source contamination). Left: deviation of the mean from the truth wavefront; Right: standard deviations of the groups of 100 measurements. The sources used in this test are monochromatic stars with a wavelength of 770 nm.

Figure 10 shows that the deviation of the measured mean from the optical truth continually decreases for large sensor offsets. Each data point on the left plot represents the deviation of the mean for 100 wavefront measurements. The

plot on the right shows the size of the error bars for those points on the left. As the atmosphere gets worse, the deviation

gets worse, but still improves with increased sensor offset. While the error bars of these simulations are not much

affected by the sensor offset, they generally get worse with increased atmospheric seeing. The tests shown in Figure 10

are again with a telescope whose secondary mirror has been decentered by 0.5 mm while other degrees of freedom stay

unperturbed. The wavefront sensor is located at the upper right corner of the field.

Another wavefront sensor trade study we have performed recently concerns the positioning tolerance of the wavefront sensor midpoint relative to the best-fit plane of the science sensors. By default, a curvature wavefront sensor

measures the wavefront aberration at its midpoint, i.e., exactly half way between the intra- and extra-focal chips. When

there is an imperfection in the positioning of the wavefront sensor midpoint, the measured wavefront is in reference to a

point a little above or below the focal plane. Therefore, a certain amount of additional defocus (Z4) is introduced in the

wavefront due to the midpoint offset. Once we know the offset, we can calibrate out the additional defocus from the

measured wavefront. However, there is the question of how this additional defocus affects our ability to measure other

wavefront aberrations. On one hand, the increased defocus could affect the linearity of the wavefront sensors. On the

other hand, if the additional defocus puts us into the caustic zone, our transport-of-intensity equation (TIE) based curvature wavefront sensing algorithm could potentially break down. Considering that in the current nominal design the wavefront sensor offset is ±2 mm, while a typical wavefront sensor midpoint offset is 15-25 μm, a caustic breakdown is quite unlikely. If the algorithm linearity indeed becomes an issue, most likely we would just have a different slope, in which case a different gain or more iterations in the control loop may be needed for the active optics system to converge. Because a midpoint tolerance of 15 μm requires much more complex engineering procedures than one of 25 μm, a trade study was carried out to understand how the positioning of the midpoint affects the wavefront sensing performance.
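For a rough sense of scale, the defocus introduced by a midpoint offset can be estimated with the standard paraxial relation a4 = Δz / (16 √3 N²) for an f/N beam. Both this formula and the approximate f/1.23 system focal ratio are our assumptions for illustration, not values quoted in the paper:

```python
import math

def defocus_rms_from_midpoint_offset(dz_m, fnum):
    """RMS defocus coefficient a4 (Noll Z4) introduced by a
    longitudinal focal shift dz, in the small-aberration paraxial
    approximation: a4 = dz / (16 * sqrt(3) * fnum**2).
    Sign conventions vary; magnitude only here."""
    return dz_m / (16.0 * math.sqrt(3.0) * fnum ** 2)

fnum = 1.23                        # approximate LSST system f-number (assumption)
for dz_um in (15.0, 25.0):
    a4_nm = defocus_rms_from_midpoint_offset(dz_um * 1e-6, fnum) * 1e9
    print(f"midpoint offset {dz_um:4.0f} um -> a4 ~ {a4_nm:5.0f} nm RMS")
```

A few hundred nanometers of RMS defocus for a 15-25 μm offset is small compared with the tens of micrometers of intentional defocus from the ±2 mm sensor offset, consistent with a caustic breakdown being unlikely.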

The analysis approach for the midpoint positioning trade study is similar to the one for the sensor offset. We just need

one more parameter, the midpoint offset. Figure 11 shows how the measured wavefront changes with M2 decenter for four different wavefront sensor midpoint offsets: 0, 15, 20, and 25 μm. Defocus (Z4), astigmatism (Z5, Z6), and coma-x (Z8) are

the main Zernike terms that are impacted by M2 decenter. The other Zernikes are not shown here. Each data point

represents the mean and standard deviation of 100 measurements. Once the actual offset is measured, a correction term in Z4 needs to be added to the wavefront solutions. It is seen that up to the point where the M2 decenter-induced astigmatism increases to about 700 nm, there is no visible change in the wavefront sensing algorithm performance. The

same applies to coma. Based on this trade study, a wavefront sensor midpoint 25 μm off from the best-fit focal plane is

tolerable. The requirement on the midpoint positioning tolerance has been relaxed accordingly.

Figure 11 Measured wavefront defocus (Z4), astigmatism (Z5, Z6) and coma-x (Z8) as functions of M2 x-decenter and wavefront sensor midpoint offset. The solid lines are the Zemax truth. For Z5, Z6, and Z8, because the measured means and standard deviations are almost identical between the different midpoint offsets, a small offset on the x-axis has been introduced to avoid overlapping of the data points.

As reported in [26] and [27], the LSST M1M3 mirror surface features narrow, unique trenches called crows’ feet. They were created in the polishing process by breaking open small air bubbles in the glass. The original concern that initiated detailed compliance evaluation was the impact of these high spatial frequency features (i) on energy loss in the core of the PSF (sensitivity loss), as well as (ii) on increased background around bright stars due to scattered light.

To assess the full impact of the crows' feet it was necessary to have a map with higher resolution than the optics shop

interferometer could provide. The high resolution surface map was synthesized from the interferometer test data and

much higher resolution, local SPOTS (Slope-measuring Portable Optical Test System) measurements [27]. The

synthesized surface accounted for all crows’ feet with a visual length of at least 5 mm. For image quality analysis involving the core of the PSF, we used a 4053 x 4053 synthetic surface map for M3, corresponding to an M3 surface sampling of 1.25 mm. The M1 synthetic surface was 3148 x 3148, with a pixel size of 2.67 mm. The scattering analysis

performed by Photon Engineering utilized an 8041 x 8041 M3 surface map provided by LSST.

To get the Point Spread Functions (PSFs), we used the LSST optical model. We extracted 2048 x 2048 wavefront (OPD) maps from the model at all 31 field positions, and carried out the Fourier Transform in Matlab to get high-resolution PSF image stamps. The field distribution of sensitivity loss due to energy loss in the core of the PSF is shown in Figure 12. The

overall mean sensitivity loss due to the combined effects of polishing errors and crows’ feet is estimated by the well-known Gaussian Quadrature method, using the 31 field points: (i) PSSN of 0.9113, with 0.9782 due to crows’ feet; (ii) Δm of 0.051 mag, with 0.012 mag due to crows’ feet; and (iii) FWHM of 0.206”, with 0.102” due to crows’ feet. The sensitivity loss due to crows’ feet could be regained with an approximately 2.2% longer exposure (see Equation (14)).
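The quoted 2.2% can be checked directly, assuming the standard background-limited interpretation of PSSN in which the required exposure time scales as 1/PSSN [20] (the paper's own Equation (14) is not reproduced in this section):

```python
# Check of the quoted exposure-time penalty, assuming the
# background-limited relation t_required ∝ 1/PSSN.
pssn_crows_feet = 0.9782
extra_exposure = 1.0 / pssn_crows_feet - 1.0
print(f"{extra_exposure:.1%}")    # ~2.2%
```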

The additional loss due to crows’ feet is below the image quality design margin of the telescope. These results are

consistent with an independent image quality study performed by Martin et al. using the same synthesized M1M3 surface [27].

Figure 12 Left: PSSN at 31 field points with M1M3 polishing errors and crows’ feet, but without contributions from the optical design. Both the color and size of the filled circles represent PSSN. The GQ of PSSN is 0.9113, of which 0.9782 is attributable to the crows’ feet. Right: The PSSN histogram for the 31 field points shown on the left.

Figure 13 The radial profile of the PSF with and without crows’ feet, indicating the background increase between ~10” and ~100” due to scattered light (red). The logarithmic scale masks the loss of energy in the core of the PSF. The magenta points represent real PSF measurements [28] that are included here as a sanity check; while they represent significantly worse image quality (FWHM > 1”) than expected for LSST, the wide tail of the PSF (aureole) is relevant in comparison to the estimated LSST PSF.

As indicated in Figure 13, background surface brightness increases around point sources at radii ~10” to ~1’ due to additional scattering from the crows’ feet. At the peak, about 30” from the source, the increase in surface brightness is about 2 mag. Close to a 10 mag star, in the r band this corresponds to a change from ~28 mag/arcsec² to ~26 mag/arcsec². At 60” from the star, the surface brightness drops back to 28 mag/arcsec², which is roughly the surface brightness limit in LSST co-added data. Thus, for a 10 mag star, the data will suffer from a shallower limit for point sources and faint surface brightness features (e.g. tidal streams around galaxies) within 3 arcmin², instead of the original ~1 arcmin². There are about 170,000 stars brighter than 10th magnitude in the LSST survey area, adding up to about 150 deg² with reduced sensitivity. This is about 0.8% of the total survey area (18,000 deg²).
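The affected-area figures above follow from straightforward arithmetic, which can be verified with the numbers quoted in the text:

```python
# Rough check of the affected-area estimate quoted in the text.
n_bright = 170_000           # stars brighter than 10 mag in the survey area
area_per_star = 3.0          # arcmin^2 of reduced depth around each bright star
survey = 18_000.0            # deg^2, total survey area

affected_deg2 = n_bright * area_per_star / 3600.0    # arcmin^2 -> deg^2
print(affected_deg2, affected_deg2 / survey)         # ~142 deg^2, ~0.8%
```

The exact result, ~142 deg², rounds to the ~150 deg² quoted in the text; the fractional loss is ~0.8% either way.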

Figure 14 Left: Ellipticity at 31 field points, with both the color and size of the filled circles representing ellipticity. The

GQ of ellipticity is 0.59%. Right: The ellipticity histogram for the 31 field points shown on the left. Both M1M3 polishing

errors and crows’ feet are included for both plots.

Figure 14 shows the ellipticity at the 31 field points (left) and the histogram (right) with both polishing errors and crows’

feet included on M1 and M3. In order to benchmark against the SRD [5], all ellipticity results shown have been convolved with the fiducial atmosphere, which is generated using the von Kármán model with 0.6” seeing. The Gaussian Quadrature of ellipticity over the entire LSST field is 0.59%. The reported ellipticity values do not account for sensor (CCD) piston, which is a major contributor to overall system ellipticity.

One of the critical requirements for LSST is to slew quickly from one observation to the next. Considering the two 15 second exposures of a visit (observation), with a shutter opening/closing time of 1 second each and a readout time of 2 seconds, the required 5 second slew time results in a 77% open shutter efficiency (30 seconds / 39 seconds).
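The timing budget behind the 77% figure can be laid out explicitly:

```python
# Visit timing budget behind the 77% open shutter efficiency.
exposure = 2 * 15.0     # two 15 s exposures
shutter = 2 * 1.0       # shutter open + close, 1 s each
readout = 2.0           # readout between exposures
slew = 5.0              # slew to the next field

total = exposure + shutter + readout + slew      # 39 s per visit cycle
efficiency = exposure / total
print(f"{efficiency:.0%}")                       # 77%
```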

However, moving the telescope structure with such a speed and corresponding acceleration inevitably excites at least

the first few structural modes, which in turn leads to residual vibrations after slew. While the “sturdy” design of the

Telescope Mount Assembly (TMA) ensures fairly high resonant frequencies (locked rotor frequency of 7.4 Hz) [29], the

original assumption was that the fast settling time required can be achieved only by introducing structural damping in

addition to the natural damping of 1-2% of a steel structure.

The original TMA design included large tuned-mass dampers at the top end of the structure, where the displacements of

the excited modes were expected to be the largest. However, the increased modal mass jeopardized achieving high

enough resonant frequencies. The competing requirements of reducing the energy pumped into structural modes, while

improving the damping of those same modes warranted a trade study.

The trade study carried out by LSST Project Systems Engineering investigated the improvement of settling time due to (i) minimum jerk command signals and (ii) the damping efficiency of a well-designed alt/az control system. Jerk is the third derivative of the position signal. The minimum jerk trajectory was determined by optimizing the coefficients of a polynomial 7th order in time. The optimal coefficients set velocity, acceleration, and jerk to zero at the beginning and end of the slew. The minimum jerk reference torques are shown in Figure 16.
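A common closed form with these boundary conditions is the normalized 7th-order polynomial s(τ) = 35τ⁴ - 84τ⁵ + 70τ⁶ - 20τ⁷, whose velocity, acceleration, and jerk all vanish at τ = 0 and τ = 1. The sketch below uses that polynomial as an illustration; the paper's optimized coefficients are not given here, so this exact form is our assumption:

```python
import numpy as np

def minimum_jerk_position(t, T, x0, x1):
    """Slew profile from x0 to x1 over duration T, using a 7th-order
    polynomial in normalized time with zero velocity, acceleration,
    and jerk at both endpoints (illustrative closed form)."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 35 * tau**4 - 84 * tau**5 + 70 * tau**6 - 20 * tau**7
    return x0 + (x1 - x0) * s

t = np.linspace(0.0, 3.0, 7)
pos = minimum_jerk_position(t, 3.0, -3.5, 0.0)   # degrees, 3 s slew
print(pos[0], pos[-1])                            # -3.5 0.0
```

Because the trajectory is smooth up to the third derivative, the reference torques derived from it contain far less energy near the structural resonances than a trapezoidal velocity profile would.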

A straightforward control system capable of smoothing structural settling is shown in Figure 15. The corresponding

dynamic model was built in Matlab and Simulink by using the vendor supplied finite element model of the TMA

pointing to 45˚ in elevation [30]. The nodal displacements were linked to optical performance through a linear optical

model, as described in Section 2.3. This linear optical model enabled the direct monitoring of image jitter, i.e. the time

history of the Line-of-Sight.

Figure 15 Block diagram of the alt/az control system used to predict settling characteristics of the Telescope Mount

Assembly (TMA); feed forwarding the “reference torques” improves the slewing performance of the TMA.

Figure 17 shows the correction torques generated by the PID controllers to force the structure to properly follow the

minimum jerk position command and attenuate structural resonances.
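The correction torque at each time step comes from a standard PID law on the position error; a minimal discrete-time sketch is below, with arbitrary illustrative gains rather than the tuned LSST alt/az values:

```python
# Minimal discrete-time PID controller of the kind used in the
# settling simulations (gains are arbitrary illustrations).
def make_pid(kp, ki, kd, dt):
    state = {"i": 0.0, "e_prev": 0.0}
    def step(error):
        state["i"] += error * dt                    # integral of the error
        d = (error - state["e_prev"]) / dt          # finite-difference derivative
        state["e_prev"] = error
        return kp * error + ki * state["i"] + kd * d
    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.001)
torque = pid(0.01)    # correction command for a 0.01 deg position error
print(torque)
```

In the actual simulation this loop closes around the finite element model of the TMA, and the derivative term is what provides the effective damping of the structural modes.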

Figure 16 Minimum jerk reference torques moving

the telescope from -3.5˚ to 0˚ in 3 seconds both in

elevation and azimuth

Figure 17 Control torques added to the reference

torques while moving the telescope from -3.5˚ to 0˚

in 3 seconds both in elevation and azimuth

Figure 18 Settling of the telescope Line-of-Sight after moving the telescope from -3.5˚ to 0˚ in 3 seconds both in elevation

and azimuth. While the structure has no additional mechanical dampers, the telescope Line-of-Sight settles into the required

10 mas range immediately after slewing.

The results shown in Figure 18 served as an existence proof that a minimum jerk trajectory combined with

straightforward PID controllers can achieve the required <10 mas image jitter in a couple of seconds settling time. The

TMA vendor (Tekniker) designed and simulated a more complex control system to damp the relevant structural modes

and achieved similarly good settling times [30]. Tekniker eventually used the linear optical model provided by LSST

Project Systems Engineering.

LSST Project Systems Engineering developed a comprehensive simulation framework to bridge the gap between

engineering simulations (structural and thermal Finite Element Analyses, control models, fabrication and alignment

tolerance stack-ups, and Computational Fluid Dynamics analyses) and key optical system performance measures: image

size and shape. The framework constitutes complementary, matched tools that can be deployed in conjunction with each

other, as well as individually. It is integrated in the sense that it addresses all aspects of the system: structure, control,

and optics.

While some aspects of the framework rely on existing science simulation tools, it focuses on the high fidelity

representation of the system that the LSST construction project will deliver. This high fidelity system representation can

eventually be migrated to the end-to-end science simulators capable of providing full focal plane images as potential

inputs to the data processing pipeline.

The developed tool set is also essential for

Early verification and compliance analysis

System verification, where the actual requirement cannot be directly tested

System integration and troubleshooting to predict behavior

Commissioning to predict the outcome of commissioning activities

As Section 4 demonstrates, the tools are extensively used for trade studies, for evaluating change and deviation requests,

as well as for compliance assessments. While the framework meets most project and systems engineering needs, further

developments are certainly expected to
(i) expand the chromatic capabilities of the optical model,
(ii) include validated sensor models,
(iii) implement and test the wavefront sensor pre-processing pipeline, and
(iv) further optimize the AOS control strategy.

This material is based upon work supported in part by the National Science Foundation through Cooperative Agreement

1258333 managed by the Association of Universities for Research in Astronomy (AURA), and the Department of

Energy under Contract No. DE-AC02-76SF00515 with the SLAC National Accelerator Laboratory.

Additional LSST

funding comes from private donations, grants to universities, and in-kind support from LSSTC Institutional Members.

[1] S. M. Kahn, “Final design of the Large Synoptic Survey Telescope,” Proc. SPIE 9906, (2016).
[2] W. Gressler, “LSST Telescope and Site Status,” Proc. SPIE 9906, (2016).
[3] S. M. Kahn, N. Kurita, D. K. Gilmore et al., “Design and development of the 3.2 gigapixel camera for the Large Synoptic Survey Telescope,” Proc. SPIE 7735, (2010).
[4] M. Juric, J. Kantor, K.-T. Lim et al., “The LSST Data Management System,” ASP Conf. Ser. arXiv:1512.07914, (2016).
[5] Z. Ivezic and the LSST Science Collaboration, [LSST Science Requirements Document] LSST, LPM-17 (2011).
[6] Z. Ivezic, [LSST high-level performance metrics] LSST, Document-17254 (2013).
[7] A. J. Connolly, G. Z. Angeli, S. Chandrasekharan et al., “An end-to-end simulation framework for the Large Synoptic Survey Telescope,” Proc. SPIE 9150, (2014).
[8] G. Z. Angeli, K. Vogiatzis, D. G. MacMynowski et al., “Integrated modeling and systems engineering for the Thirty Meter Telescope,” Proc. SPIE 8336, (2012).
[9] D. Neill, G. Z. Angeli, C. Claver et al., “Overview of the LSST Active Optics System,” Proc. SPIE 9150, (2014).
[10] G. Z. Angeli, B. Xin, C. Claver et al., “Real Time Wavefront Control System for the Large Synoptic Survey Telescope (LSST),” Proc. SPIE 9150, (2014).
[11] J. R. Peterson, J. G. Jernigan, S. M. Kahn et al., “Simulation of astronomical images from optical survey telescopes using a comprehensive photon Monte-Carlo approach,” The Astrophysical Journal Supplement Series 218 (14), (2015).
[12] G. Z. Angeli and B. Gregory, “Linear optical model for a large ground based telescope,” Proc. SPIE 5178, (2003).
[13] R. J. Noll, “Zernike polynomials and atmospheric turbulence,” Journal of the Optical Society of America 66(3), 207 (1976).
[14] V. N. Mahajan, “Zernike annular polynomials for imaging systems with annular pupils,” Journal of the Optical Society of America 71, 75-85 (1981).
[15] G. W. Forbes, “Optical system assessment for design: numerical ray tracing in the Gaussian pupil,” Journal of the Optical Society of America A 5(11), 1943-1956 (1988).
[16] A. A. Tokovinin, “From differential image motion to seeing,” Publications of the Astronomical Society of the Pacific 114, 1156-1166 (2002).
[17] P. Martinez, J. Kolb, M. Sarazin et al., “On the difference between seeing and image quality: when the turbulence outer scale enters the game,” The Messenger (141), 5-8 (2010).
[18] M. Boccas, [Technical note: CP seeing and GS IQ] Gemini, (2004).
[19] W. Gawronski, [Dynamics and control of structures (a modal approach)] Springer-Verlag, (1998).
[20] B.-J. Seo, C. Nissly, G. Z. Angeli et al., “Analysis of normalized point source sensitivity as a performance metric for large telescopes,” Applied Optics 48(31), 5997-6007 (2009).
[21] Z. Ivezic, R. L. Jones, and R. Lupton, [LSST Photon rates and SNR calculations] LSST, LSE-40 (2010).
[22] I. R. King, “Accuracy of measurement of star images on a pixel array,” Publications of the Astronomical Society of the Pacific 95, 163-168 (1983).
[23] A. P. Rasmussen, [Sensor height distribution] personal communication, (2014).
[24] B. Xin, C. Claver, M. Liang et al., “Curvature wavefront sensing for the Large Synoptic Survey Telescope,” Applied Optics 54(30), 9045-9054 (2015).
[25] B. Xin, A. Roodman, G. Z. Angeli et al., “Comparison of LSST and DECam wavefront recovery algorithm,” Proc. SPIE 9906, (2016).
[26] J. Sebag, W. Gressler, M. Liang et al., “LSST Primary/Tertiary monolithic mirror,” Proc. SPIE 9906, (2016).
[27] H. M. Martin, R. P. Angel, G. Z. Angeli et al., “Manufacture and final tests of the LSST monolithic primary/tertiary mirror,” Proc. SPIE 9912, (2016).
[28] I. R. King, “The profile of a star image,” Publications of the Astronomical Society of the Pacific 83, 199 (1971).
[29] S. Callahan, W. Gressler, S. J. Thomas et al., “Large Synoptic Survey Telescope mount final design,” Proc. SPIE 9906, (2016).
[30] J. Sebag, J. Andrew, G. Z. Angeli et al., “LSST Telescope Modeling Overview,” Proc. SPIE 9911, (2016).