Sunday, September 25, 2011

Study of Hypervelocity Impact Influence on Spacecraft Computer Operations


Computer boxes are needed for on-board data processing/handling (OBDH), control, monitoring, telecommunication (TC), telemetry (TM), avionics and payload equipment. Computers account for 20%-40% of a satellite bus's volume.
Computer boxes contain Printed Circuit Boards (PCBs) with analogue and digital components, capacitors, inductors, resistors and microchips, enclosed in a milled aluminium box about 2 mm thick for reasons of electromagnetic compatibility and radiation shielding. The criticality of electronic
hardware is slightly reduced by
the fact that most electrical components
are redundant. Moreover, the delicate
electronics are shielded passively by
their own housing. However, the complete
failure of an e-box will result at
least in complications until the redundant
system has taken over, if a redundant
system exists. Otherwise, failure of an e-box could mean the loss of a complete subsystem, eg the OBDH, TC or TM, with potentially catastrophic consequences for a mission. During the hypervelocity impact tests, the computer boxes were operating in what is considered normal mode, performing basic read and write operations. The
observed failure modes were temporary
failure and permanent failure. The temporary
failures caused interruptions in
the operation of the processor, followed
by nominal operation a few milliseconds
later. The temporary failures are assumed to be related to penetrating, conductive dust-like fragments causing transient shorts. Any temporary failure, ie a temporary loss of operational performance of electronic components, may manifest itself to the systems operator as
an in-flight anomaly. Such in-flight
anomalies, including faulty data transmission
and ‘ghost commands’, have
been reported by spacecraft operators
and may possibly be explained by hypervelocity
impacts. The permanent failures
manifested as sudden loss of supply
voltage or loss of nominal operation of
the computer. Figure 3 shows a PCB with severe impact damage (memory chip, resistors and capacitors removed, deposits of metallic spray in various locations) together with the corresponding CPU signals.
This study was a first step towards a better understanding of the vulnerability of spacecraft equipment to hypervelocity impacts. Considerable effort is still needed, especially on the experimental side, to generate a comprehensive picture of all the effects involved. However, the investigations performed have already led to a substantial enhancement of knowledge that can now be exploited by spacecraft designers. Among other uses, the work can help spacecraft operators explain hitherto unexplained equipment malfunctions in satellite missions.

Investigation of Data Transmission Degradation within Electrical Harnesses


The main functions of harnesses on board satellites are power distribution and data transmission. Harnesses on board spacecraft are typically arranged in bundles and can be routed along several paths throughout the spacecraft. Electrical harness can occupy large areas of the inner surfaces of the satellite structure wall, and on a typical spacecraft its total weight can amount to several percent of the overall spacecraft weight. Harnesses are critical primarily because they are often located just behind the satellite structure wall. An impacting particle which penetrates the spacecraft structure may endanger unprotected harness, since the impact fragments are dispersed into a ‘spray cone’ that can hit and severely damage large parts of it. Each
harness submitted to hypervelocity impact testing consisted of several operating power and twisted-pair data cables and one radiofrequency (RF)
line, transmitting a 9.35 GHz signal. An
example of data transmission measurements
is shown in Figure 2, where the
differential transmission method is used.
As can be seen, there are temporary data transmission errors lasting several tens of microseconds, after which the cables return to nominal operation. Larger impact energies can lead to more violent damage, causing longer temporary perturbations or even permanent failure of operation, eg the severing of cables. Such damage can be expected to impair the functioning of the entire spacecraft.
The larger the stand-off between structure wall and harness, the lower the probability of failure. Therefore, where feasible, harnesses should be moved away from structure walls. If additional spacing cannot be realised, wrapping the harness in a moderate amount of protective fabric, such as Nextel or Kevlar, should dramatically improve the protection
performance. NASA has followed
such procedures successfully for ISS
harnesses routed outside the manned
modules.

Experimental Investigation of the Vulnerability of Spacecraft Equipment


Consequently, measures to protect
spacecraft against space debris are being
investigated all over the world. Most of
the studies concentrate on reducing the
vulnerability of spacecraft by introducing
external shielding. These studies
ignore the intrinsic impact protection
capability of the equipment under consideration.
To overcome this shortcoming,
ESA funded work to investigate
the vulnerability of satellite equipment
to hypervelocity impacts and the corresponding
equipment failure modes (ESA
contract 16483, Michel Lambert). The
considered equipment was fuel and heat
pipes, pressure vessels, electronics
boxes, harness, and batteries. All equipment
was placed behind aluminium honeycomb
sandwich panels (Al H/C SP),
representing the typical satellite structure
wall. The impact experiments were performed at the Fraunhofer Institute for High-Speed Dynamics (Ernst-Mach-Institut) in Freiburg, Germany, under the supervision of Robin Putzar, using its powerful two-stage light-gas-gun accelerators to simulate hypervelocity impacts of space debris particles in a laboratory environment.
Cooperating partners were QinetiQ of
Farnborough, UK (Hedley Stokes), and
OHB-System AG in Bremen (Rolf
Janovsky, Oliver Romberg). One novel
aspect of this project was that the equipment
was evaluated in its normal operating
mode, thus being highly representative
of actual spacecraft operation. In
the following sections, some results of
impact tests on operating harnesses and
computers placed behind typical satellite
structure walls are provided and discussed.

The Effects of Hypervelocity Impacts on Spacecraft


Hypervelocity impacts can affect spacecraft
in various ways. Micron-sized particles
can degrade sensitive spacecraft surfaces
and equipment, like mirrors and
optical sensors. Larger particles with
sizes ranging from tens to hundreds of
microns can penetrate coatings and foils
as well as solar cells. Damage such as
this has been observed on satellite surfaces
returned to Earth (LDEF, HST
solar arrays, EURECA) and on the windows
of the U.S. Space Shuttle, which have been replaced many times due to impact damage. Millimeter-sized particles can penetrate satellite structure
walls or shielded walls of manned spacecraft,
posing a serious threat to equipment,
astronauts, or both. To reduce the
destructive effects of impacts, all modules
of the International Space Station
have debris shields to defeat sub-cm
objects. Impacts of such large particles
may also induce considerable changes in
the satellite’s attitude through transfer of
momentum. The impact of centimeter- or decimeter-sized particles will typically
lead to complete destruction of
important spacecraft parts or even to disintegration
of the spacecraft. Prominent
examples of collisions involving large
fragments with spacecraft are the 1996
collision between the French CERISE
military satellite and a 1 m fragment that
was generated from the explosion of an
Ariane 4 upper stage 10 years prior, and
the 2005 collision between an American
Thor rocket motor with a large fragment
of the third stage of a Chinese CZ-4
launcher. Many satellites and manned
space-stations are known to have performed
collision avoidance manoeuvres
with catalogued Space Debris parts, such
as ESA’s ERS-1 and ENVISAT satellites,
the U.S. Shuttle, the MIR station
and the ISS, to name only a few. Besides
the effects of structural damage, every
hypervelocity impact generates metal
vapour plasma that can result in electromagnetic
interference or result in
plasma-induced discharges: The
European Space Agencies’ (ESA)
OLYMPUS communication satellite
may have failed as a consequence of
hypervelocity impact of a Perseid meteoroid
in 1993.

The Threat of Space Debris and Micrometeoroids to Spacecraft Operations


To date, there are roughly 9,000 catalogued
objects in orbit with a size larger
than 10 cm (Figure 1), of which only
about 600 are functioning satellites; the remaining ca. 8,400 are classified as space debris. Most of the space debris
mass consists of non-functional satellites
and upper stages of launchers. The
majority of millimeter- and sub-millimeter-
sized debris particles were generated
through one of the ca. 170 explosions
that have been registered up to
now. Such explosions can be caused by spontaneously triggered combustion of residual amounts of fuel in upper stages or by overcharged batteries. The micron-sized debris particles mostly stem from the combustion products of solid rocket motor firings and from fragments of varnish. The encounter velocities between space debris and spacecraft in low Earth orbits range up to about 15 km/s, the upper limit corresponding to head-on collisions.
Micrometeoroids can have much higher
impact velocities, depending on their
origin. Even tiny particles possess considerable
kinetic energies as a result of
the very high impact velocities.
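
To put these energies in numbers, here is a back-of-the-envelope calculation in Python; the particle size, density and velocity are illustrative assumptions, not values from the article:

import math

# Illustrative figures: a 1 mm aluminium sphere at a typical LEO
# encounter velocity of 10 km/s.
radius = 0.5e-3       # m (1 mm diameter particle)
density = 2700.0      # kg/m^3 (aluminium)
velocity = 10e3       # m/s

mass = density * (4.0 / 3.0) * math.pi * radius ** 3  # ~1.4e-6 kg
energy = 0.5 * mass * velocity ** 2                   # E = 1/2 m v^2

print(f"mass = {mass:.2e} kg, kinetic energy = {energy:.0f} J")

The result, roughly 70 J, is about the energy of a 1 kg weight dropped from seven metres, delivered by a grain barely visible to the naked eye.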

Galaxy Filament Detection using the Quality Candy Model- II


A filament is not a single structure with sharp edges, but rather a fuzzy set of more or less scattered points, which makes its detection difficult. Another difficulty in the detection process comes from the difference in spatial scale between sparse features and prominent compact ones. The gradual disappearance of structures with increasing distance results from the use of a magnitude-limited sample: the apparent luminosity of any object decreases with distance, so that at large distances only the few galaxies with the highest intrinsic luminosity are included.
Up to now, only a few methods exist to extract the filamentary structure; the Minimal Spanning Tree (MST) method has been the most widely used. Recently, we have adapted to this problem a method based on marked point processes, initially proposed for road network extraction. The network of filaments is modelled by a marked point process, that is to say a random set of objects whose number is itself a random variable. The objects of this process are segments described by three random variables corresponding to their midpoint, their length and their orientation. The segment distribution is governed by a probability density. In order to find the segment configuration that best fits the filamentary network, we define a probability density which takes into account the interactions between segments. The configuration of
between segments. The configuration of
segments composing the filament network
is estimated by the minimum of the
energy of the system which has two
components: the prior term forces the
segment configuration to be a network. It
takes into account the geometrical constraints
of the network: slow curvature
and good crossing points between the
segments. The network structure is
obtained by penalising segments which
are not connected. The curvature constraint
is optimized by quality functions
with respect to the connection angles and
the orientation between the segments.
The overlaps between segments are forbidden
in order to have neat crossing
points. The second term is a data term
which helps this network to best fit the
data. Results are shown on Figure 2
starting from data kindly provided by the
center for astrophysics at Harvard.
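
The sketch below illustrates this two-component energy in simplified Python: segments carry a midpoint, a length and an orientation; the prior term penalises dangling endpoints and sharp connection angles; the data term rewards segments that cover many galaxies. It paraphrases the idea rather than the authors' implementation, and the weights (w_free, w_curv, w_data) and tolerances (eps, band) are illustrative assumptions:

import math

class Segment:
    """A marked point: midpoint (x, y) plus the marks length and orientation theta."""
    def __init__(self, x, y, length, theta):
        self.x, self.y, self.length, self.theta = x, y, length, theta

    def endpoints(self):
        dx = 0.5 * self.length * math.cos(self.theta)
        dy = 0.5 * self.length * math.sin(self.theta)
        return (self.x - dx, self.y - dy), (self.x + dx, self.y + dy)

def prior_energy(segments, eps=1.0, w_free=2.0, w_curv=1.0):
    """Prior term: favour a connected network with slow curvature."""
    e = 0.0
    for i, s in enumerate(segments):
        for end in s.endpoints():
            # neighbours with an endpoint within eps of this endpoint
            partners = [t for j, t in enumerate(segments) if j != i and
                        any(math.dist(end, p) < eps for p in t.endpoints())]
            if not partners:
                e += w_free                        # dangling end is penalised
            for t in partners:
                d = abs(s.theta - t.theta) % math.pi
                e += w_curv * min(d, math.pi - d)  # sharp bends cost energy
    return e

def data_energy(segments, points, band=0.5, w_data=1.0):
    """Data term: a segment is cheap when many galaxies lie close to it."""
    e = 0.0
    for s in segments:
        (x1, y1), (x2, y2) = s.endpoints()
        for px, py in points:
            # distance from the galaxy to the segment
            t = ((px - x1) * (x2 - x1) + (py - y1) * (y2 - y1)) / max(s.length ** 2, 1e-12)
            t = max(0.0, min(1.0, t))
            if math.hypot(px - (x1 + t * (x2 - x1)), py - (y1 + t * (y2 - y1))) < band:
                e -= w_data                        # each covered galaxy lowers the energy
    return e

def total_energy(segments, points):
    return prior_energy(segments) + data_energy(segments, points)

In practice the minimum of such an energy is searched stochastically, typically with simulated annealing driving a sampler that proposes births, deaths and moves of segments.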

Galaxy Filament Detection using the Quality Candy Model- I


Beyond one billion light-years, when averaged over 30 Mpc, the visible Universe can be seen as a uniformly distributed gas of galaxies. At smaller spatial scales, astronomical observations and dedicated numerical simulations have shown that the distribution of the luminous matter is not so homogeneous.
The three-dimensional distribution of galaxies in the present-day Universe is indeed characterised by a complex network of filamentary structures delineating spherical regions, about 100 million light-years in diameter, that are devoid of objects, suggesting a sponge-like or cell-like topology for the underlying matter density field. The finite age of the Universe and the sufficiently low peculiar velocities of galaxies imply that information about the initial conditions is preserved at such large spatial scales. Characterising
the properties of galaxy clustering therefore puts strong constraints on theoretical models for the formation of
structures under the influence of gravity
in an expanding Universe known to be
dominated by dark matter and dark energy. Identifying elongated
structures like filaments, which might
only occupy 10% of the volume, and
measuring their statistical properties
would allow one to go beyond the information
provided by the usual two-point
correlation function measurements, which suffer from degeneracies with respect to topology.

Saturday, September 24, 2011

Galaxy Filament Detection using the Quality Candy Model- Intro


A joint project between INRIA and the French Riviera Observatory proposes to apply
a marked point process to detect a galaxy filament network. The method is based
on a model initially developed for road network extraction in remotely sensed images.

The Astro-Wise System: A Federated Information Accumulator for Astronomy- III


The core of the system exploits three properties of the database environment. First, we apply the principle of inheritance using Object Oriented Programming (Python), whereby all Astro-Wise objects inherit key properties for database access, such as persistency of attributes. Second, the linking (associations or references) between instances of objects in the database is completely maintained, so that for each bit of information it is possible to trace the bits of information that were used to obtain it. Third, each processing step, and the inputs used for it, is kept within the system. The database grows constantly through the addition of new information or improvements made to existing information.
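
The following Python sketch shows this persistency-and-lineage idea in miniature; the class and method names are invented for illustration and are not the actual Astro-Wise API:

class DBObject:
    """Base class: every subclass inherits persistency and dependency links."""
    _store = []                        # stand-in for the shared database

    def __init__(self, **attrs):
        self.__dict__.update(attrs)
        self.depends_on = []           # references to the input objects
        DBObject._store.append(self)   # attributes become persistent here

    def lineage(self, depth=0):
        """Walk the dependency links back towards the raw data."""
        print("  " * depth + f"{type(self).__name__}: {getattr(self, 'name', '?')}")
        for parent in self.depends_on:
            parent.lineage(depth + 1)

class RawFrame(DBObject): pass         # raw observational data
class BiasFrame(DBObject): pass        # a calibration product
class ReducedFrame(DBObject): pass     # a derived science image

# A derived object records the objects it was made from, so any result
# can be traced back and, if necessary, re-derived with better inputs.
raw = RawFrame(name="raw_001")
bias = BiasFrame(name="bias_v2")
sci = ReducedFrame(name="sci_001")
sci.depends_on = [raw, bias]
sci.lineage()

Because each derived object keeps references to its inputs, replacing a calibration object and re-running the step yields a new result whose full history is again recorded.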
All system components are distributed
over Europe, enabling research groups to
collaborate on shared projects.
Knowledge added by one group is immediately accessible to others via a Web portal, which includes data viewing, quality labelling and compute services (see links). Currently, researchers use the Astro-Wise system with 10 Tbytes of astronomical images.
Hundreds of Tbytes of data will start
entering the system when the
OmegaCAM panoramic camera starts
operations in Chile. This camera is dedicated
to various large surveys using the
Astro-Wise system.
Astro-Wise coordinator OmegaCEN-NOVA is collaborating with the LOFAR consortium and CWI to explore the use of the Astro-Wise system for LOFAR, the next-generation Low Frequency Array of radio telescopes, which is being built in the Netherlands and Germany. Astro-Wise can also be applied to other fields of science. The object-oriented use of the database allows for classes of objects dealing with arbitrary forms of digitized observational data. Scans of cultural heritage, DNA sequences, and data from high-energy particle collisions or financial markets can be processed using the same principles as images of the sky.

The Astro-Wise System: A Federated Information Accumulator for Astronomy- II


This improvement is achieved by:
• an emphasis on project management, enforcing a global data acquisition and processing model while retaining flexibility
• translating the data model to an object model, with full registration of all dependencies
• storing all I/O of the project in a single, distributed database containing all metadata describing the bulk data (eg images) and derived results in catalogue form (eg lists of celestial sources)
• connecting to the database a federated file server that stores hundreds of Terabytes of bulk data
• a dedicated compute GRID which sends jobs (including clients) to single nodes or parallel clusters, which then request data from the distributed database.
The database with all metadata and catalogues
provides the infrastructure to
develop tools for a variety of purposes.
These include rapid trend analysis of data, complex queries and fast hunting for ‘needles in the haystack’ of Terabyte-sized catalogues. Thus, the system provides
the user with fully integrated,
transparent access to all stages of the
data processing and thereby allows the
data to be reprocessed and the system to
be improved and expanded.
For a given project/instrument, the system starts in a naive ‘quick look’ mode, which gradually improves
as various researchers add refined information
to the system under the supervision
of project leaders. Approved calibration
modifications automatically
become public, beyond the project
boundaries. A mechanism for quality control is implemented which allows for changes due to one of:
• true physical changes of parameter values
• improvements in encoded methods, or
• improved insight into either of these.

The Astro-Wise System: A Federated Information Accumulator for Astronomy- I


Much of modern research involves the
accumulation of huge amounts of digitized
data. The analysis of this data by
distributed communities represents a significant
challenge to project management
and ICT implementation, and is
relevant to fields as diverse as biology,
physics, astronomy, economics and cultural
heritage projects. Furthermore, the
projects are often global efforts requiring
collaborators in many places to share,
validate and combine processed data and
derived results. It is therefore necessary
to develop more efficient data lineage,
mining and analysis systems to allow
researchers to search intelligently
through previously unmanageable volumes
of data.
The Astro-Wise consortium has developed
an information system to meet these
challenges for wide-field imaging in
astronomy. The Astro-Wise consortium is a partnership between OmegaCEN-NOVA/Kapteyn Institute (Groningen, The Netherlands; coordinator), Osservatorio Astronomico di Capodimonte (Naples, Italy), Terapix at IAP (Paris, France), ESO, and the Universitäts-Sternwarte & Max-Planck-Institut für Extraterrestrische Physik (Munich, Germany).
Large data projects in high-energy physics, space missions and astronomy typically push data through various platforms in an irreversible way (eg a TIER node setting). In such a situation,
the end user has little or
no influence on what happens
upstream. This ‘classical’
paradigm is characterized by
fixed ‘releases’ of homogeneous,
well-documented data
products. In contrast, the Astro-Wise system allows the end user to trace
the data product, following all its dependencies
up to the raw observational data
and, if necessary, to re-derive the result
with better calibration data and/or
improved methods.

The Astro-Wise System: A Federated Information Accumulator for Astronomy- Intro


The progress of astronomy is about to hit a wall in terms of the processing, mining
and interpretation of huge datasets. The Astro-Wise consortium has designed and
implemented a fully scalable and distributed information system to overcome this
problem for wide-field imaging. The same principles can be applied to other sciences.

A New Approach for Advanced Life-Support Systems Control -III


The proposed MAS is organized in a layered
structure. In the first layer, agents will be in
charge of the planning and coordination of
necessary tasks and interaction with the crew,
requiring their intervention only when
mandatory. The use of Expert Systems in this
layer will allow actions to be planned in a systematic
and efficient manner. For example,
achieving an optimal harvest depends not
only on the maturity of the crop but also on
food requirements and storage capability.
In the second layer, agents will control the execution of these tasks, interfacing with the sensors and actuators of each LSS subsystem and reporting relevant process data, progress information or unexpected events to the planner layer. This layer will be heterogeneous in its implementation, but each process controller will be headed by an agent, permitting standard communication with the rest of the system.
An especially critical problem to be
solved is the amount of supervisory
information that this system will generate.
Agents specializing in information
synthesis are needed to process this
information before communicating it to
the crew. Automated supervision of the
processes will be required, providing
only relevant information and requiring
crew attention only when mandatory.
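
As a rough sketch of this layered organisation, the Python fragment below pairs a first-layer planner agent with a second-layer process agent; the class and method names are assumptions made for illustration, not the project's actual design:

class PlannerAgent:
    """First layer: plans and coordinates tasks, involving the crew
    only when intervention is mandatory."""
    def notify(self, source, event, severity):
        if severity == "critical":
            self.alert_crew(source, event)   # crew attention only when needed
        # routine events are absorbed into planning and logging instead

    def alert_crew(self, source, event):
        print(f"CREW ALERT from {source}: {event}")

class ProcessAgent:
    """Second layer: wraps one LSS process, driving its sensors and
    actuators while exposing a standard interface to the planner."""
    def __init__(self, name, planner):
        self.name, self.planner = name, planner

    def report(self, event, severity="info"):
        self.planner.notify(self.name, event, severity)

    def execute(self, task):
        # ...read sensors and drive actuators for this subsystem...
        self.report(f"task '{task}' completed")

planner = PlannerAgent()
chamber = ProcessAgent("higher-plant chamber", planner)
chamber.execute("harvest crop batch")
chamber.report("CO2 level out of range", severity="critical")

However each process controller is implemented internally, it is topped by an agent with this standard reporting interface, and the planner filters the flow of supervisory information so that the crew sees only what is relevant.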

A New Approach for Advanced Life-Support Systems Control- II


The goal of BLSS is very ambitious, since the use of living organisms makes the system's behaviour non-deterministic. These systems are highly nonlinear, with a high level of uncertainty in their behaviour, making complete analytical modelling of the processes impossible. It is therefore necessary to develop, in parallel with the biochemical and physiological studies, new approaches to system control (Figure 1). Model-based control systems will not succeed, since they cannot deal with incomplete or inaccurate information.
For instance, the maturity level of the
crop must be assessed indirectly using
several variables (atmosphere gas composition,
plant colour, biomass, time
from seeding etc). In addition, these variables
will depend on a great number of
factors, making predictability very poor.
A new control-system architecture to cope with these problems can be implemented effectively using a Multi-Agent System (MAS). This approach allows
the problem to be broken down into
small parts, each dealing with specific
tasks but in a coordinated manner, performing
as an organization with a
common objective and sharing a set of
rules. In addition, designing this system
as a multi-agent network will allow specific
control solutions to be applied to
each part as needed. The different types
of controllers will become encapsulated
in the agent structure and only relevant
information will be shared to enable
monitoring and global control. Another
benefit will be the reconfiguration capability,
in cases of, for instance, failure of
part of the system or the need to adapt
the system to new objectives.

A New Approach for Advanced Life-Support Systems Control- I


Life Support Systems (LSS) provide the
necessary conditions to sustain human
life in a hostile environment over prolonged
periods of time. Current LSS used in manned spacecraft (eg on the International Space Station) control the atmosphere composition (ie the percentages of oxygen, nitrogen and carbon dioxide) and regulate pressure and temperature by means of physico-chemical processes, most of which require periodic re-supply of consumable materials. Other
vital elements are to some extent recycled
(eg water) or uploaded from Earth (eg
food). Re-supply is a major problem for
the feasibility of long-term planetary
missions. Such missions are currently within the scope of space exploration programs encouraged by ESA and NASA, whose objective is to establish permanent manned outposts, first on the Moon and later on Mars, by 2030.

A new generation of Biological Life
Support Systems (BLSS) is starting to be
developed (eg NASA’s BIOplex and
ESA’s MELiSSA). These use biological
organisms (bacteria, algae, plants etc) to
regenerate air, water and food with the
objective of complete self-sufficiency.
Microorganism cultures are employed to
recycle water from wastes; higher plants
are an essential source of fresh food
through cultivation and harvesting;
water is recycled through plant transpiration,
and oxygen is produced by photosynthesis.


A New Approach for Advanced Life-Support Systems Control-Intro


Recent developments in the International Space Community have shown there is
a rising interest in the human exploration of outer space. In particular, the objective
of sending a manned mission to Mars by 2030 has been set. Such a mission will only be feasible with Life Support Systems (LSS) able to provide vital elements to the exploration crew in an autonomous, self-sustained manner, as re-supply from
Earth will not be possible. Bio-regenerative LSS (BLSS) are considered to be the
LSS technology alternative that can meet this demand. Developing effective BLSS is a challenge for the control community because of the high degree of automation required and the systems' indeterminism, non-linearity and instability. Agent-based approaches are being
analysed as a suitable means of overcoming these difficulties.