Latency and distributed simulation
It is easy to split a simulation across many computers
- But the data has to flow across the connection
- Different splits imply different data types
- Some splits are extremely sensitive to latency
Splitting visual displays is easy - 10 ms is usually OK
Co-simulation is harder, generally needing 1 ms or better
- e.g. the FDM in one computer, the autopilot in another
Hardware in the loop and stability research is even harder
Normal network cards can usually achieve 1 ms
- Substituting high performance cards can help
- Mercury Raceway offers latencies below 1 µs
Would like compatible APIs; e.g. Mercury and UDP/IP
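As an illustrative sketch of the "normal network cards can usually achieve 1 ms" claim, the round-trip latency of a UDP link can be measured with a simple ping/echo pair. This is not FlightGear code; the echo server, port choice, and message format here are all hypothetical, and a loopback test only bounds the software stack's latency, not a real network's.

```python
import socket
import statistics
import threading
import time

def run_echo_server(sock):
    # Echo each datagram back to its sender until a "stop" message arrives.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

def measure_rtt(n=100):
    # Bind the echo server to an ephemeral loopback port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    port = server.getsockname()[1]
    thread = threading.Thread(target=run_echo_server, args=(server,))
    thread.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(1.0)
    rtts = []
    for _ in range(n):
        t0 = time.perf_counter()
        client.sendto(b"ping", ("127.0.0.1", port))
        client.recvfrom(1024)           # wait for the echo
        rtts.append(time.perf_counter() - t0)

    client.sendto(b"stop", ("127.0.0.1", port))
    thread.join()
    server.close()
    client.close()
    return rtts

rtts = measure_rtt()
print(f"mean {statistics.mean(rtts) * 1e3:.3f} ms, "
      f"worst {max(rtts) * 1e3:.3f} ms")
```

Note that the worst-case figure, not the mean, is the one that matters for co-simulation, as the notes below discuss.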
Notes:
For co-simulation, the delay from pushing data back and forth must be small compared to the loop gain and time constant of whatever is controlling the subsystem, or the system will not stay stable and the aircraft will not even fly. This is complicated by the fact that long-term performance depends on the worst-case latency rather than the average latency: the average value determines how well the feedback loop can damp out transients, while each incident of bad latency generates such a transient.
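The point above can be sketched numerically. The toy model below (my construction, not anything from FlightGear) is an integrator plant with delayed proportional feedback; a single latency spike, modeled as one step of unusually stale measurement data, injects a transient into an otherwise converged loop.

```python
def simulate(delays, k_gain=0.5, x0=1.0):
    """Integrator plant x' = u with delayed proportional feedback.

    delays[k] is the feedback delay, in steps, at step k; a spike in
    delays models one incident of bad network latency.
    """
    xs = [x0]
    for k, d in enumerate(delays):
        x_meas = xs[max(0, k - d)]   # stale measurement from d steps ago
        u = -k_gain * x_meas         # proportional control on stale data
        xs.append(xs[-1] + u)
    return xs

# With zero delay the loop converges geometrically toward zero.
steady = simulate([0] * 30)

# Same loop, but one 15-step latency spike at step 20: the controller
# acts on a long-stale (much larger) measurement and kicks the state
# away from equilibrium, creating a fresh transient it must damp out.
spiky = simulate([0] * 20 + [15] + [0] * 9)
print(f"converged to {steady[-1]:.2e}; "
      f"state after spike jumped to {spiky[21]:.2e}")
```

The average delay in the spiky run is still tiny, yet the single worst-case incident dominates the error, which is why worst-case rather than average latency governs long-term performance.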
FlightGear's terrain-following autopilot uses the built-in terrain (from TerraGear), which is simple, smooth data. This lets routine users avoid the whole issue.
I'm not aware of any hardware-in-the-loop usage so far, but FlightGear is used as a streaming source of aircraft-like GPS data.