
PAC/JF-17 - DxB Air Show 2019

Nobody is above forum rules.
Such polls for reversing infractions and bans must stop.

Moderators take care when issuing bans, with fairness and compliance in view.
As I told The Eagle earlier when he quoted my same post, I was referring to the Block 72 thread mess, not the recent skirmish.
 
The PAF-LM relationship goes back many years; this is just a courtesy call, as all Pak-US weapon deals happen at the government level, not through a sales pitch made by company reps at an exhibition stall.
No doubt the participants at these events visit each other's stalls as a courtesy, have a chat and share views, but I doubt they bring along replicas of what they represent and take a permanent seat at another's venue.
 
Look buddy, I just took the pics.

People here got excited that Lockheed Martin was there.

I went to the extent of mentioning myself that DERCO is a division of LM, along with other details.

So your initial point was: was he lost?
Well, you oughta ask LM since he's hired by 'em, and generally smart people have a pretty good sense of direction.

PAC doesn't have anything to do with LM, but LM does have something to do with the PAF.

That diecast model was gifted, and was later placed at the reception counter where you'd find a couple of girls throughout the Air Show.

I'm not drawing any conclusions. I'm just presenting the facts of what I saw, heard & covered most of the time.
Okay, "buddy"

@Trailer23 Bro, thanks for all your efforts. Really appreciate all that you have done to bring us this event, regardless of where we are in the world.
Indeed. Good job.
 
https://www.pac.org.pk/c-130-qec

 
Just an exchange of gifts; what's better than a model of the product you are pushing? PAF gives Thunder models to all who visit PAC; it doesn't mean they are going to buy it or have agreed to buy it.


 
There is little or no room for AI/ML in flight control software or battlefield decision making. For higher-level decision making, AI/ML won't be reliable enough for operation anytime soon. At the very basic level, AI/ML is glorified curve fitting with no guarantees. A bunch of computer science people discovered "least squares fitting" and found that if they have ridiculous amounts of data AND they tweak it by trial and error for long enough, THEN they SOMETIMES get good results AND they don't understand how it works, so it must be magic. And now they are itching to apply it to everything they can get their hands on. Nobody certifies anything like that to fly anywhere near people. Yes, there are applications for AI and ML, especially in image processing, data processing, and seekers for weapons, but battlefield decision making and flight controls are NOT such applications.
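To make the "glorified curve fitting" jab concrete, here is a toy sketch of the kind of thing being described: a least-squares polynomial fit that looks great on the data it saw and offers no guarantee a step outside it. This is my illustration with made-up data, not anything from an actual flight system.

```python
# Toy illustration of "curve fitting with no guarantees": a least-squares
# polynomial fit nails the training data but says nothing about behavior
# outside it. Entirely illustrative.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(-1.0, 1.0, 20)
y_train = np.sin(3 * x_train) + 0.05 * rng.normal(size=x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=9)   # classic least squares

in_sample = np.polyval(coeffs, 0.5)    # inside the training range: fine
out_sample = np.polyval(coeffs, 2.0)   # outside it: anything goes
print(f"fit at 0.5: {in_sample:+.3f} (true {np.sin(1.5):+.3f})")
print(f"fit at 2.0: {out_sample:+.3f} (true {np.sin(6.0):+.3f})")
```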

To explain where I am coming from: my area of expertise is flight control systems, and I see AI/ML people claiming magic every day (this includes people in US government labs), only for it to turn out that simple control techniques from the '60s can beat their performance without needing terabytes of data and GPUs. I don't have anything against AI/ML, but I am strongly suspicious of the raging hype around it, lol. Sorry for the long rant, which is now over.

Hi @JamD,
This is something that piqued my interest, and since I work on reinforcement learning/integral reinforcement learning (RL/IRL), I can shed some light on this. What you have written above is partially true; I will elaborate on where it is wrong. You can have a look at my papers on arXiv for a detailed discussion of the proofs.
I would like to start off by saying that there is indeed room for implementing "some" of the notions of AI in lower-level flight control (let's say attitude control or velocity control) "with guarantees". What I am alluding to is the fact that quite recently (in the last 8-10 years) researchers have been able to port some of the algorithms developed by computer scientists into control theory. What this entails is developing the requisite mathematical framework so that those algorithms adhere to Lyapunov stability, or extended versions of it. The catch is that instead of asymptotic stability, wherein the error (the difference between the actual and desired value) goes to 0, we get what is known as uniform ultimate boundedness (UUB) stability: the error goes to a "neighborhood" of 0. So, yes, there are "guarantees" that the algorithm will be stable (although not quite the guarantees you were expecting)!
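For concreteness, the distinction being drawn is roughly the following (a standard textbook formulation, added for illustration rather than quoted from the post):

```latex
% Asymptotic stability: the error vanishes entirely,
\lim_{t \to \infty} \lVert e(t) \rVert = 0.
% UUB stability: the error only enters, and stays inside, a ball of
% radius b after some finite time T:
\exists\, b > 0 \ \text{such that for each } \lVert e(t_0) \rVert \le c,\
\exists\, T(b,c) \ge 0 :\quad \lVert e(t) \rVert \le b \quad \forall\, t \ge t_0 + T.
```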
Now, AI in control theory is a very vast world, and I cannot describe it all here, but I will give it a try. The most essential element of any AI is what is known as a "neural network" (NN). NNs are nothing but glorified approximators: they approximate "smooth" functions. In control theory, the function we are interested in approximating is usually unknown. Generally, NNs are employed in two very distinct operations in control theory:
(A) System Identification:
(1) With structure, i.e., when we know the dynamics or the differential equation but not the parameters (let's say mass, inertia, or aerodynamic coefficients). In this case, the functions appearing in your dynamics form the regressor vector, i.e., the functions in the hidden layer of the NN.
(2) Without structure, i.e., when we do not even know the differential equation governing the evolution of the states. In this case, generic functions such as RBFs, tanh, the logistic function, etc. are utilized.
The main challenge is to come up with parameter update laws for the NN so that you are able to reliably approximate the unknown functions with some version of Lyapunov stability (a rough sketch of such a law in code follows below).
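A minimal sketch of what such an update law can look like, assuming a scalar state, Gaussian RBFs, and a gradient law with sigma-modification; every name and constant here is an illustrative assumption, not something from the post:

```python
# Hypothetical sketch: online system identification with an RBF network
# and a gradient update law with sigma-modification (leakage).
import numpy as np

centers = np.linspace(-2.0, 2.0, 11)   # RBF centers over the state range

def phi(x):
    """Hidden-layer regressor: Gaussian RBFs evaluated at state x."""
    return np.exp(-(x - centers) ** 2)

def true_f(x):
    """The 'unknown' dynamics, used here only to simulate sensor data."""
    return -x + 0.5 * np.sin(2 * x)

W = np.zeros_like(centers)    # NN weights, adapted online
gamma, sigma = 5.0, 0.01      # adaptation gain and leakage gain
dt, x = 0.001, 1.0

for _ in range(50_000):
    f_hat = W @ phi(x)                       # current NN estimate of f(x)
    e = true_f(x) - f_hat                    # identification error
    # Parameter update law: gradient term plus sigma-mod leakage; the
    # leakage keeps W bounded, which is what buys the UUB-type guarantee.
    W += dt * (gamma * e * phi(x) - sigma * W)
    x += dt * true_f(x)                      # the true system evolves

print("estimation error at final x:", true_f(x) - W @ phi(x))
```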

(B) Control: In control, there are again two very different paradigms in which NNs are being utilized:
(1) Inversion frameworks, like sliding mode control (SMC), feedback linearization, etc. (Disclaimer: the nonlinear dynamics should be in affine-in-control form!).
In this case, the NNs are utilized either to approximate the unknown dynamics on the go, or to approximate the equivalent control appearing in the SMC framework. The major drawback of this approach is the complexity involved in making the "approximated version of the control coupling dynamics" invertible! People have come up with bizarre mathematics to achieve that. For instance, generalized inverses (GIs) are used instead of normal matrix inversion. But when you use a GI you inherently get some error, because it is not the true inverse; to compensate for that, people came up with the notion of using Nussbaum gains (to predict the sign of the control coupling matrix) together with a robustifying term to compensate for the errors induced by the GI (see the sketch just below).
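Written out in standard notation (my rendering of the scheme described, not a quote from any specific paper):

```latex
% Affine-in-control dynamics (the disclaimer above):
\dot{x} = f(x) + g(x)\, u
% Inversion with NN estimates \hat{f}, \hat{g} and pseudo-control \nu;
% a generalized inverse g^{+} stands in when g is not square or invertible,
% and u_r is the robustifying term absorbing the GI-induced error:
u = \hat{g}(x)^{+} \left( \nu - \hat{f}(x) \right) + u_r
```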
(2) Reinforcement learning (also known as adaptive dynamic programming (ADP) among control theorists!):
This application of NNs does not involve taking inverses of the control coupling matrix. So where does the utility of the NN lie in this case? Well, JamD, in this case you use the NN to approximate the solution of the HJB equation, which is a nonlinear PDE (written out after this list)! Again, as usual, the challenge lies in finding "how" to update the NN weights so that your NN faithfully approximates the solution of the HJB with reasonable accuracy while satisfying some version of Lyapunov stability (the second part is what makes it all interesting!).
There are two major ways in which RL is employed: on-policy and off-policy methods.
People have come up with actor-critic, critic-only, and actor-critic-disturbance frameworks to solve either the regulation or the tracking problem for nonlinear systems. Once again, all of these have "guarantees" in terms of the size of the residual set, also known as the UUB set, which the state trajectories eventually enter and stay within.
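For reference, the HJB equation being approximated has roughly this standard infinite-horizon form; the quadratic cost is an assumption on my part, added for illustration:

```latex
% Value function for an infinite-horizon quadratic cost:
V^{*}(x) = \min_{u(\cdot)} \int_{t}^{\infty} \left( x^{\top} Q x + u^{\top} R u \right) \mathrm{d}\tau
% The HJB equation it must satisfy, for \dot{x} = f(x) + g(x)u:
0 = \min_{u} \left[ x^{\top} Q x + u^{\top} R u
      + \nabla V^{*}(x)^{\top} \left( f(x) + g(x)u \right) \right]
% Critic NN approximation of the (unknown) solution:
V^{*}(x) \approx W_c^{\top} \phi(x)
```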

PS: I was specifically talking about online methods, which means the NN evolves in real time, on the go, as the aircraft or UAV flies! Also note that in these cases the NNs are used in feedback, and hence they don't require huge amounts of data to train. They are supposed to work with the live stream of sensor data and evolve. The dynamics governing the evolution of the weights is what you might call the "parameter update law".
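A typical shape such a weight-dynamics law takes, offered as a common textbook form rather than the poster's exact law; it is the continuous-time counterpart of the discrete update in the system-ID sketch further up:

```latex
% Gradient-type update law with sigma-modification leakage:
% adaptation gain \Gamma > 0, regressor \phi(x), error e, leakage \sigma > 0.
\dot{W} = \Gamma \left( \phi(x)\, e^{\top} - \sigma\, W \right)
% The leakage term keeps W bounded using the live sensor stream alone,
% with no offline training data, at the cost of UUB rather than
% asymptotic convergence.
```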
 
My argument is more philosophical and less technical, so I am not going to try to refute all the details you have written. All I'll say is that "convergence of an algorithm" and certification of an aircraft are COMPLETELY different things. People have known that recursive least squares converges since the '60s, but that doesn't mean anything I apply RLS to is safe to fly. This is a common fallacy engineers make. Try selling convergence results to Boeing, Raytheon, or Lockheed as guarantees of safety and see how they laugh at you. I know because I've seen people do it personally. You can't even sell adaptive control to them. I know because I've tried.
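For readers who haven't met it, the RLS referenced above is simple enough to sketch in a few lines. This is an illustrative toy with made-up data, not anyone's flight code:

```python
# Minimal recursive least squares (RLS): the classical estimator whose
# convergence has been known since the 1960s. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])   # unknown parameters to recover
n = theta_true.size

theta = np.zeros(n)                  # running estimate
P = 1e3 * np.eye(n)                  # covariance; large means low confidence

for _ in range(500):
    x = rng.normal(size=n)                       # regressor (e.g., sensor data)
    y = x @ theta_true + 0.01 * rng.normal()     # noisy measurement
    K = P @ x / (1.0 + x @ P @ x)                # gain
    theta = theta + K * (y - x @ theta)          # innovation update
    P = P - np.outer(K, x @ P)                   # covariance propagation

print("estimate:", theta)   # converges to approximately [2, -1]
# Convergence of this estimator is textbook material, but convergence alone
# says nothing about certifying the closed loop as safe to fly.
```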

Also, I wish you had not mentioned arXiv. I have a very poor regard for work that is "published" without peer review (which is another reason I am deeply suspicious of most ML work). I don't ever read arXiv as a matter of principle. I believe this is a fad that will die down, and the area of research will find its ACTUAL niche in time. But that is just my professional opinion and you're welcome to disagree.
 
convergence of an algorithm
Hi @JamD, convergence of an algorithm and Lyapunov stability of that algorithm are two very different things, especially in an on-policy setting.
Try selling convergence results to Boeing, Raytheon, or Lockheed as guarantees of safety and see how they laugh at you. I know because I've seen people do it personally. You can't even sell adaptive control to them. I know because I've tried.
That is simply because it is very new and not mature. If you look at RL for continuous-time nonlinear systems, you'd realise that the whole field itself came into being quite recently; in fact, the rigorous stability proofs were produced only 8-10 years ago. Also, you can't use RLS in an on-policy setting yet; what I mean is that it still lacks a rigorous stability proof.
In my opinion, the aerospace industry is a little hesitant to embrace adaptive control after that fated crash of the X-15 back in the '60s. It is probably for this reason that they shied away from your adaptive control. However, I'd like to point out that, at least in fighter jets with inherent instability, they are now using model reference adaptive control (MRAC) instead of traditional gain scheduling. The more advanced control strategies that have recently been proposed (with rigorous stability proofs) will take some more time and a lot of $$$ before they can fly on a jet that carries real humans. For UAVs, however, these strategies and algorithms will see the light of day much earlier.
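For context, the classical MRAC structure mentioned here looks like this (a standard formulation, added for illustration; it is not taken from any particular flight program):

```latex
% Reference model the plant state x should follow:
\dot{x}_m = A_m x_m + B_m r
% Control law: nominal feedback plus an adaptive cancellation term:
u = K_x^{\top} x + K_r^{\top} r - \hat{\Theta}^{\top} \Phi(x)
% Adaptive law driven by the model-following error e = x - x_m,
% with P solving the Lyapunov equation A_m^{\top} P + P A_m = -Q:
\dot{\hat{\Theta}} = \Gamma\, \Phi(x)\, e^{\top} P B
```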

Also, I wish you had not mentioned arXiv. I have a very poor regard for work that is "published" without peer review (which is another reason I am deeply suspicious of most ML work). I don't ever read arXiv as a matter of principle. I believe this is a fad that will die down, and the area of research will find its ACTUAL niche in time. But that is just my professional opinion and you're welcome to disagree.
This is not entirely correct. If you're working with algorithms, especially something very fundamental, and you send the manuscript to a journal or conference for review, it is quite customary to put it on arXiv first. Of course, it depends on the policy of that journal/conference whether they allow you to do that.
In fields like computer science, motion planning, etc., it is almost a rule to upload the manuscript to arXiv when you send it for review at a journal or conference. In control theory, if you're working on something very novel, a new formula, or, let's say, an update law, it is better to upload your manuscript to arXiv when you send it for review at a journal/conference. This is purely to protect your algorithm from possible copying.
In case you missed it, the famous Poincaré conjecture, as proven by Grigori Perelman, was first published on arXiv, and it won him the Fields Medal in mathematics. What you're missing is that arXiv acts as a safeguard, a guarantee that your algorithm will not be copied by reviewers. Most of the control-theory work uploaded to arXiv is uploaded at the time a researcher submits it for review at a journal/conference. Of course, as you pointed out, without peer review it is not really considered published. Also, a lot of the "novel" algorithms and "formulas" I have seen published at places like Automatica, ITAC, ITC (Cybernetics), ITNNLS, and IET-CTA were also uploaded to arXiv.
In fact, some journals/conferences make it mandatory to put a header on the manuscript stating "This paper is currently under review at so-and-so" before uploading it to arXiv. Hope that clears up a lot of your misconceptions!

Finally, what I mean by ML is adaptive dynamic programming. ADP in an on-policy setting is a very, very powerful tool. You'll soon realize it!
 
