Hello again Erik,
It seems we are in complete agreement at my current level of understanding. This all being new to me, I've not yet attempted to dig into the details of how the calibration coefficient polynomials enter into the mix, but more than that, I now don't see why they are used, much less how they are implemented.
Have you been able to identify where this approach diverges from those that employ the coefficients, why they are used, and where they differ? I am not familiar with the other algorithms, and since I'm also not a programmer, extracting the existing algorithm(s) from the firmware source listings would be tedious for me and prone to error and misinterpretation.
GIN&PEZ have stated that the technique is not new, and that, if done carefully, it will yield the same results. I would expect this to be true if the standards used in both were set equal to ideal. I believe the NanoVNA meets that criterion, since no provision is given for defining the standards.
So my understanding is that the standards' uncertainties that seed the solution are applied as a fundamental part of the calculation: for example, where S, O, and L are defined as ideal in the proof, they are instead defined as the standards in use, and the s, o, and l measurements establish the frequency response of those accurately defined standards. The result is an accurately corrected computation, devoid of any need for further correction. This certainly appears to be the case, and compensating the results with additional bias after this first calculated result would seem to introduce errors and uncertainties in the outcome rather than improve accuracy.
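To make that concrete, here is a minimal sketch of the three-term one-port solve as I understand it (my own illustration in Python, not GIN&PEZ's code; the function names are mine). The point is that the standards' defined reflection coefficients, whether the ideal -1, +1, 0 or values derived from the standards' definitions, enter the solve itself:

import numpy as np

def one_port_cal(meas, gammas):
    # meas: raw complex readings (s, o, l) of the three standards.
    # gammas: their *defined* reflection coefficients -- (-1, +1, 0)
    # if taken as ideal, or the values derived from the standards'
    # definitions otherwise.
    # Error model: m = e00 + m*e11*G - delta*G for each standard,
    # which is linear in the three unknowns (e00, e11, delta),
    # where delta = e00*e11 - e01*e10.
    A = np.array([[1, m*g, -g] for m, g in zip(meas, gammas)])
    e00, e11, delta = np.linalg.solve(A, np.array(meas))
    return e00, e11, delta

def correct(m, e00, e11, delta):
    # Invert the same error model for a raw DUT reading m.
    return (m - e00) / (m*e11 - delta)

With gammas = (-1, +1, 0) this is the ideal-standards case; either way, there is no separate correction step after correct() is applied.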
My thinking has been that the process being used in VNAs today is to measure the standards, compute the results, and then correct the computed results with a polynomial algorithm.
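For what it's worth, my reading of the usual cal-kit convention (e.g. the HP/Keysight coefficient model) is that the polynomial does not post-correct the computed result; it defines the standard itself. The open's fringing capacitance is modeled as a cubic in frequency, and the resulting reflection coefficient is what would feed the solve above in place of the ideal +1. A sketch under that assumption (treat the numeric details as mine, to be checked against the firmware):

import numpy as np

Z0 = 50.0  # reference impedance, ohms

def open_standard_gamma(f, c0, c1, c2, c3, delay=0.0):
    # Defined reflection of a non-ideal open: fringing capacitance
    # C(f) = c0 + c1*f + c2*f**2 + c3*f**3.  Kit tables usually quote
    # the coefficients scaled by 1e-15, 1e-27, 1e-36, and 1e-45;
    # here they are taken already in SI units.  delay is the one-way
    # offset delay in seconds.
    w = 2*np.pi*f
    c = c0 + c1*f + c2*f**2 + c3*f**3
    gamma = (1 - 1j*w*c*Z0) / (1 + 1j*w*c*Z0)  # shunt-C termination
    return gamma * np.exp(-2j*w*delay)          # round-trip offset phase

If that reading is right, then solving with ideal standards and patching the answer afterwards is not the same operation as seeding the solve with these defined values, which is the distinction I'm hoping you can confirm.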
Is there something here that you think I may be missing or misunderstanding in my interpretation of their work?
--
73
Gary, N3GO