|S11| > 1
I've been analyzing Touchstone files Rudy Severns, N6LF, has recorded with his NanoVNA-H4. The magnitude of S11 is greater than 1 for many files. For example, after calibration right at the VNA connector, |S11| was 1.0007 maximum for the open cal part itself. For the short the maximum was 1.0005. For both, |S11| > 1 for all 401 points from 0.1 to 50 MHz.
The images show calculated permittivity and conductivity for a ground probe with the rods in air. The image with most of the points missing used uncorrected data. The other image is for normalized data where |S11| = 1 maximum. Although normalization solves the problem for my software, I'm curious why |S11| is ever > 1.

Brian
On 6/13/22 1:31 PM, Brian Beezley wrote:
> The magnitude of S11 is greater than 1 for many files.

Measurement uncertainty? You're essentially measuring (V reflected)/(V incident) with a noisy sensor. 1 part per 1000 (1.001) is 60 dB down; the SNR of the measurement is in that ballpark.

Calibration peculiarities: you determine the cal coefficients with noisy measurements, so a cal coefficient that reads slightly high combined with a measurement that reads slightly high can push |S11| past 1.

The ADC measuring the output of the mixer has an ideal SNR of ~90 dB. It's a 16-bit ADC, so there's some quantization uncertainty.

Then there's the arithmetic aspect. The basic software multiplies the ADC numbers by sin and cos, then sums to get I/Q. That's done with 16-bit signed integers and 32-bit products, but the products are truncated before integrating. There's also the calculation of the various calibration coefficients using single-precision float (32 bits, with a 24-bit mantissa/significand).
15070000 -1.000470519 -0.000045675
15194750 -1.000452399 -0.000081647
15319500 -1.000464559 -0.000072501
15444250 -1.000433683 -0.000034980
15569000 -1.000416160 -0.000060047
15693750 -1.000486493 -0.000025182
15818500 -1.000452280 -0.000066847

Jim, that's a data sample for the short. The real part is consistently beyond -1.0004. The open is similarly beyond +1. The largest |S11| I found in other .s1p files was 1.0037. I think it is some sort of systematic issue, not noise. Most users will never notice it since the effect is so tiny. My application is sensitive to errors at extreme S11 values, which yield nonphysical results (negative conductivity). I forgot to mention that the VNA firmware for the open and short was DiSlord 1.2.

Brian
On 6/14/22 4:29 AM, Brian Beezley wrote:
> 15070000 -1.000470519 -0.000045675

Interesting - I wonder if it's a "round off" or truncation error of some sort. The "detector" mixes with I/Q at 5 kHz, summing, and there could be a 1/2 LSB bias or something like that.

Here's the raw I/Q calculation code:

    void dsp_process(int16_t *capture, size_t length)
    {
      uint32_t *p = (uint32_t*)capture;
      uint32_t len = length / 2;
      uint32_t i;
      int32_t samp_s = 0;
      int32_t samp_c = 0;
      int32_t ref_s = 0;
      int32_t ref_c = 0;

      for (i = 0; i < len; i++) {
        uint32_t sr = *p++;
        int16_t ref = sr & 0xffff;
        int16_t smp = (sr >> 16) & 0xffff;
        int32_t s = sincos_tbl[i][0];
        int32_t c = sincos_tbl[i][1];
        samp_s += smp * s / 16;
        samp_c += smp * c / 16;
        ref_s += ref * s / 16;
        ref_c += ref * c / 16;
      }

      acc_samp_s = samp_s;
      acc_samp_c = samp_c;
      acc_ref_s = ref_s;
      acc_ref_c = ref_c;
    }

Raw (uncalibrated) gamma is calculated here:

    void calculate_gamma(float gamma[2])
    {
    #if 1
      // calculate reflection coeff. by samp divided by ref
      float rs = acc_ref_s;
      float rc = acc_ref_c;
      float rr = rs * rs + rc * rc;
      //rr = sqrtf(rr) * 1e8;
      float ss = acc_samp_s;
      float sc = acc_samp_c;
      gamma[0] = (sc * rc + ss * rs) / rr;
      gamma[1] = (ss * rc - sc * rs) / rr;
    #elif 0
      gamma[0] = acc_samp_s;
      gamma[1] = acc_samp_c;
    #else
      gamma[0] = acc_ref_s;
      gamma[1] = acc_ref_c;
    #endif
    }

This is the code that applies the calibration:

    if (cal_status & CALSTAT_APPLY)
      apply_error_term_at(i);

    static void apply_error_term_at(int i)
    {
      // S11m' = S11m - Ed
      // S11a = S11m' / (Er + Es S11m')
      float s11mr = measured[0][i][0] - cal_data[ETERM_ED][i][0];
      float s11mi = measured[0][i][1] - cal_data[ETERM_ED][i][1];
      float err = cal_data[ETERM_ER][i][0] + s11mr * cal_data[ETERM_ES][i][0]
                - s11mi * cal_data[ETERM_ES][i][1];
      float eri = cal_data[ETERM_ER][i][1] + s11mr * cal_data[ETERM_ES][i][1]
                + s11mi * cal_data[ETERM_ES][i][0];
      float sq = err*err + eri*eri;
      float s11ar = (s11mr * err + s11mi * eri) / sq;
      float s11ai = (s11mi * err - s11mr * eri) / sq;
      measured[0][i][0] = s11ar;
      measured[0][i][1] = s11ai;

      // CAUTION: Et is inversed for efficiency
      // S21m' = S21m - Ex
      // S21a = S21m' (1 - Es S11a) Et
      float s21mr = measured[1][i][0] - cal_data[ETERM_EX][i][0];
      float s21mi = measured[1][i][1] - cal_data[ETERM_EX][i][1];
      float esr = 1 - (cal_data[ETERM_ES][i][0] * s11ar - cal_data[ETERM_ES][i][1] * s11ai);
      float esi =   - (cal_data[ETERM_ES][i][1] * s11ar + cal_data[ETERM_ES][i][0] * s11ai);
      float etr = esr * cal_data[ETERM_ET][i][0] - esi * cal_data[ETERM_ET][i][1];
      float eti = esr * cal_data[ETERM_ET][i][1] + esi * cal_data[ETERM_ET][i][0];
      float s21ar = s21mr * etr - s21mi * eti;
      float s21ai = s21mi * etr + s21mr * eti;
      measured[1][i][0] = s21ar;
      measured[1][i][1] = s21ai;
    }
On 6/14/22 7:26 AM, Brian Beezley wrote:
> I've asked Rudy to remeasure the short and open using MA (magnitude/angle) mode. That might shed some light on where the inaccuracy lies. Also, I think he was using averaging. It might be interesting to disable it.

What would be interesting is to measure it without calibration and with, and make sure there's not some miscalibration going on somehow.
Thank you, Jim!!!
Again, we are not running a metrology lab, nor do our measurements approach those of HP, R&S, Tek, and others. Something my PhD friends tell me is that error analysis and assignment of error bars is no longer taught - not even at CU/Boulder. I had a required course dedicated to exactly that to obtain my Physics degree some 50+ years ago at Michigan State U. The students today have no idea how the error bar is established or even what it truly indicates. Several of us have tried introducing our STEM students at LTO (Little Thompson Observatory <starkids.org>) to the concept of measurement errors and how they affect final outcomes. We usually get wrinkled foreheads and "why?".

Yes, 5 parts in 10^5 is -86 dB. Yes, the HP 8753C noise floor can measure below that. But not our inexpensive NanoVNAs. Mine typically shows a noise floor of -60 dB, depending on frequency and measurement type.

Again, thank you to those who put these VNAs at a reachable price in the hands of us amateurs and those who want to learn the "fine points" of RF engineering.

Never measure the temperature with more than one thermometer. Never determine the time of day with more than one Cesium clock. Never determine _________ with more than one _________.

Dave - W0LEV
On 6/14/22 9:24 AM, W0LEV wrote:
> Thank you, Jim!!!

"Sig figs" is taught in high school and undergrad. If you do any sort of hard science classes, they cover measurement uncertainties as part of the class (e.g., in lab) - most lab classes discuss this (undergrad chem lab certainly does).

The more sophisticated stuff is covered in classes like numerical analysis - if you're doing signal processing, for instance, round-off and error propagation are a big thing. Same with classes on numerical solutions of differential equations, in connection with things like Runge-Kutta. I doubt there are many classes that specifically care about "calibration uncertainty" - you're on your own with manufacturer notes and the professional literature. As a grad student, you'd be expected to get this knowledge in some way - there are a variety of short courses offered by various government labs as well as industry. For instance, NIST has an annual meeting in certain fields, and there are often short courses associated with it.
Jim, I can assure you our STEM students know nothing about significant figures! Terry (Dr. Terry Bullett) and I have tried, but no knowledge of the concept. They are Juniors going into their senior year and have had Chem. and Physics (not "advanced" phys, which is seldom taught due to low demand).

Dave - W0LEV
Measurement of a short at the VNA connector after clearing cal:
49002000 -0.869956992 0.080768176
49126750 -0.869913280 0.080994536
49251500 -0.869870784 0.081221216
49376250 -0.869828352 0.081447216
49501000 -0.869784000 0.081672776
49625750 -0.869736192 0.081898376
49750500 -0.869684800 0.082123680
49875250 -0.869630720 0.082347872
50000000 -0.869575232 0.082570920

After calibrating:

49002000 -1.000384927 -0.000153716
49126750 -1.000387430 -0.000148167
49251500 -1.000389814 -0.000142248
49376250 -1.000392199 -0.000138649
49501000 -1.000394344 -0.000137316
49625750 -1.000395775 -0.000135393
49750500 -1.000395656 -0.000129219
49875250 -1.000393629 -0.000117017
50000000 -1.000389934 -0.000100459

After switching to MA mode (the file stayed in RI mode, but the data changed):

49002000 -0.999972800 0.000051251
49126750 -0.999974336 0.000058021
49251500 -0.999975360 0.000064329
49376250 -0.999976000 0.000067545
49501000 -0.999977344 0.000067800
49625750 -0.999980672 0.000067987
49750500 -0.999986496 0.000071876
49875250 -0.999994304 0.000081417
50000000 -1.000003099 0.000095119

These are excerpts of a 401-point file from 0.1 to 50 MHz. At lower frequencies the last data set is beyond -1 like the second one.

Observations:
1. |S11| > 1 after calibration.
2. MA mode leaves the file in RI mode but affects the data.

Brian
Brian,
As a reference point, when calibrated, the HP 8753D has a linear-magnitude reflection uncertainty of 0.015 or so when the reflection magnitude is near 1. I am not sure if that means the repeatability is limited to that, or if that is due to imperfections of the standards used. (Source: Quick Reference Guide, 08753-90259, page 7-5)

--John Gord
Thanks to Dave W0LEV and others for bringing up the subject of error analysis and significant figures.
My own background is like Dave's; a physics degree in 1965. Our professor in senior labs was very big on error analysis. It's a hard subject. I asked at our local hospital regarding uncertainty and error in lab tests. I never got a good answer.

A friend of mine, an engineering professor, made a joke: "If you want to be absolutely certain about a measurement, only measure once." Do I need to explain?

Chuck KF8TI
Chuck,
Take a read of Bob Witte's measurement equipment books, written when he was at HP. Joe Carr's tests and measurements book is another one rendering a cogent treatment of measurement errors without going very deep into the statistical deep end. This work is based upon formal true-score theory, the central issue of which is whether the observed score's errors are correlated with the true score and/or the observed score.

73,
Frank K4FMH
Frank:
Thanks for the tip about Bob Witte's books.

Chuck KF8TI