Since we can't yet measure the distance to even relatively close stars with complete accuracy, isn't it possible for errors to accumulate in the measurements when the parameters are initially inaccurate? The difference may not be large in percentage terms, but it could be enough to cause erroneous conclusions.
That's a reasonable question. The accuracy of distances improves as more and more independent lines of evidence are brought in. A nearby Cepheid of a known type will give an "accurate" value. The reason Cepheids are known to be useful for distances is due originally to Henrietta Swan Leavitt, who studied thousands of variable stars. By using the ones in the Small Magellanic Cloud, all at essentially the same (though initially unknown) distance, she discovered that certain ones (Cepheids) show a fixed relationship between their luminosity and the period of their varying brightness.
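To make that concrete, here's a rough sketch in Python (the coefficients are ballpark illustrative values, not the calibration any real survey uses): the period-luminosity (Leavitt) law gives the star's absolute magnitude, and the distance modulus turns that plus the apparent magnitude into a distance.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    # Leavitt law: absolute magnitude from pulsation period.
    # The coefficients below are rough illustrative values, not a real calibration.
    absolute_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    distance_modulus = apparent_mag - absolute_mag
    return 10.0 ** (distance_modulus / 5.0 + 1.0)

# A hypothetical 10-day Cepheid observed at apparent magnitude 14:
print(cepheid_distance_pc(10.0, 14.0))  # roughly 40,000 parsecs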
With modern space telescopes, parallax measurements have greatly improved the "distance ladder" for the closer stars. These improved measurements, in turn, tighten the Cepheid calibration as well.
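Parallax itself is the simplest rung on the ladder: the distance in parsecs is just the reciprocal of the parallax angle in arcseconds. A trivial sketch (the numbers are hypothetical):

```python
def parallax_distance_pc(parallax_mas):
    # Distance in parsecs is the reciprocal of the parallax in arcseconds;
    # input here is in milliarcseconds for convenience.
    return 1000.0 / parallax_mas

# A star with a 10 milliarcsecond parallax sits at 100 parsecs:
print(parallax_distance_pc(10.0))  # 100.0
```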
For distant galaxies, there are several methods in use, which helps produce a confluence of the same, or close, results. Galaxies of a certain type and size have about the same intrinsic brightness. Astronomers know this assumption could be fooling them, but now that millions of galaxies are cataloged, they can be put to much better use for distances.
Type Ia supernovae are considered reliable for the very distant galaxies, which, of course, helps confirm, or improve, the accuracy of the other galactic methods. And vice versa, no doubt. But how accurate they are is still a question, IMO. There are debates on just how accurate they are, and whether or not they are all the same type of explosion. The more they get studied, the better the accuracy. This may have something to do with the "paradox" behind the Hubble tension.
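Just to show why the distance accuracy matters for the Hubble tension, here's a toy sketch with made-up numbers (low-redshift approximation only, nothing like a real analysis): any percentage error in the supernova distances feeds directly into the inferred value of H0.

```python
C_KM_S = 299792.458  # speed of light, km/s

def hubble_constant(redshift, distance_mpc):
    # Low-redshift approximation: recession velocity ~ c*z, so H0 ~ c*z / d.
    return C_KM_S * redshift / distance_mpc

z = 0.05
d_true = 214.1                            # Mpc, chosen so the "true" H0 comes out near 70
print(hubble_constant(z, d_true))         # ~70 km/s/Mpc
print(hubble_constant(z, d_true * 0.95))  # ~73.7 -- a 5% distance error shifts H0 by ~5%
```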
So it's fair to say, IMO, that a grain of salt is needed whenever we claim a particular distance. But these grains are far smaller than they used to be. You will find astronomers are careful in determining their margin of error, based on standard deviation analysis.
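For what it's worth, the usual way independent estimates with quoted error bars get combined is an inverse-variance weighted mean. Here's a minimal sketch with hypothetical numbers, not any survey's actual pipeline:

```python
def weighted_mean(values, sigmas):
    # Inverse-variance weighting: more precise measurements count for more,
    # and the combined error bar shrinks as independent estimates are added.
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# Three hypothetical distance estimates (Mpc) to the same galaxy, with 1-sigma errors:
distances = [16.2, 15.8, 16.5]
errors = [0.8, 0.5, 1.0]
print(weighted_mean(distances, errors))  # ~(16.0, 0.39): tighter than any single estimate
```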
What makes any scientific hypothesis or theory appear solid is how well it holds up to scrutiny. No theory is provable, but they must, by requirement, be falsifiable, else they're just suppositions. Subjective opinions can be helpful, but objective evidence is required as the basis of the theory itself, and objective evidence must also be what is later found when testing the theory's predictions.
When the priest Georges Lemaître introduced what we now call the BBT, he based it on Einstein's theory, which he had studied under Eddington and at MIT while earning his doctorate in physics, but it was also based on objective observations from two astronomers: Vesto Slipher, who discovered the first redshift values for nebulae (galaxies), and Hubble, who had the earliest galaxy distance measurements (using the wrong Cepheids for some). Lemaître and Einstein knew each other, at conferences at least, and Einstein called his theory fine for the math, but "abominable" for its physics. To him, and to mainstream science, the universe was static.
My point is that the BBT had to prove itself, as it was not welcomed by the scientific community initially. Fortunately, Eddington and de Sitter and others soon realized how important it was. Subsequent observations have helped it greatly, but it didn't come out on top until the predicted cosmic microwave background was discovered. That sealed the deal.