As I said, not enough life testing has been carried out.
They can do as much projection forecasting as they like, but until the parts have actually been running for that many years, nobody knows what will happen to them.
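To make concrete what that projection forecasting typically looks like, here is a minimal sketch of an Arrhenius-style lifetime projection from an accelerated temperature test. All the numbers (activation energy, temperatures, test duration) are hypothetical illustration values, not from any real part or datasheet.

```python
# Sketch of an Arrhenius lifetime projection -- hypothetical numbers only.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between stress and use temperatures."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical test: parts survived 2000 h at 125 C stress; project field
# life at 55 C, assuming an activation energy of 0.7 eV.
af = acceleration_factor(ea_ev=0.7, t_use_c=55.0, t_stress_c=125.0)
projected_life_h = 2000.0 * af
print(f"Acceleration factor: {af:.1f}")
print(f"Projected field life: {projected_life_h:.0f} h "
      f"(~{projected_life_h / 8760:.0f} years)")
```

The 2000 hours of test time get multiplied into a decades-long claim, which is exactly the kind of result that only real years in the field can confirm.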
It is more a case of "wishful thinking" when interpreting the accelerated test data.
Does some (accelerated) test suggest a problem? "Bring me some proof it is really that bad (an explanation, ...). You don't have such proof, so you cannot rule out a test inaccuracy? Then don't delay the production start."
The problem is that out of the tons of tests run, there are very frequent occasions when the indirect test results look pessimistic but reality turns out just fine. So ignoring unfavorable results whenever they are somehow questionable (e.g. when they rely on long-stretch extrapolation, ...), instead of actually evaluating why the results came out the way they did (and dismissing them only once further analysis proves them really wrong), becomes the default way of doing business at many companies. Often it plays out OK, but often enough those test predictions turn out to be real problems.
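And it is easy to see why long-stretch extrapolation results are so easy to label "questionable": the projection is exponentially sensitive to the assumed model parameters. Reusing the same hypothetical test from the sketch above, a modest uncertainty in the activation energy alone swings the projected life by years in either direction:

```python
# Sensitivity of the (hypothetical) projection to the assumed activation
# energy -- same made-up test conditions as above: 2000 h at 125 C, use at 55 C.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
T_USE_K, T_STRESS_K = 55.0 + 273.15, 125.0 + 273.15

for ea_ev in (0.6, 0.7, 0.8):
    af = math.exp((ea_ev / K_BOLTZMANN_EV)
                  * (1.0 / T_USE_K - 1.0 / T_STRESS_K))
    years = 2000.0 * af / 8760.0
    print(f"Ea = {ea_ev:.1f} eV -> projected life ~ {years:.0f} years")
```

With these numbers the projection ranges from roughly 10 to over 30 years, so there is always room to argue the pessimistic reading away rather than dig into why the test showed what it showed.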