Re: Ford's answers to the NHTSA 6.7 Investigation

I would question it too. There are two main factors that influence the results of a simulation: 1) the quality of the model, and 2) the quality of the inputs. For many years hardware was a limiting factor, so the models were dumbed down to run in a reasonable time. Then along came the US DOE and its focus on simulation for nuclear degradation testing, and the field of high-performance computing was born. Sometime around the late 90s or early 2000s, computing capacity became so powerful that inaccuracies in what were previously believed to be accurate models became apparent. Couple those issues with the quality of the input data, and you quickly realize it's easy for what gets simulated to become completely disconnected from what was intended to be simulated. Many of these models have hundreds or thousands of free parameters, and if any one of them is off it can completely change the results of the simulation. Further, you have timing and synchronization issues with distributed systems. The picture gets pretty muddy. In my view, simulation is a good tool to use as reinforcement for old-fashioned bench work, not a replacement.

Re: Ford's answers to the NHTSA 6.7 Investigation

Now you're getting into my area of expertise. When it comes to debugging code, you're absolutely right. There's no single person who can do it all, and we're already at the point where even large teams are stressed. As this problem becomes increasingly pervasive in design, new tools and practices are developed to aid in the debugging process. We have automated theorem provers to help with software checking, and as a technology they are becoming increasingly useful. They do require a certain amount of expertise to use properly, and the "automated" in their name is a bit of a misnomer, as they have to be used in partnership with a human software tester. But they are still a relatively young technology that I believe will become increasingly helpful as software systems continue to mature.

I'd imagine there's a similar phenomenon taking place in mechanical design. Software simulation is still a relatively young technology in the grand scheme of things. The scientific and engineering communities are still in their infancy in terms of knowledge about how best to implement, validate, and interpret simulation systems for design. There is always going to be a ladder process when it comes to innovation, not just in design but in the process of design. As new tools and techniques come out that enable something to be designed that wasn't feasible or economical before, new ways for these designs to fail will become apparent. In turn, the design tools and techniques will be refined to account for those failures in future designs. Then new flaws will be uncovered. And so it goes, just like an arms race, each side leveraging the insights of the other to make progress. So I don't think we'll ever hit a maximum; each time we're nearing our capacity, someone comes along and develops a tool or technique that increases it. It just sucks when you're at the bleeding edge and are the one paying the piper when those limits are identified empirically.

Re: Ford's answers to the NHTSA 6.7 Investigation

Ricatic: You've mentioned a few times now that the rate of failure in Canada is significantly different from the rate in the US.
When I munged the numbers using the NHTSA document from Ford, they seemed to be essentially the same: 0.6 failures per thousand vehicles in the U.S. vs. 0.4 failures per thousand vehicles in Canada. I'll admit that 0.6 is 150% of 0.4, and that I didn't run a Student's t-test to verify the significance of the difference. Are those the numbers you are referring to?
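For what it's worth, here is a minimal sketch of how that significance check could be run. Since these are failure counts out of whole fleets rather than two samples of measurements, a two-proportion z-test is a better fit than a Student's t-test. The fleet sizes below are made-up placeholders, not figures from the NHTSA document or Ford's response; the real U.S. and Canadian vehicle counts would have to be plugged in for the result to mean anything.

```python
# Sketch: two-proportion z-test on the 0.6 vs. 0.4 failures-per-thousand rates.
# The fleet sizes are HYPOTHETICAL placeholders, not numbers from the NHTSA/Ford
# document -- substitute the actual vehicle counts before drawing conclusions.
from math import sqrt, erf

def two_proportion_ztest(fail_a, n_a, fail_b, n_b):
    """Return (z, two-sided p-value) for H0: both failure rates are equal."""
    p_a, p_b = fail_a / n_a, fail_b / n_b
    p_pool = (fail_a + fail_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical example: 0.6 per thousand on 500,000 U.S. trucks vs.
# 0.4 per thousand on 50,000 Canadian trucks (placeholder fleet sizes).
us_failures, us_fleet = 300, 500_000
ca_failures, ca_fleet = 20, 50_000

z, p = two_proportion_ztest(us_failures, us_fleet, ca_failures, ca_fleet)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point of the exercise is that whether 0.6 vs. 0.4 per thousand counts as statistically significant depends entirely on how many vehicles sit behind each rate: with small fleets the difference washes out, while with very large fleets even that gap can clear the usual significance threshold.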