Forum Discussion
raleighdogguy
Mar 13, 2012 · Explorer
I would question it too. There are two main factors that influence the results of a simulation: 1) the quality of the model, and 2) the quality of the inputs. For many years hardware was a limiting factor, so the models were dumbed down to run in a reasonable time. Then along came the US DOE and their focus on simulation for nuclear degradation testing, and the field of high performance computing was born. Sometime around the late 90s or early 2000s, computing capacity became powerful enough that inaccuracies in what were previously believed to be accurate models became apparent.
Couple those issues with the quality of the input data, and you realize very quickly that it's easy for what gets simulated to become completely disconnected from what you intend to simulate. Many of these models have hundreds or thousands of free parameters, and if any one of them is off it can completely change the results of the simulation. Further, you have timing and synchronization issues with distributed systems. The picture gets pretty muddy.
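To make the free-parameter point concrete, here's a toy sketch (mine, not from any real simulation code) using the logistic map, a classic example of a model where a tiny error in one parameter blows up the output:

```python
def simulate(growth_rate, steps=50, x0=0.2):
    """Iterate the logistic map x -> r * x * (1 - x); a stand-in
    for any model whose output depends on a free parameter."""
    x = x0
    for _ in range(steps):
        x = growth_rate * x * (1 - x)
    return x

# Two runs whose only difference is a 0.1% error in one parameter.
baseline = simulate(3.9)
perturbed = simulate(3.9 * 1.001)
print(f"baseline:  {baseline:.6f}")
print(f"perturbed: {perturbed:.6f}")  # diverges badly despite the tiny input error
```

After 50 steps the two runs bear no resemblance to each other. Now imagine that effect spread across hundreds or thousands of parameters, and you see why validating inputs matters as much as validating the model.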
In my view, simulation is a good tool to use as reinforcement for old-fashioned bench work, not a replacement.