NAFEMS’ top trainer gives advice for all CAE users.
Tony Abbey remembers the struggles of learning simulation analysis back when he started in 1976. That's one reason he has been the training manager at NAFEMS since 2007: it lets him ease the learning curve for the next generation of simulation experts. But Abbey's experience doesn't end with NAFEMS' in-person and online courses. He has also worked in the aircraft industry and for simulation software companies such as MSC and NEi Software.
Along the way, Abbey has seen many simulations and learned a lot about what can go wrong and how to avoid it. Engineering.com sat down with him to learn the top five dos and don'ts he teaches his students. Here is what we learned.
Do understand the fundamental physics of the problem.
Abbey’s first, and perhaps most important, suggestion is to understand the underlying physics of any problem you need to simulate. He says, “Simulation is all about trying to [model] the real world, so you have to have some idea of what’s involved in that real world application. [With] physics, if you don’t have a grasp, it can lead you in the wrong direction.”
For instance, linearity is a simplification often applied to make simulations easier to set up and less computationally expensive. Abbey notes that the real world is highly nonlinear, and assuming otherwise can lead to nonsensical results. If a model is limited to assessing only linear behavior, it will stop correlating with the real world once the system moves past the linear region. When that happens, it's best to switch to a new model.
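As a toy illustration of why a linear model stops correlating past its valid range, the sketch below compares a linear elastic model with a simple bilinear (elastic-plastic) one for a bar in tension. All material values here are assumptions for illustration, not from Abbey's examples.

```python
# Toy illustration (hypothetical material values): a linear model agrees
# with a bilinear elastic-plastic model up to yield, then diverges badly.

E = 200e9       # Young's modulus, Pa (typical steel value, assumed)
YIELD = 250e6   # yield stress, Pa (assumed)
H = 2e9         # post-yield hardening modulus, Pa (assumed)

def linear_stress(strain):
    """Linear model: stress grows without bound as strain increases."""
    return E * strain

def bilinear_stress(strain):
    """Bilinear model: elastic up to yield, then a shallow hardening slope."""
    yield_strain = YIELD / E
    if strain <= yield_strain:
        return E * strain
    return YIELD + H * (strain - yield_strain)

for strain in (0.0005, 0.00125, 0.005):  # below, at, and beyond yield
    lin, bil = linear_stress(strain), bilinear_stress(strain)
    print(f"strain={strain:.5f}  linear={lin / 1e6:7.1f} MPa  "
          f"bilinear={bil / 1e6:7.1f} MPa")
```

Below yield the two models match; well past yield the linear prediction is several times the more realistic one, which is exactly the kind of nonsensical result Abbey warns about.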
Abbey isn’t against simplifying models. In fact, it’s often a requirement to make a simulation. “You try to simplify the real world, [because] you can’t take it all on board,” said Abbey. “The level you simplify … due to the scope of the analysis and computational capabilities, will affect [your results]. [You] need to understand the limitations of the squeezed-down view of the world and [the] big problems [that can arise] if you assume the wrong things.”
For example, Abbey once worked with a consultant to model the heat flow of a cooling fan. One individual tried to connect two simulation models into a co-simulation without taking into consideration how the system would work physically or attempting to baseline the results. “It was scary,” Abbey said. “Lots of pretty pictures, but it wasn’t based in reality.”
Do what you can to keep computations to a minimum.
With development cycles shrinking, engineers need to quickly iterate their designs to optimize products in time for launch. This entails simplifying simulations for speed. But Abbey notes that you must simplify knowingly. “You can’t just cut off a chunk of an analysis,” he says. “You break it down to the basic physics. Then you can start to explore [how it can be simplified].”
At the same time, “you don’t want to be the analyst at the meeting saying your model needs two more days to compute when you are asked for answers now,” says Abbey. “If they want answers within two to three days you can have a [simplified] model to build up information and then run the big complex model in parallel to get a general sense when asked at the meeting.”
It’s a balancing act. You need to maximize the quality of the results based on the computational resources available and the turnaround time needed for the simulation. In other words, according to Abbey, “if you commit to an overcomplicated model, you can risk the project.”
One simplification method Abbey suggests is to assess the mesh. Start by guessing a mesh density (regionally or globally) based on the available computational resources. Then mesh the model and see how those resources hold up when running the simulation. Refine the mesh density accordingly until you find "your threshold of pain," he says. "Maybe you want results in a minute for a sanity check and [then] go on to the next design. For analysts, maybe it's 30 minutes. Determine what was the design you were able to squeeze through that computational resource. Then you can use this as a benchmark based on if you want to get an answer for a given timeframe."
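The refine-until-it-hurts loop Abbey describes can be sketched as follows. Everything here is an assumption: the real solve is stubbed out with a rough cost model (solve time growing with element count to the 1.5 power, a loose approximation for sparse direct solvers), since actual timings depend entirely on your solver and hardware.

```python
def solve_time_estimate(n_elements, seconds_per_kelem=0.002):
    """Stub for a real solve: assume cost grows ~ n^1.5.
    Purely illustrative; replace with measured timings in practice."""
    return seconds_per_kelem * (n_elements / 1000) ** 1.5

def finest_mesh_within_budget(start_elements, budget_seconds, growth=2.0):
    """Keep increasing mesh density until the next refinement would
    exceed the 'threshold of pain', then return the last affordable mesh."""
    n = start_elements
    while solve_time_estimate(n * growth) <= budget_seconds:
        n = int(n * growth)
    return n, solve_time_estimate(n)

# One-minute sanity-check budget vs. an analyst's 30-minute budget.
quick_n, quick_t = finest_mesh_within_budget(10_000, budget_seconds=60)
deep_n, deep_t = finest_mesh_within_budget(10_000, budget_seconds=1800)
print(f"quick-check mesh: {quick_n:,} elements (~{quick_t:.0f} s per solve)")
print(f"detailed mesh:    {deep_n:,} elements (~{deep_t:.0f} s per solve)")
```

The returned mesh sizes then serve as the benchmark Abbey mentions: a coarse mesh for minute-scale design iterations, a finer one when you can afford a longer wait.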
Abbey clarifies that if you “need a quick assessment you do a quick and dirty model, and then for plasticity and fatigue you will look at a really refined mesh. Do the simplification with the benefit of why you care about the results.”
Do research the orders of magnitude to expect from inputs and outputs.
When setting up a physics problem it’s important to know and understand the inputs, such as the orientation of planes, levels of damping, forces in play and more. These values should all be based on your understanding of the physics within the problem. For the output, engineers also need ways to verify that the results they get make sense.
“You need to do sanity checks,” says Abbey. “If you [make] an FEA [simulation] you need to check the deflection based on real world [numbers] and engineering judgement. A sense of scale is needed.”
He continued with an example: when performing a linear stress analysis to determine the maximum stress, if the value you get is three times the yield stress, then either an input was wrong or something unusual is going on. "Drill down and see [if] the numbers look crazy" is a form of verification that must be performed.
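The "do the numbers look crazy" check can be automated as a simple post-processing guard. This is a hypothetical sketch: the ratio thresholds and yield value below are placeholders, and in practice they would come from your own material data and the test or field numbers discussed next.

```python
def stress_sanity_check(max_stress_pa, yield_stress_pa,
                        warn_ratio=1.0, alarm_ratio=3.0):
    """Flag linear-static stress results outside plausibility bounds.
    Ratios are illustrative placeholders, not standard values."""
    ratio = max_stress_pa / yield_stress_pa
    if ratio >= alarm_ratio:
        return "alarm", ratio  # likely a bad input or modeling error
    if ratio >= warn_ratio:
        return "warn", ratio   # past yield: a linear model is no longer valid
    return "ok", ratio

# Hypothetical result: a peak stress of 780 MPa against a 250 MPa yield.
status, r = stress_sanity_check(max_stress_pa=780e6, yield_stress_pa=250e6)
print(f"{status}: peak stress is {r:.1f}x yield")
```

A guard like this does not replace engineering judgement; it only ensures the drill-down Abbey recommends actually happens when a result lands far outside the expected order of magnitude.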
To get a sense for the numbers, Abbey notes that it’s important for engineers to understand test and field data. This may require getting to know the test, quality and operations teams to see what numbers they get in the field. This way, “anything that is unusual looking is a red flag,” says Abbey. “You won’t be doing a PhD thesis on everything you simulate. So, [typically] you are not going to be exploring new phenomena. [Coworkers] offer the best experience to head off these issues. So have a coffee or beer with the testers and designers.”
Read the rest of this story at ENGINEERING.com