Accuracy, Convergence and Mesh Quality
Posted on July 5, 2012 by John Chawner

That statement is counter to what we all know to be true in practice, that a good mesh helps the computational fluid dynamics (CFD) solver converge to the correct answer while minimizing the computer resources expended. Stated differently, most every decent solver will yield an accurate answer with a good mesh, but it takes the most robust of solvers to get an answer on a bad mesh.

The crux of the issue is what precisely is meant by "a good mesh." Syracuse University's Prof. John Dannenhoffer points out that we are much better at identifying a bad mesh than we are at judging a good one. Distinguishing good from bad is clouded by the fact that badness is a black-white determination of whether the mesh will run or not. (Badness often only means whether there are any negative volume cells.) On the other hand, goodness is all shades of gray: there are good meshes and there are better meshes.

Neither is goodness all about the mesh. Gone are the days when one could eyeball the mesh and make a good/bad judgment. Adaptive meshes that are justified by visual inspection of how much thinner shock waves are in a contour plot of density just do not make the grade. What matters is how accurately the CFD solution reflects reality. Therefore, the solver's numerical algorithm and the physics of the flow to be computed also have to be accounted for in the evaluation of a mesh.

Implicit in the paragraphs above is the idea of judging mesh quality in advance of computing the CFD solution. There are those who think that a priori mesh quality assessment is of limited value and that changing the mesh in response to the developing flow solution (via mesh adaption or adjoint methods or other technology) is the better way to generate a good mesh and an accurate solution.

Mesh Quality Workshop

Given this state of affairs, it was important to assemble mesh generation researchers and practitioners to assess the topic of mesh quality. Pointwise participated in the Mesh Quality/Resolution, Practice, Current Research, and Future Directions Workshop, held last summer in Dayton, hosted by the DoD High Performance Computing Modernization Program (HPCMO) and organized by the PETTT Program (User Productivity Enhancement, Technology Transfer and Training) and AIAA's MVCE Technical Committee (Meshing, Visualization, and Computational Environments).

The workshop brought together all the stakeholders of mesh quality: CFD practitioners, CFD researchers, CFD solver code developers (both commercial and government) and mesh generation software developers. A list of the workshop presentations is included at the end of this article (References 1a-1i). Hugh Thornburg from High Performance Technologies wrote an overview of the workshop (Reference 2) that nicely sums up the current state of affairs: "A mesh as an intermediate product has no inherent requirements and only needs to be sufficient to facilitate the prediction of the desired result." I interpret this as the double-negative quality judgment that the grid is not bad. "The mesh must capture the system/problem of interest in a discrete manner with sufficient detail to enable the desired simulation to be performed." As long as "desired simulation" implicitly includes "to a desired level of accuracy," this is a good definition.
Thornburg also acknowledges many practical constraints on mesh generation such as time allotted for meshing, topology issues for parametric studies, limits on mesh size due to computational resources, and solver-specific requirements. He also offers Stimpson's Verdict library (Reference 3) as a de facto reference that covers most if not all commonly used techniques for computing element properties.

User's Perspective

The importance of a priori indicators of mesh quality is exemplified by NASA's Stephen Alter, who defined and demonstrated the utility of his GQ (grid quality) metric, which combines both orthogonality and stretching into a single number. Driven by the desire to ensure the accuracy of supersonic flow solutions over blunt bodies computed using a thin layer Navier-Stokes (TLNS) solver, he has established criteria for the GQ metric that give him confidence prior to starting a CFD solution.

Two aspects of GQ are notable. First, this metric's reliance on orthogonality is closely coupled to the numerics of the solver: TLNS assumptions break down when the grid lacks orthogonality. Second, use of a global metric aids decision making, or as Thornburg wrote, "A local error estimate is of little use." GQ represents domain expertise: the use of specific criteria within a specific application domain.

Researcher's Perspective

Dannenhoffer reported on an extensive benchmark study that involved parametric variation of a structured grid's quality for a 5 degree double-wedge airfoil in Mach 2 inviscid flow at 3 degrees angle of attack. Variations of the mesh included resolution, aspect ratio, clustering, skew, taper, and wiggle (using the Verdict definitions).

Dannenhoffer's main conclusion was very interesting: there was little (if any) correlation between the grid metrics and solution accuracy. This may have been exacerbated by the fact that he found it difficult to change one metric without influencing another (e.g. adding wiggle to the mesh also affected skew), or it may have been due to the specific flow conditions.

Dannenhoffer also introduced the concept of grid validity (as opposed to grid quality), which is intended to measure whether the grid conforms to the configuration being modeled (which in practice it sometimes does not). He proposed three types of validity checks (a sketch of the first appears after the list):

1. Type 1 checks whether cells have positive volumes and faces that do not intersect each other. Here again is an instance of the "Is this grid bad?" question.
2. Type 2 checks whether interior cell faces match uniquely with one other interior face and whether boundary cell faces lie on the geometry model of the object being meshed.
3. Type 3 checks whether each surface of the geometry model is completely covered by boundary cell faces, whether each hard edge of the geometry is covered by edges of boundary cell faces, and whether the sum of the boundary face areas matches the actual geometry surface area.
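The Type 1 check is essentially the negative-volume test mentioned at the top of this article. Below is a minimal sketch of what the positive-volume part of such a check might look like for a tetrahedral mesh; the function and array names are hypothetical (not from Dannenhoffer's presentation), and the face-intersection test and the Type 2 and 3 checks are omitted.

```python
import numpy as np

def signed_tet_volume(a, b, c, d):
    # Signed volume of tetrahedron (a, b, c, d); positive when the nodes
    # follow a consistent right-handed ordering convention.
    return np.linalg.det(np.column_stack((b - a, c - a, d - a))) / 6.0

def type1_negative_volume_check(nodes, tets):
    # Return the indices (and volumes) of cells that fail the
    # "positive volume" part of a Type 1 validity check.
    bad = []
    for i, (i0, i1, i2, i3) in enumerate(tets):
        vol = signed_tet_volume(nodes[i0], nodes[i1], nodes[i2], nodes[i3])
        if vol <= 0.0:
            bad.append((i, vol))
    return bad

# Two cells sharing a face; the second is inverted (negative volume).
nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
tets = [(0, 1, 2, 3), (0, 1, 2, 4)]
print(type1_negative_volume_check(nodes, tets))  # -> [(1, -0.1666...)]
```

A production implementation would extend this to all element types (the Verdict library of Reference 3 covers the standard element-property formulas) and add the face-matching and geometry-coverage tests of Types 2 and 3.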

Figure 1: A simple demonstration of how a poor mesh from a cell geometry perspective (right) results in lower discretization error than one with perfect cells (left). From Reference 1c.

Prof. Christopher Roy from Virginia Tech showed a counter-intuitive example (at least from the standpoint of a priori metrics) in which the solution of the 2D Burgers equation on an adapted mesh (with cells of widely varying skew, aspect ratio, and other metrics) has much less discretization error than the solution on a mesh of perfect squares. From this example alone, it is clear that metrics based solely on cell geometry are not good indicators of mesh quality as it pertains to solution accuracy.

Solver's Perspective

The workshop was fortunate to have the participation of several flow solver developers, who shared details about how their solvers are affected by mesh quality. The common thread among all was that convergence and stability are more directly affected by mesh quality than solution accuracy.

CFD++

Metacomp Technologies' Vinit Gupta cited cell skewness and cell size variation as two quality issues to be aware of for structured grids. In particular, grid refinement across block boundaries in the far field where gradients are low has a strong, negative impact on convergence. For unstructured and hybrid meshes, anisotropic tets in the boundary layer and the transition from prisms to tets outside the boundary layer also can be problematic.

Gupta also pointed out two problems associated with metric computations. Cell volume computations that rely on a decomposition of a cell into tets are not unique and depend on the manner of decomposition. Therefore, volume (or any measure that relies on volume) reported by one program may differ from that reported by another. Similarly, face normal computations for anything but a triangle are not unique and also may differ from program to program. (This is a scenario we have often encountered at Pointwise when a disagreement with a solver vendor over a cell's volume turns out to be the result of different computation methods.) A short numerical demonstration of the decomposition effect appears after the Kestrel section below.

Fluent and CFX

ANSYS' Konstantine Kourbatski showed how cell shapes that differ from perfect (measured by the dot product of the face normal vector with the vector connecting adjacent cell centers) make the system of equations stiffer, slowing convergence. He then introduced metrics (Orthogonal Quality and two skewness definitions) with rules of thumb for the Fluent solver. It was interesting to note that the orthogonality measure ranges from 0 (bad) to 1 (good) whereas the skewness metric is directly opposite: 0 is good and 1 is bad. Another example of a metric criterion was that aspect ratios should be kept to less than 5 in the bulk flow. Kourbatski also provided guidelines for the CFX solver.

He also pointed out that resolution of critical flow features (e.g. shear layers, shock waves) is vital to an accurate solution and that bad cells in benign flow regions usually do not have a significant effect on the solution.

Kestrel

Kestrel, the CFD solver from the CREATE-AV program, was represented by David McDaniel from the University of Alabama at Birmingham. At the start, he made two important statements. First, their goal is to "do well with the mesh given to us." (This is similar to Pointwise's approach to dealing with CAD geometry: do the absolute best with the geometry provided.) Second, he noted that mixed-element unstructured meshes (their primary type) are terrible according to traditional mesh metrics, despite being known to yield accurate results.
This same observation is true for adaptive meshes and meshes distorted by the relative motion of bodies within a mesh (e.g. flaps deflecting, stores dropping).

More significantly, McDaniel noted a scary interdependence between solver discretization and mesh geometry by recalling Mavriplis' paper on the drag prediction workshop (Reference 4), in which two extremely similar meshes yielded vastly different results with multiple solvers.

To address mesh quality, Kestrel's developers have implemented non-dimensional quality metrics that are both local and global and that are consistent in the sense that 0 always means bad and 1 always means good. The metrics important to Kestrel are an area-weighted measure of quad face planarity, an interesting measure of flow alignment with the nearest solid boundary, a least squares gradient measure that accounts for the orientation and proximity of neighbor cell centroids, smoothness, spacing, and isotropy.
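As promised above, here is a small demonstration of Gupta's point about cell volume computations. It is a minimal sketch with made-up node coordinates (none of this comes from the workshop presentations): the same hexahedral cell, with one corner node perturbed so the faces meeting it are non-planar, is decomposed into five tetrahedra in two valid ways, and the two total volumes differ because each decomposition implicitly triangulates the warped quad faces along different diagonals.

```python
import numpy as np

def tet_volume(a, b, c, d):
    # Unsigned volume of tetrahedron (a, b, c, d).
    return abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0

# Hexahedron with nodes 0-3 on the bottom and 4-7 on top; node 6 is
# raised, so the three quad faces that touch it are not planar.
hexa = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.2], [0.0, 1.0, 1.0]])

# Two valid five-tet decompositions of the same hex. Each one implicitly
# triangulates the warped quad faces along different diagonals, so the
# two enclosed regions (and volumes) are not identical.
decomp_a = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6), (3, 4, 6, 7), (1, 3, 4, 6)]
decomp_b = [(0, 1, 2, 5), (0, 2, 3, 7), (0, 4, 5, 7), (2, 5, 6, 7), (0, 2, 5, 7)]

for name, decomp in (("A", decomp_a), ("B", decomp_b)):
    vol = sum(tet_volume(*hexa[list(t)]) for t in decomp)
    print(f"decomposition {name}: volume = {vol:.4f}")
# Prints 1.0667 and 1.0333: the "same" cell, two different volumes.
```

This is exactly why two programs can report different volumes (or different signs) for the same cell, as described in the CFD++ discussion above.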

Figure 2: Using Kestrel one can show a correlation between mesh and solution quality. From Reference 1f.

Differing from Dannenhoffer's result, McDaniel showed a correlation of mesh quality with solution accuracy, with the caveat that a well resolved mesh can have poor quality and still produce a good answer. (In other words, more points is always better.)

STAR-CCM+

Alan Mueller's presentation on CD-adapco's STAR-CCM+ solver began by pointing out that mesh quality begins with CAD geometry quality and manifests as either a low quality surface mesh or an inaccurate representation of the true shape. This echoes Dannenhoffer's grid validity idea.

After introducing a list of their quality metrics, Mueller made the following statement: "Results on less than perfect meshes are essentially the same (drag and lift) as on meshes where considerable resources were spent to eliminate the poor cells in the mesh." Here we note that the objective functions are integrated quantities (drag and lift) instead of distributed data like pressure profiles. After all, integrated quantities are the type of engineering data we want to get from CFD.

This insensitivity of accuracy to mesh quality supports Mueller's position that poor cell quality is a stability issue. Accordingly, the approach with STAR-CCM+ is to be conservative: opt for robustness over accuracy. Specifically, they are looking for metrics that flag cells that will result in division by zero in the solver. Skewness as it affects the diffusion flux and linearization is one such example (a schematic sketch of this appears below).

Mesher's Perspective

Dr. John Steinbrenner and Nick Wyman shared Pointwise's perspective on solution-independent quality metrics by taking a counter-intuitive approach. You would think that a mesh generation developer would promote the efficacy of a priori metrics. But the error in a CFD solution consists of geometric errors, discretization errors, and modeling errors. Geometric errors are similar to the points made by Dannenhoffer and Mueller about properly representing the shape. Modeling errors come from turbulence, chemical, and thermophysical properties. Discretization errors involve degradation of the solver's numerics, and the discretization error is driven by coupling between the mesh and the solver's numerical algorithm.
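To make that mesh/numerics coupling concrete, here is a schematic sketch (written in generic finite-volume terms, not the actual formulation of Fluent, CFX, or STAR-CCM+) of how the dot product Kourbatski described, between a face normal and the line connecting adjacent cell centers, enters a diffusion flux, and why Mueller's division-by-zero concern arises as cells become highly skewed. The function names and the exact form of the flux split are illustrative assumptions.

```python
import numpy as np

def orthogonal_quality(face_normal, centroid_p, centroid_n):
    # Cosine of the angle between the face normal and the line joining the
    # two adjacent cell centroids: 1 = perfectly orthogonal, 0 = degenerate.
    n_hat = face_normal / np.linalg.norm(face_normal)
    d = centroid_n - centroid_p
    return abs(float(np.dot(n_hat, d / np.linalg.norm(d))))

def orthogonal_flux_coefficient(face_area, face_normal, centroid_p, centroid_n):
    # In a typical finite-volume diffusion discretization, the "orthogonal"
    # part of the face flux is proportional to A * (phi_N - phi_P) / (d . n_hat).
    # As the cells become less orthogonal, d . n_hat -> 0 and this coefficient
    # blows up: the division-by-zero hazard the solver developers described.
    n_hat = face_normal / np.linalg.norm(face_normal)
    d = centroid_n - centroid_p
    return face_area / float(np.dot(d, n_hat))

face_area = 1.0
face_normal = np.array([1.0, 0.0, 0.0])
centroid_p = np.array([0.0, 0.0, 0.0])

# Tilt the centroid-to-centroid line away from the face normal while keeping
# the centroid spacing fixed, mimicking increasingly skewed cells.
for angle_deg in (0.0, 30.0, 60.0, 80.0, 89.0):
    a = np.radians(angle_deg)
    centroid_n = centroid_p + np.array([np.cos(a), np.sin(a), 0.0])
    oq = orthogonal_quality(face_normal, centroid_p, centroid_n)
    coeff = orthogonal_flux_coefficient(face_area, face_normal, centroid_p, centroid_n)
    print(f"angle = {angle_deg:4.0f} deg  orthogonal quality = {oq:.3f}  "
          f"flux coefficient = {coeff:7.2f}")
# As the quality falls toward 0 the coefficient grows without bound.
```

Note that, like the Fluent measure described earlier, this quality value runs from 0 (bad) to 1 (good); the instability shows up long before the coefficient actually divides by zero.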

Figure 3: This table summarizes the mesh quality metrics available in Pointwise. From Reference 1h.

Therefore, although Pointwise can compute and display many metrics, it is important to note that many of them lack a direct relationship to the solver's numerics and accordingly are only loose indicators of solution accuracy. On the other hand, these metrics are convenient to compute, can address Dannenhoffer's grid validity issue, and provide a mechanism for launching mesh improvement techniques. They also form the basis of a user's ability to develop domain expertise metrics that correlate to their specific application domain.

Conclusions

1. CFD solver developers believe mesh quality affects convergence much more than accuracy. Therefore, the solution error due to poor or incomplete convergence cannot be ignored.
2. One researcher was able to show a complete lack of correlation between mesh quality and solution accuracy. It would be valuable to reproduce this result for other solvers and flow conditions.
3. Use as many grid points as possible (Dannenhoffer, McDaniel). In many cases, resolution trumps quality. However, the practical matter of minimizing compute time by using the minimum number of points (what Thornburg called an "optimum mesh") means that quality still will be important.
4. A priori metrics are valuable to users as an effective confidence check prior to running the solver. It is important that these metrics account not only for cell geometry but also for the solver's numerical algorithm. The implication is that metrics are solver-dependent. A further implication is that Dannenhoffer's grid validity checks should be implemented.
5. There are numerous quality metrics that can be computed, but they are often computed inconsistently from program to program. Development of a common vocabulary for metrics would aid portability.
6. Interpreting metrics can be difficult because their actual numerical values are non-intuitive and stymie development of domain expertise. A metric vocabulary should account for the desired range of numerical values and the meaning of bad and good.

References

1. Workshop presentations:
   a. Stephen Alter, NASA Langley, "A Structured-Grid Quality Measure"
   b. John Dannenhoffer, Syracuse University, "On Grid Quality and Validity"
   c. Christopher Roy, Virginia Tech, "Discretization Error"
   d. Vinit Gupta, Metacomp Technologies, "CFD++ Perspective on Mesh Quality"
   e. Konstantine Kourbatski, ANSYS, "Assessment of Mesh Quality in ANSYS CFD"
   f. David McDaniel, University of Alabama at Birmingham, "Kestrel/CREATE-AV Perspective on Mesh Quality"
   g. Alan Mueller, CD-adapco, "A CD-adapco Perspective on Mesh Quality"
   h. John Steinbrenner and Nick Wyman, Pointwise, "Solution Independent Metrics"
   i. Presentations from the Mesh Quality Workshop are available by email request to [email protected]
2. Thornburg, Hugh J., "Overview of the PETTT Workshop on Mesh Quality/Resolution, Practice, Current Research, and Future Directions," AIAA paper no. 2012-0606, Jan. 2012.
3. Stimpson, C.J. et al., "The Verdict Geometric Quality Library," Sandia Report 2007-1751, 2007.
4. Mavriplis, Dimitri J., "Grid Quality and Resolution Issues from the Drag Prediction Workshop Series," AIAA paper no. 2008-930, Jan. 2008.
5. Roache, P.J., "Quantification of Uncertainty in Computational Fluid Dynamics," Annual Review of Fluid Mechanics, Vol. 29, 1997, pp. 123-160.
6. Knupp, Patrick M., "Remarks on Mesh Quality," AIAA, Jan. 2007.