Once the modeling and implementation work is done, the most critical phase of code development stares at us. Yes, you guessed it right: the validation part !!
Most often, what is coded in the software (on a theoretical basis, involving detailed physical approximations of a global phenomenon) is tested directly against well-known "experiments" and "test cases". During testing (which is pretty much making sure the code does what it is supposed to) and later during validation (where the model is supposed to mimic a physical process), the developer/coder realizes that more approximations or model adjustments are necessary to satisfy the user!!
Stability, convergence, and accuracy are all dealt with in the validation process. The code is made to work on a variety of test cases, and the coder, who is the best critic of his own work, tries to break the code by providing a wide range of operating parameters. With every input, the developer monitors the behavior of the code: what happens when it confronts certain numerical deficiencies or challenges posed by the input.
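The "try to break the code with a wide range of parameters" idea can be automated as a simple parameter sweep. The sketch below is a hypothetical harness, not any particular solver: `run_solver` is a stand-in that fakes a residual and blows up above CFL = 1, just so the sweep has something to catch.

```python
# Hypothetical stress-test harness: sweep operating parameters and record
# whether each solver run converges. `run_solver` is a stand-in for the
# real code; its residual formula is made up for illustration.
import itertools
import math

def run_solver(reynolds, cfl, mesh_cells):
    """Stand-in solver: returns a fake final residual that diverges
    (returns infinity) whenever the CFL number is too aggressive."""
    if cfl >= 1.0:
        return float("inf")
    return 1e-3 * cfl * math.log10(reynolds) / math.sqrt(mesh_cells)

def sweep(reynolds_list, cfl_list, mesh_list, tol=1e-4):
    """Run the solver over every parameter combination and flag failures."""
    results = []
    for re_num, cfl, cells in itertools.product(reynolds_list, cfl_list, mesh_list):
        res = run_solver(re_num, cfl, cells)
        results.append({"Re": re_num, "CFL": cfl, "cells": cells,
                        "residual": res, "converged": res < tol})
    return results

report = sweep([1e3, 1e5], [0.5, 1.5], [10_000, 1_000_000])
for row in report:
    print(row)
```

In practice the interesting output is the boundary of the converged region: that is where the numerical deficiencies the text mentions show themselves.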
Every test case is unique in the sense that it offers a completely new perspective on the modeling process: what the model lacks, what additional effects need to be incorporated, and so on. It is not an easy process. No way. However, as a rule of thumb, modelers start by applying the code to smaller domains and escalate the size and complexity of the problem as they move into advanced testing/validation.
For example, once you have developed a "boiling model", it makes no sense to apply it directly to large-scale, highly complex channel flows when no tests have been run on smaller channels/ducts and compared with experiments. Sometimes, in fact many times I should say, applying the code across different scales reveals that some "problems" are masked by other parameters operating in the system. Only by running the procedure on a wide range of scenarios can one explicitly make sure all modeling constraints act as planned and that their interactions are stable.
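Comparing a small-scale run against an experiment usually boils down to interpolating the simulated profile onto the measurement locations and reporting error norms. Here is a minimal sketch of that step; all the profile and measurement values are made up for illustration.

```python
# Minimal sketch of comparing simulation output against experimental data.
# sim_x/sim_y mimic a simulated profile (e.g. velocity across a duct);
# exp_x/exp_y mimic sparse measured points. All numbers are invented.
import bisect

def interp(x, xs, ys):
    """Linear interpolation of the tabulated curve (xs, ys) at point x."""
    i = bisect.bisect_left(xs, x)
    i = min(max(i, 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def validation_metrics(sim_x, sim_y, exp_x, exp_y):
    """RMS and maximum deviation of the simulation at the measured points."""
    errors = [interp(x, sim_x, sim_y) - y for x, y in zip(exp_x, exp_y)]
    rms = (sum(e * e for e in errors) / len(errors)) ** 0.5
    max_err = max(abs(e) for e in errors)
    return rms, max_err

sim_x = [0.0, 0.25, 0.5, 0.75, 1.0]
sim_y = [0.0, 0.45, 0.52, 0.46, 0.0]   # simulated profile (made up)
exp_x = [0.1, 0.5, 0.9]
exp_y = [0.20, 0.50, 0.19]             # measured points (made up)

rms, max_err = validation_metrics(sim_x, sim_y, exp_x, exp_y)
print(f"RMS error = {rms:.3f}, max error = {max_err:.3f}")
```

Tracking these norms across scales is one way to see the masking effect described above: an error that stays small on one case can grow once a different parameter starts to dominate.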
A small discussion is available here: http://www.innovative-cfd.com/cfd-validation.html
This site shares some well-known validation resources:
NPARC Alliance CFD Verification and Validation Web Site: http://www.grc.nasa.gov/WWW/wind/valid/homepage.html
AIAA CFD Drag Prediction Workshop: http://aaac.larc.nasa.gov/tsab/cfdlarc/aiaa-dpw/
CFL3D Test/Validation Cases: http://cfl3d.larc.nasa.gov/Cfl3dv6/cfl3dv6_testcases.html
Sample validation/testing (experimental) study links:
1. CFD model validation for hydrogen dispersion: Nice study - http://www.gexcon.com/doc//PDF%20files/Middha_Hansen_Storvilk_HydrogenDispersion_09.pdf
2. Stirred vessel mixing test case study released by ANSYS: http://www.bakker.org/cfm/publications/tn253.pdf
3. Wind tunnel experiments and studies often form a large portion of the CFD testing and validation database: I came across one of these sites
4. How about a CFD evaluation study of wind tunnel flow quality itself !!?
5. One of the OpenFOAM-based wikis has a page for multiphase activities: http://openfoamwiki.net/index.php/Sig_Multiphase
6. I am no supporter of any particular book/reference... I recently had a chance to look at this book ("Validation of Advanced Computational Methods for Multiphase Flow", c. 2005). It is a nice collection of test cases (fairly well known to most of us).
I found this article some time ago on the "general best practices guide for numerical accuracy".
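One concrete piece of such numerical-accuracy guidance is the grid convergence study: run the same case on three systematically refined grids and estimate the observed order of accuracy and a grid convergence index (GCI), following Roache's widely used procedure. The sketch below implements those standard formulas; the three drag-coefficient values are invented for illustration.

```python
# Sketch of a grid convergence check (Roache's GCI procedure, as commonly
# recommended in CFD numerical-accuracy guides). f1/f2/f3 are a quantity
# of interest on fine/medium/coarse grids; the values here are made up.
import math

def gci(f1, f2, f3, r, Fs=1.25):
    """Observed order of accuracy p and fine-grid GCI for three solutions
    obtained with a constant grid refinement ratio r (Fs = safety factor)."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    e21 = abs((f2 - f1) / f1)            # relative change, fine vs medium
    return p, Fs * e21 / (r ** p - 1.0)

# Example: a drag coefficient computed on three grids refined by a factor of 2.
p, gci_fine = gci(f1=0.3402, f2=0.3428, f3=0.3530, r=2.0)
print(f"observed order p = {p:.2f}, GCI = {100 * gci_fine:.2f}%")
```

If the observed order p is close to the scheme's formal order and the GCI is small, the discretization error is under control; otherwise the grids are not yet in the asymptotic range and the validation comparison is premature.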
Coding and implementation is one part of the puzzle. Without validation and testing, no software package is complete. (No wonder job openings for validation and testing are on the rise !! Customers want to see validation and testing reports before they award any contract to potential CFD consulting services or the big players.)
Make sure whatever you code -----> is properly validated.