Casting process simulation is widely accepted as an important tool in product design and process development to improve yield and casting quality.
Now, the ESI Group, a provider of digital simulation software for metalcasters, has introduced ProCAST 2005, a comprehensive finite element software package for foundry simulation, in a new Distributed Memory Parallel (DMP) version that enables users to run casting simulations on several processors in parallel.
“With parallel processing, even the most complex processes can be simulated overnight, allowing foundries to do more mold design iterations in less time,” says Dominique Lefebvre, casting solutions product manager. “This new DMP version was specifically tuned for small- to medium-sized parallel hardware configurations to deliver high performance at a very competitive cost. The hardware configuration, number of CPUs, and interconnect switches can be adapted to meet end-user requirements in terms of turnaround times.”
ProCAST 2005 is suitable for a wide range of processes and alloys, from high- and low-pressure diecasting to sand and investment casting. It is designed to be used on Linux clusters and UNIX multiprocessor platforms.
For optimal scalability, the parallelization of ProCAST 2005 relies on dynamic domain decomposition technology. Activated automatically in ProCAST 2005, this feature distributes the computation across multiple processors: the finite element mesh of the CAD model is partitioned into as many sub-domains as there are processors.
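ProCAST’s own partitioner is not public, but the basic idea can be sketched in a few lines of C: order the elements geometrically along one axis and cut the sorted list into equal contiguous chunks, one per processor. The mesh size, centroid values, and processor count below are fabricated for illustration.

    /* Minimal sketch of geometric domain decomposition (illustrative,
     * not ProCAST's algorithm): sort elements by x-centroid, then give
     * each processor an equal contiguous chunk of the sorted list. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int id; double cx; } Elem;    /* element id + x-centroid */

    static int by_cx(const void *a, const void *b) {
        double d = ((const Elem *)a)->cx - ((const Elem *)b)->cx;
        return (d > 0) - (d < 0);
    }

    int main(void) {
        enum { NELEM = 12, NPROC = 4 };            /* fabricated sizes */
        Elem mesh[NELEM];
        for (int i = 0; i < NELEM; i++) {          /* fabricated centroids */
            mesh[i].id = i;
            mesh[i].cx = (double)rand() / RAND_MAX;
        }
        qsort(mesh, NELEM, sizeof(Elem), by_cx);   /* order along one axis */
        for (int i = 0; i < NELEM; i++)            /* equal contiguous chunks */
            printf("element %d -> sub-domain %d\n",
                   mesh[i].id, i * NPROC / NELEM);
        return 0;
    }

Production partitioners typically recurse this kind of split or use graph-based methods to further minimize the interface between sub-domains.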
Different technologies are currently available for parallel processing. One is symmetric multiprocessing (SMP), often referred to as Shared Memory Parallel processing. With SMP, specific programming directives are added to the software so that computations can be distributed across several processors, all of which share the same memory addresses. However, SMP technology is available only on specific hardware platforms and is limited in the maximum number of processors it can use; in practice, it does not provide significant speed improvements beyond eight processors.
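As a generic illustration of the directive style (not ProCAST code), the OpenMP fragment below marks one loop over mesh nodes for parallel execution; every thread reads and writes the same shared arrays.

    /* Illustrative SMP parallelization with an OpenMP directive: the
     * pragma spreads the node loop across threads, all sharing the
     * same temperature arrays.  Generic example, not ProCAST code. */
    #include <stdio.h>
    #include <omp.h>

    #define NNODE 1000000

    int main(void) {
        static double t_old[NNODE], t_new[NNODE];
        for (int i = 0; i < NNODE; i++)
            t_old[i] = 700.0;                    /* uniform initial field */

        #pragma omp parallel for                 /* the added directive */
        for (int i = 1; i < NNODE - 1; i++)      /* explicit 1-D stencil */
            t_new[i] = t_old[i]
                     + 0.25 * (t_old[i-1] - 2.0 * t_old[i] + t_old[i+1]);

        printf("loop ran on up to %d threads\n", omp_get_max_threads());
        return 0;
    }

Built with an OpenMP-capable compiler (for example, gcc -fopenmp), the same binary uses as many threads as the shared-memory machine is configured to provide.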
Distributed Memory Parallel (DMP) processing is an alternative in which each processor accesses its own memory. A Message Passing Interface (MPI) is required to distribute and share data among the different processors. With DMP, it is possible to design parallel applications that scale to a greater number of processors, at the expense of substantial programming effort. Because of its versatility and potential for very high performance, DMP technology was implemented in the ProCAST solvers.
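A minimal MPI sketch in C shows the contrast: each process keeps its share of the data in its own memory, and an explicit message-passing call assembles the global result. Again, this is a generic example rather than the ProCAST solver.

    /* Illustrative DMP example with MPI: each process owns its data in
     * its own memory; MPI_Allreduce passes messages to combine the
     * per-process contributions into one global value. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, nproc;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        double local = 1.0 / (rank + 1);   /* partial result, own sub-domain */
        double global = 0.0;

        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result from %d processes: %f\n", nproc, global);
        MPI_Finalize();
        return 0;
    }

Launched with, say, mpirun -np 8, the same program runs in eight separate memory spaces, whether on one machine or spread across a cluster.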
In ESI’s implementation of DMP, the decomposition provides a balanced distribution of the workload and a minimal interface between the different partitions in order to reduce the data exchanged between processors. As the casting is gradually filled during the simulation, the workload shifts continuously. To maintain optimal scalability, ProCAST 2005 automatically updates the domain decomposition at regular intervals.
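ESI has not published the details, but the control flow presumably resembles the sketch below, where the update interval and the helper routines are hypothetical stand-ins.

    /* Sketch of dynamic load balancing during mold filling: every few
     * time steps, element weights are recomputed (filled cells cost
     * more than empty ones) and the mesh is repartitioned.  Interval
     * and helper routines are hypothetical, for illustration only. */
    #include <stdio.h>

    #define NSTEP 100
    #define REBALANCE_EVERY 20                 /* hypothetical interval */

    static void advance_filling(int s) { (void)s; /* one flow time step */ }
    static void weigh_elements(void)   { /* workload follows the melt front */ }
    static void repartition_mesh(void) { printf("repartitioning mesh\n"); }

    int main(void) {
        for (int step = 1; step <= NSTEP; step++) {
            advance_filling(step);
            if (step % REBALANCE_EVERY == 0) {
                weigh_elements();              /* re-estimate per-element cost */
                repartition_mesh();            /* restore a balanced split */
            }
        }
        return 0;
    }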
The performance of the parallel solver depends on the size of the model, the type of application, and system components such as the interconnect that carries interprocessor communication. Results obtained on a Linux cluster with a Gigabit Ethernet interconnect show a speedup of up to six times for both filling and solidification on eight processors. Higher scalability, up to 32 processors, can be obtained with a more powerful interconnect.
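As a rough plausibility check, assuming Amdahl’s simple scaling model (which ignores communication cost), a six-fold speedup on eight processors corresponds to a parallel fraction of about 95 percent; the short program below works that out and extrapolates it.

    /* Back-of-the-envelope check with Amdahl's law: infer the parallel
     * fraction p from the reported 6x speedup on 8 processors, then
     * predict speedups at other processor counts.  A simplification:
     * communication cost (the interconnect) is ignored. */
    #include <stdio.h>

    int main(void) {
        /* solve 6 = 1 / ((1-p) + p/8) for p  ->  p = 20/21, about 0.952 */
        double p = (1.0 - 1.0 / 6.0) / (1.0 - 1.0 / 8.0);
        for (int n = 1; n <= 32; n *= 2)
            printf("%2d processors -> predicted speedup %4.1f\n",
                   n, 1.0 / ((1.0 - p) + p / n));
        return 0;
    }

On this model the predicted speedup at 32 processors is about 13; in practice, part of the apparent serial fraction is communication overhead, which is why a faster interconnect improves scalability at higher processor counts.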
ESI offers the ProCAST 2005 parallel version for use on Linux clusters (Red Hat 7.3 and Enterprise Server), as well as on UNIX multiprocessor platforms from IBM and SGI.
The ProCAST 2005 parallel version currently includes the main software functionalities to simulate mold filling and solidification with radiation. However, other specific software features — such as lost foam, non-Newtonian flows and advanced solidification modules including micro-porosity and CAFE modules — are not available for parallel processing at this time.
The stress solver is being finalized now and will be released later this calendar year to allow for parallel, fully coupled thermo-mechanical simulations.