Foundry processes are complex, so it’s extremely difficult to improve scrap rates permanently and to pinpoint exactly what is causing persistent quality problems. A data-based process view helps foundries find the best machine and process parameters for each pattern and move away from slow, expensive experimentation. But even digitally expert foundries struggle to identify which element of their process causes scrap.
Why track individual castings? IIoT tools like DISA’s Monitizer | DISCOVER offer a real-time view of production and trustworthy historical data to analyze. But, powerful as these tools are, conventional batch-level data lacks the resolution needed to precisely pinpoint the root causes of quality problems.
That’s because, even when a parameter like pouring temperature is sampled for every mold or casting, it’s impossible to know which data points relate to which individual defective castings. So when foundry teams meet to solve current scrap problems, their root-cause analysis is handicapped from the start.
For example, the data might show pouring temperature was slightly too low for 200 castings within a batch. But no one can be sure whether that error led directly to the 100 scrap castings. Typically, more than one part of a process will be out of tolerance at any given time, and there can be multiple, interwoven reasons for defects like porosity or sand inclusions.
The best the team can do is estimate roughly when within the run the scrap appeared – was it the start, the middle or the end? They can then look at suspect parameters within that approximate time range.
Hunches and guesstimates. After much debate, they might decide that pouring temperature is the most likely cause and strive to keep it constant. Yet despite this effort, the same casting defect reappears a few days later. This is a costly waste of time, not to mention extremely frustrating.
What do they do next? Usually, attention shifts to improving other out-of-specification parts of the process. But, even if it were possible, keeping all process variables precisely within tolerance during a run would be prohibitively expensive. Instead of wasting time and money improving a sub-process with little or no impact on quality, foundry operators need to know exactly which combination of process data created those scrap castings.
Foundries pouring ductile iron have a related challenge: identifying defective/suspect castings following a bad magnesium treatment. They don’t know exactly which ones to discard because castings are mixed up in the shakeout, so typically one or more batches before and after the suspect batch have to be quarantined for inspection. That should ensure no bad castings make it through to the customer, but much time is wasted.
DISA’s Trace and Guidance (TAG) system, now in final customer testing, aims to tackle both challenges.
TAG adds an ID to each casting that can be tracked through the entire process. Via this ID, it’s possible to look up exactly when the casting was produced and so check all the many different parameters that produced it – from sand moisture content and melt chemical composition to pouring temperature – together with final quality inspection data.
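In database terms, the ID works as a key into the casting’s full process history. The sketch below is purely illustrative (the record fields, ID format and in-memory dictionary are assumptions, standing in for the foundry database):

```python
# Minimal sketch of a TAG-style ID-to-process-record lookup.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CastingRecord:
    casting_id: str               # TAG ID read from the casting
    produced_at: str              # production timestamp
    sand_moisture_pct: float      # sand moisture content at molding
    pour_temp_c: float            # pouring temperature
    melt_chemistry: dict          # e.g. {"C": 3.6, "Si": 2.4}
    defect: Optional[str] = None  # filled in later at quality inspection

# Stand-in for the foundry database, keyed by casting ID.
records = {
    "017-03-21": CastingRecord("017-03-21", "2024-05-02T08:41:12",
                               3.2, 1395.0, {"C": 3.6, "Si": 2.4}),
}

def look_up(casting_id: str) -> Optional[CastingRecord]:
    """Return the full process history for one casting, if known."""
    return records.get(casting_id)
```

Scanning a casting’s ID then retrieves every parameter that produced it in a single lookup.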
Tracing every single casting. TAG marking units add the IDs to the castings, with one unit per casting cavity fitted to the pattern plate. A small pocket is needed in each casting cavity to accommodate the marking unit and its connector.
The unit features three dials, each of which turns after every molding machine cycle to create a new, unique ID for each fresh casting. The available dial combinations allow a maximum batch size of 19,684 castings. Each casting is also marked with the traditional day code which, along with the TAG ID, uniquely identifies it.
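The three dials behave like a mechanical odometer. The sketch below assumes 27 positions per dial, chosen because 27³ = 19,683 combinations is close to the quoted maximum batch size; the real TAG mechanism’s dial sizes are not documented here:

```python
# Hedged sketch of an odometer-style, three-dial ID counter.
# POSITIONS is an assumption; the actual TAG dial count may differ.
POSITIONS = 27  # assumed positions per dial

def next_id(dials):
    """Advance the three dials by one molding-machine cycle, odometer-style."""
    d0, d1, d2 = dials
    d2 += 1
    if d2 == POSITIONS:          # rightmost dial wraps, carries left
        d2, d1 = 0, d1 + 1
    if d1 == POSITIONS:
        d1, d0 = 0, d0 + 1
    if d0 == POSITIONS:
        d0 = 0                   # counter repeats after all combinations
    return (d0, d1, d2)

# Distinct IDs before the counter repeats:
total_ids = POSITIONS ** 3  # 19,683 under the assumed dial size
```

Combined with the day code, each ID stays unique even after the counter eventually wraps.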
During pattern changes, the molding machine automatically connects to and identifies the marking units in the pattern plate. Foundries can install marking units permanently in some or all of their patterns, swap units between patterns as required, or use a combination of these approaches.
During casting sorting and quality inspection, operators use hand scanners to gather the codes of any scrapped castings and input the defect type for each one. This links each scrap casting’s ID and defect type with its process parameters in the database. Scanning or entering any casting ID thereafter calls up its exact production time, individual process parameters and, if scrapped, its defect type.
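The inspection step can be pictured as a simple update against the casting’s stored record. The dictionary and field names below are illustrative stand-ins for the foundry database:

```python
# Sketch of the inspection step: scanning a scrapped casting's ID and
# recording the operator-entered defect type against its process data.
# The in-memory dict stands in for the foundry database (names illustrative).
db = {
    "017-03-21": {"pour_temp_c": 1395.0, "defect": None},
    "017-03-22": {"pour_temp_c": 1388.0, "defect": None},
}

def record_scrap(casting_id: str, defect_type: str) -> dict:
    """Attach the defect type to the casting's record at inspection."""
    record = db[casting_id]
    record["defect"] = defect_type
    return record  # now links process data and defect type in one place

record_scrap("017-03-22", "sand inclusion")
```

Any later scan of that ID retrieves both the production parameters and the defect classification together.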
A pin-sharp view of production. Now quality-control teams can base their root-cause analysis on accurate evidence. Perhaps they suspect low sand compressibility? The data shows that the sand inclusions present in the first part of a batch do indeed coincide with a combination of low compressibility and overly high sand-shot pressure. But, later in the batch, low compressibility and low compression strength look like the culprits. Both need to be investigated.
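An analysis like this amounts to counting which parameter combinations coincide with each defect. The sketch below is illustrative only, not DISA’s actual tooling; the thresholds, field names and sample values are all assumptions:

```python
# Illustrative root-cause tally: bin two sand parameters for each scrapped
# casting and count which combinations coincide with each defect type.
# Thresholds, field names and sample data are assumptions.
from collections import Counter

scrap = [
    {"defect": "sand inclusion", "compressibility": 33.0, "shot_pressure": 3.8},
    {"defect": "sand inclusion", "compressibility": 34.0, "shot_pressure": 3.7},
    {"defect": "sand inclusion", "compressibility": 32.0, "shot_pressure": 2.9},
]

def bin_low_high(value: float, threshold: float) -> str:
    """Classify a parameter reading relative to an assumed threshold."""
    return "low" if value < threshold else "high"

combos = Counter(
    (c["defect"],
     bin_low_high(c["compressibility"], 38.0),  # assumed threshold
     bin_low_high(c["shot_pressure"], 3.5))     # assumed threshold
    for c in scrap
)
# combos shows how often each (defect, compressibility, pressure)
# combination occurs, pointing to the pairings worth investigating first.
```

With casting-level records, such a tally runs over individual defective castings rather than whole batches, so co-occurring parameter combinations stand out directly.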
Casting-level data will also reveal when in-tolerance process parameters interact with each other to cause scrap, something that is notoriously hard to diagnose at batch level. Overall, it helps foundries avoid wasting money on parts of their process that have no influence on a particular quality problem.
This granular data reduces the time needed to identify the most influential process parameters, solve tough scrap problems, and find the optimal settings for any pattern. Basing those optimal settings on casting-level analysis should also deliver a lower scrap level than traditional, “seat of the pants” analysis.
Just as it would for a human expert, linking process data with final quality at casting level steepens the learning curve for quality optimization using Artificial Intelligence (AI) tools, like DISA’s Monitizer | PRESCRIBE. Because far more of these 1:1 correlations are available than in batch data, the AI self-learns much more quickly and its final optimization results should be more accurate too – maximizing scrap reduction.
A transparent view for customers. Returning to the bad magnesium treatment example above, casting-level tracking also helps foundries identify scrap within a larger batch of good castings. So if operators know that a group of castings produced within a certain time range should be scrapped, they can immediately classify them as “bad” in the foundry database.
Following shakeout, QC inspectors use their hand scanners to find those bad castings quickly; the scanner shows a green icon for good castings, a red one for scrap. This saves time and only definitely bad castings are remelted, with obvious benefits for costs and sustainability.
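The quarantine workflow described above can be sketched as flagging every casting produced inside a suspect time window, then letting the scanner report each ID as good or scrap. Timestamps, the window and record fields are illustrative assumptions:

```python
# Sketch of the quarantine workflow: mark every casting produced inside a
# suspect time window as "bad", then report what the hand scanner shows.
# All IDs, timestamps and the window itself are illustrative.
from datetime import datetime

castings = {
    "A1": {"produced_at": datetime(2024, 5, 2, 8, 40), "status": "good"},
    "A2": {"produced_at": datetime(2024, 5, 2, 9, 10), "status": "good"},
    "A3": {"produced_at": datetime(2024, 5, 2, 9, 55), "status": "good"},
}

def flag_bad(start: datetime, end: datetime) -> None:
    """Mark all castings produced inside [start, end] as scrap."""
    for record in castings.values():
        if start <= record["produced_at"] <= end:
            record["status"] = "bad"

def scanner_icon(casting_id: str) -> str:
    """What the hand scanner would show: green for good, red for scrap."""
    return "red" if castings[casting_id]["status"] == "bad" else "green"

# Quarantine the suspect magnesium-treatment window.
flag_bad(datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 9, 30))
```

Only castings inside the flagged window are remelted; the rest of the batch passes straight through, which is the time and sustainability gain described above.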
If a foundry delivers scrap castings to a customer, the customer will naturally demand that the foundry take remedial action. Whether the inquiry arrives days, months, or even years later, operators can simply scan casting IDs to locate the process data and start investigating.
With batch-level data, foundry experts on the customer side often demand changes that, as above, simply increase costs without solving the quality issue. With casting-level data, both parties have a very accurate picture of what is actually causing the problem and can jointly agree a solution. Simply knowing the foundry can track castings accurately will give customers confidence that any quality issue can be identified quickly and then dealt with rapidly.
Read every casting’s life story. Producing scrap incurs significant costs, partly through direct losses when castings are scrapped, remelted and produced again, and partly through the additional cost of tracking and sorting scrap, then finding the problem’s root cause. Digital tools are now the most effective way to understand why scrap happens and to take the right actions to reduce it.
Casting-level tracking extracts full value from that digital investment. With every casting’s process history available, foundries can speed up root-cause analysis, reduce scrap and wastage, and make operations more profitable and sustainable.