Precision Measurement

So are those 3 significant decimal places really what they appear to be from a scan?

Lately I’ve been thinking about what precision manufacturing and measurement means to several industrial communities, and it gets pretty murky at times. For the record, I spent 13 years in a previous career as a job shop machinist. That body of work defined “precision” for me, as I regularly held cylindricity, taper, and size tolerances of 0.0005 inch, or about 13 μm. Whichever unit du jour one may prefer, that translates to roughly one quarter of the diameter of a human hair. Those measurements were performed in real time during the machining process with certified analog instruments, then sent on to quality control for final inspection, over 35 years ago.

Fast forward to today: I’m going to talk a little about the digital inspection techniques and systems that prevail in today’s manufacturing environments, including additive manufacturing.

Digital Metrology
Let’s explore, at a bare minimum, the current state of digital metrology. This could be a white paper all on its own, so for the sake of conversation here I’ll keep it short. First, we will set aside the traditional analog and digital measurement tools (calipers, micrometers, dial indicators, CMMs) and skip straight to 3D image acquisition for digital measurement in software. Stop – what? If you are not familiar with this mode of inspection, then perhaps you should perform a search on metrology software.

The trend of using imaging systems to “scan” manufactured objects and compare the “actual” object to the electronic or CAD design (the nominal) is fast becoming the norm today. It has become commonplace to see structured light, IR, and laser scanners in the workplaces of many companies. These devices can be operated manually or mounted on robotic arms or conveyors, with the resulting imagery processed into a 3D shape or “model” for metrology purposes. The result is typically represented as either a point cloud or a polygonal mesh (formed from a point cloud).
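For readers who only ever see the finished mesh, here is a minimal sketch of that point-cloud-to-mesh step using the open-source Open3D library; the file names and parameter values are purely illustrative, not from any particular scanner workflow.

```python
import open3d as o3d

# Load a point cloud exported by a scanner (hypothetical file name).
pcd = o3d.io.read_point_cloud("scanned_part.ply")

# Surface reconstruction needs normals; estimate them from local neighborhoods.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30)
)

# Build a polygonal mesh from the point cloud (Poisson surface reconstruction).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

o3d.io.write_triangle_mesh("scanned_part_mesh.ply", mesh)
```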

Level of Error
The purpose here is simple: reducing human measurement error and automating inspection should result in more reliable products and increased productivity at lower cost – thus… more profit. We could argue that rationale at length, but it is the common reasoning for those taking the leap into next-generation metrology verification tools and methods. One only has to look at the definition of a Gage R&R study to understand that even how each operator uses a caliper has to be incorporated into the overall error of manual, or “organic,” measurement.
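To make that concrete, here is a simplified sketch, with made-up numbers, of how trial-to-trial (repeatability) and operator-to-operator (reproducibility) variation both end up in the reported measurement error; this is an illustration only, not the full AIAG Gage R&R procedure.

```python
import numpy as np

# Simplified Gage R&R illustration: 3 operators each measure the same 5 parts
# 3 times with calipers. measurements[part, operator, trial], values in mm.
rng = np.random.default_rng(0)
true_size = np.array([25.02, 24.98, 25.00, 25.05, 24.97])   # made-up parts
operator_bias = np.array([0.00, 0.01, -0.02])                # reproducibility source
measurements = (true_size[:, None, None]
                + operator_bias[None, :, None]
                + rng.normal(0, 0.005, size=(5, 3, 3)))      # repeatability source

# Repeatability: spread of repeated trials by the same operator on the same part.
repeatability = measurements.std(axis=2, ddof=1).mean()

# Reproducibility: spread between operator averages for the same part.
reproducibility = measurements.mean(axis=2).std(axis=1, ddof=1).mean()

print(f"repeatability   ~ {repeatability * 1000:.1f} um")
print(f"reproducibility ~ {reproducibility * 1000:.1f} um")
```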

At a previous company where I performed this type of technical training, I had the benefit of working with the output of several of these imaging devices and, during onsite sessions, operating them myself. The various shape acquisition technologies mentioned above each present their own nuances, caveats, and error variables, from operator usage to equipment calibration to the environment they are used in, and all points in between – thus, no methodology is perfect or without its own intrinsic drawbacks. Therein also lies the circular route back to my opening statements about precision: shape acquisition devices are not equal. This is easily confirmed by any consumer shopping the staggering array of available equipment, who will typically find that better is more expensive, i.e. you get what you pay for.

Optical Imaging
The cost difference between scanners advertised with 0.4 mm accuracy and 0.010 mm (10 μm) accuracy could be 50K to 300K USD. That said, there are definitely applications for both ends of the spectrum and points in between. If you are scanning cut lines for cardboard boxes, 0.4 mm precision is probably just fine, but for the threaded bezel of an instrument readout, even 10 μm might not cut it. Then there is the problem all of these optical imaging devices share: line-of-sight acquisition. Frequently this is solved by flipping the part over during a scan operation and then stitching the different “views” of the object back together to get a complete 3D shape. So aside from the device, which can experience the variances described earlier, so too can software contribute error in this stitching process: the coordinate systems of the multiple viewpoints used to capture undercuts and backsides must be aggregated by software algorithms to put Humpty Dumpty back together again, or… Pooh Bear, as seen in the following image. From left to right: a single scan image; all scan images (multiple coordinate systems); software aggregation or registration; the polygonized result with no cosmetic manipulation; and finally… what many scan software packages present to you – yes, the object has changed in size, topology, and shape description from what was originally captured.

Optical scan of Pooh Bear through to polygonized object
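To make the registration (stitching) step concrete, here is a hedged sketch of how two partial scans in different coordinate systems might be aligned with iterative closest point (ICP) refinement in Open3D. The file names and distance threshold are illustrative, and a real workflow would seed ICP with markers or a coarse global alignment rather than the identity matrix used here.

```python
import numpy as np
import open3d as o3d

# Two partial scans of the same part, each in its own coordinate system
# (hypothetical file names from a flip-and-rescan workflow).
scan_a = o3d.io.read_point_cloud("view_top.ply")
scan_b = o3d.io.read_point_cloud("view_bottom.ply")

# A rough initial alignment would normally come from targets or a coarse
# global registration step; identity is only a placeholder here.
init = np.identity(4)

# Refine the alignment with point-to-point ICP; max_correspondence_distance
# bounds how far apart matched points may be (in the units of the scan data).
result = o3d.pipelines.registration.registration_icp(
    scan_b, scan_a, 0.5, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Apply the recovered transform and merge; residual misalignment here is one
# of the error sources discussed above.
scan_b.transform(result.transformation)
merged = scan_a + scan_b
```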

Typically, behind the scenes in the software for line-of-sight scanners – or introduced by the software operator – are point cloud noise reduction and/or polygonal smoothing. No one wants to see the seam lines where the different coordinate systems were stitched back together, and no one wants to see the spikes or noise that often occur when scanning angles change and converge. The example above demonstrates the typical transitions that will never be seen in any marketing materials. The end result will always carry a level of error introduced by all of these possible variances. Those variances can be managed if you have access to each of the processes the original scan data goes through, and you are a few notches better than novice level with that software.
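Here is a minimal illustration of the kind of background clean-up described above, again using Open3D with assumed file names and parameters; note that both steps discard or move data, which is exactly why the final mesh can differ from what was captured.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_scan.ply")   # hypothetical merged scan

# Statistical outlier removal: drop points whose distance to their neighbors
# is far from the local average (the "spikes" mentioned above).
pcd_clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Mesh the cleaned cloud, then apply Laplacian smoothing to hide seams and
# noise; smoothing moves vertices, i.e. it physically alters the geometry.
pcd_clean.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd_clean, depth=9)
mesh_smooth = mesh.filter_smooth_laplacian(number_of_iterations=5)
```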

Software Measurement
Okay, so now that I have the 3D shape of a manufactured object, I want to measure it. I’ll volunteer that nearly all of the software programs designed to perform geometric measurements are accurate; it is what is being measured that should be considered subjective. Some software may be easier to use than others, and some may use different criteria for the measurements given, but what is measured is sure to be reported accurately in software – though the path the data took to get under the measurement tool can be subject to several failure points or variances.
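As a simple illustration of what “measuring in software” usually amounts to, the sketch below fits an ideal circle to synthetic scan points around a hole and reports the diameter of the fit. The fitting method and the noisy data are illustrative assumptions, not taken from any particular metrology package; the point is that the software measures the fitted geometry, and the fidelity of the underlying scan data is a separate question.

```python
import numpy as np

def fit_circle(xy: np.ndarray):
    """Kasa-style least-squares circle fit; xy is an (N, 2) array of points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), radius

# Points sampled around a nominal 10 mm hole, with a little made-up scan noise.
theta = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([5.0 * np.cos(theta) + 12.0, 5.0 * np.sin(theta) + 7.0])
pts += np.random.default_rng(1).normal(0, 0.01, pts.shape)

center, r = fit_circle(pts)
print(f"measured diameter: {2 * r:.3f} mm")
```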

Industrial Computed Tomography
Enter industrial computed tomography, or industrial CT as it is commonly referred to. The devices used for imaging in this sector are a significant investment, as they are all predicated on the use of X-rays to capture projections from which a 3D volume is constructed. Similar to optical imaging, many resolutions are available, including a specific class of devices designated as “metrology grade.” Typically, when comparisons are available, precision of the same range will cost more in a CT device than in an optical device. The software that gathers this information and constructs voxels from the 2D X-ray projections is also at least as sophisticated as, if not several times more sophisticated than, many offerings in the optical imaging world.
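The core idea behind that reconstruction is textbook filtered back-projection: the projections captured over a full rotation form a sinogram for each detector row, which is filtered and back-projected into a 2D slice, and the slices stack into a voxel volume. The sketch below uses scikit-image with a hypothetical sinogram file; vendor reconstruction pipelines are considerably more sophisticated than this.

```python
import numpy as np
from skimage.transform import iradon

# 'sinogram' would be a (detector_pixels, n_angles) array acquired by the
# scanner for one slice; the file name here is a placeholder.
n_angles = 720
angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
sinogram = np.load("slice_projections.npy")

# Filtered back-projection of one cross-sectional slice; stacking many such
# slices along the rotation axis yields the voxel volume.
slice_2d = iradon(sinogram, theta=angles, filter_name="ramp")
```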

So why pay more for the same level of precision between these two imaging technologies? Simple: CT is not line-of-sight imaging, which immediately removes one source of error – no stitching of widely disparate coordinate systems is required, provided the scanned object fits into a single imaging window or detector field of view. The next level of error reduction is at the presentation or output of what was “scanned”: with CT you get exactly what was scanned, with no background cosmetic processing of the data. So typically, in a one-to-one comparison of the two imaging technologies at the same advertised precision, CT will always have more fidelity. The caveat to that statement, however, is that it presumes the operators in both realms are proficient in using their respective devices to gather the data. The other obvious benefit of a CT system is that it also captures voids and inclusions in the material of the scanned part – extremely useful, for instance, when refining the design of a plastic injection mold. This short description of industrial CT does not reflect usage in the medical, forensic, or academic research areas, as I am primarily focusing on the definition of precision as it applies to industrial product manufacturing inspection.
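As a rough sketch of how void and inclusion detection can work on a reconstructed voxel volume, here is an illustrative NumPy/SciPy approach: threshold material from air, fill the part solid, and whatever the fill added back is an enclosed void. The file name and threshold are assumptions for illustration, not taken from any vendor tool.

```python
import numpy as np
from scipy import ndimage

volume = np.load("part_volume.npy")   # hypothetical reconstructed voxel volume
material = volume > 0.5               # material/air threshold, chosen for illustration

# Fill the part solid, then subtract the original: what remains are enclosed voids.
filled = ndimage.binary_fill_holes(material)
voids = filled & ~material

# Label and size each internal void (in voxels).
labels, n_voids = ndimage.label(voids)
sizes = ndimage.sum(voids, labels, index=range(1, n_voids + 1))
print(f"{n_voids} internal voids, largest = {sizes.max() if n_voids else 0:.0f} voxels")
```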

The good news is that as industrial CT becomes more widely known, it becomes more widely accepted – driving CT OEMs to become increasingly cost competitive in an emerging market… targeting the lower-tier manufacturing consumers currently dominated by the optical imaging sector.

Additive Manufacturing
The message here is relatively simple: what is old is not new, just refined… with additive manufacturing (AM) having as many if not more failure points than traditional manufacturing. AM is definitely here to stay and, in some cases, is an ideal choice for low-volume/high-value components. The purpose of this section is to show that “precision” in the AM world is often at odds with the traditional understanding of precision from a machinist’s point of view. AM works by layering 2D depositions on top of each other (Carbon’s CLIP™ excluded), whether the process is SLA, FDM, metal deposition (under numerous acronyms), or another related variant.

For instance, a device with an advertised precision of 10 μm is somewhat misleading: that number typically represents the thickness of the next layer to be deposited. It is not representative of the overall accuracy or fidelity of the part produced. The method for obtaining a dimensionally correct part using AM often involves multiple attempts and iterations, and can change from device to device, even for the same brand and type.
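One reason layer thickness and part accuracy diverge can be shown with the textbook cusp-height approximation from the slicing literature. The short sketch below assumes a 10 μm layer and shows how the stair-step error grows as a surface tilts away from vertical, and that is before warpage, shrinkage, or process drift are considered.

```python
import math

# Cusp (stair-step) height ~= t * |cos(theta)|, where t is the layer thickness
# and theta is the angle between the surface normal and the build direction.
# This is a common approximation, not a property of any specific machine.
t_um = 10.0
for theta_deg in (90, 60, 30, 10):   # 90 = vertical wall, 10 = nearly flat slope
    cusp = t_um * abs(math.cos(math.radians(theta_deg)))
    print(f"surface normal at {theta_deg:2d} deg to build axis: cusp ~ {cusp:4.1f} um")
```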

The overlooked, or at least rarely discussed, issue – especially with metal deposition AM – is: what is the structure like inside? Only CT can reveal this, and in many instances porosity IS present, along with unmelted source material; and in the case of “optimized” structures, struts in the design can vary dimensionally – again, only detectable with CT. The following image shows a bracket transitioning through various stages of topology optimization, and all of this wonderful technology, even the structural analysis, takes place on the design side of the design-to-manufacturing process.

Bracket transition through topology optimization

What is ultimately output in AM could have warpage, shrinkage, porosity, and/or inclusions that may compromise the fidelity of the design. In the CNC-machined block version (the typical approach), the hole pattern may be held to repeatable geometric tolerances, including dimensional size and true position. The strength of the AM struts resulting from optimization of the body depends entirely on those struts being produced at the correct size in AM. Line-of-sight optical scanning, even on this simple optimized part topology, will produce several blind spots – CT is the answer.

Summary
In closing: the precision-in-manufacturing message can be subjective depending on who the software or device vendor/marketer is. Typically, the software involved in these processes is geared to the respective imaging industry it serves – there are very few exceptions that work equally well on both types of imaging (points/polygons and voxels). The consumer should be aware of any background point/polygon processing that may be “automatic” in order to “simplify” data usability, as those processes may physically change the topology of the optical scan data. It is acceptable to presume that all of the software applications that perform digital measurements are accurate in what they produce, the main differences being ease of use and the number of operations available in each.

The consumer looking to digitally augment either their R&D or their metrology/inspection process should be aware of the “precision” rhetoric and determine for themselves, based on their individual manufacturing processes, budget, and precision requirements, what the best fit is – both in imaging type and in software application – for their respective company.
