In an effort to provide clarification and a better understanding of the ISO Presentation Attack Detection testing that iBeta performs in accordance with the ISO/IEC 30107-3 standard and in alignment with the ISO/IEC 30107-1 framework, we provide the following explanation.

Confirmation Letters

iBeta posts, with written vendor consent, the Presentation Attack Detection (PAD) Confirmation letters that provide the results of iBeta PAD testing.  These letters specify the type of PAD testing and the configuration of the vendor product that was tested.  In order to fully assess the vendor product, the audience of these letters should understand the test method used to assess conformance with the ISO/IEC 30107-3 standard.

Understanding the Test Method

iBeta is accredited by NIST NVLAP as an Independent Test Lab, and ISO/IEC 30107-3 is included in our scope of accreditation.  As such, the specific test procedures, processes, and report templates developed by iBeta are audited and approved as part of NIST's administration of NVLAP.

Prior to test start, the PAD mechanism is determined:

  • PAD subsystem test – evaluates presentation attack detection only

  • Data capture test – evaluates the coupled presentation attack detection and quality checks

  • Full system test – evaluates the biometric comparison capabilities of the full biometric subsystem

The type of PAD mechanism determines the ISO/IEC 30107-3 mandated reporting metrics for the evaluation.  For subsystem PAD evaluations, the classification error rates (APCER, BPCER, and associated non-response rates) are determined, whereas when evaluating full systems, the impostor attack presentation match rate (IAPMR, and associated FNMR/FMR) is determined.
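As a minimal sketch of how these classification error rates are computed from presentation outcomes, consider the following. The counts and outcomes below are made-up illustrations, not results from any actual iBeta evaluation:

```python
# Illustrative computation of the ISO/IEC 30107-3 classification error
# rates for a PAD subsystem test.  All counts are hypothetical examples.

def apcer(attack_results):
    """APCER: proportion of attack presentations wrongly accepted as bona fide."""
    return sum(1 for r in attack_results if r == "bona_fide") / len(attack_results)

def bpcer(bona_fide_results):
    """BPCER: proportion of bona fide presentations wrongly rejected as attacks."""
    return sum(1 for r in bona_fide_results if r == "attack") / len(bona_fide_results)

# Hypothetical outcomes: 150 attack presentations, 50 bona fide presentations.
attacks = ["attack"] * 147 + ["bona_fide"] * 3     # 3 attacks were accepted
bona_fides = ["bona_fide"] * 48 + ["attack"] * 2   # 2 genuine users were rejected

print(f"APCER: {apcer(attacks):.1%}")     # 3/150 = 2.0%
print(f"BPCER: {bpcer(bona_fides):.1%}")  # 2/50  = 4.0%
```

Each rate is reported per PAI species in an actual evaluation; the sketch above shows only the basic proportion underlying both metrics.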

In order to specify the test method more precisely, iBeta identified three levels of testing (although similar to the FIDO Alliance Levels A & B, there are significant differences).  The testing levels and performance requirements are identified as:

Level 1
  • Time: 8 hours per subject or species
  • Expertise: None
  • Artefact source: Cooperative subject; equipment is readily available in a normal home or office environment
  • Limit: 0% penetration or match rate allowed

Level 2
  • Time: 2-4 days per subject or species
  • Expertise: Moderate – participated in at least 1 other PAD test with the target modality and has an understanding of the liveness detection functionality of the test target
  • Artefact source: Cooperative subject; equipment is more expensive (such as a 3D printer, resin mask, latex mask)
  • Limit: 1% penetration or match rate allowed

Level 3
  • Time: Unlimited
  • Expertise: Significant – has insider knowledge of the functionality of the test target and has participated in at least 2 other PAD tests with the target modality
  • Artefact source: Cooperative subject and latent sources for subject data; equipment is extensive, e.g., special order contact lenses, facial silicone masks, and 3D printed spoofs
  • Limit: Not established

In order to maintain uniformity between vendor solutions and modalities, and to apply the test methods to each vendor consistently, the 6 species of attacks (called "PAI species" in ISO/IEC 30107-3) are selected as uniformly as possible.  If the application uses an active liveness detector to elicit a "voluntary response" (the ISO/IEC 30107-1 term), then the species of attacks are tailored, just as an impostor would tailor them, to provide the movement, smile, blink, etc.  Our procedure sets a material cost limit such that Level 1 artefacts cannot exceed $30 and Level 2 artefacts are limited to $300.  This keeps artefact creation and procurement costs generally consistent across all vendor solutions.

In addition, each species set of artefacts is created and applied to the PAD mechanism within a time limit.  For Level 1, if only liveness detection is being assessed, iBeta will create and apply each species of artefact within 8 hours, targeting 150 attack presentations alternated with 50 genuine presentations.  If a full system is under evaluation, the artefacts associated with the species for a single genuine subject are created and applied within 8 hours.  With 6 subjects and/or 6 species, the testing therefore requires 48 hours.
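The schedule arithmetic above can be written out explicitly. The figures are those stated in the text; the breakdown itself is only a back-of-the-envelope illustration:

```python
# Back-of-the-envelope schedule for a Level 1 evaluation, using the
# per-subject/per-species time limit described in the text.

HOURS_PER_UNIT = 8          # one subject (full system) or one species (subsystem)
NUM_UNITS = 6               # 6 subjects and/or 6 species
ATTACKS_PER_UNIT = 150      # attack presentations, alternated with...
GENUINE_PER_UNIT = 50       # ...genuine (bona fide) presentations

total_hours = HOURS_PER_UNIT * NUM_UNITS
total_presentations = (ATTACKS_PER_UNIT + GENUINE_PER_UNIT) * NUM_UNITS

print(f"Total test time: {total_hours} hours")            # 48 hours
print(f"Total presentations: {total_presentations}")      # 1200
```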

ISO/IEC 30107-3 discusses cooperative versus uncooperative subjects.  iBeta uses cooperative subjects: the artefacts are created from biometric characteristics provided by volunteer data subjects (if not purchased) who are willing and able to pose for photos, record videos, provide their fingerprints in molding material, or sit for a live cast.  We use only cooperative subjects because artefacts created from willing volunteers are of better quality, making for a more conservative test.  iBeta is evaluating the vendor solution, not the ability of our testers to obtain latent prints, as an example.

Prior to applying any presentation attack, the configuration of the vendor solution is recorded, and the version is referenced in the confirmation letter.  In addition, because the device used to capture the biometric sample impacts the results, the exact device configuration is recorded and also referenced in the confirmation letter.

During the test effort, bona fide biometric presentations to the PAD device not only provide the reporting metrics required by ISO/IEC 30107-3 but also indicate usability for the data subjects.  For this reason, iBeta now limits the BPCER and FNMR to 15% to obtain a "PASS" rating.  If a bona fide data subject cannot be recognized as live and/or matched to the enrolment reference, iBeta will suspend testing, as a high BPCER or FNMR biases the test result.  These error rates were originally unconstrained, were later limited to 20%, and have recently been reduced to 15%.  This limit is reviewed every 6 months and is subject to further reduction as PAD technology improves.
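One way to summarize the pass criteria described above is as a simple check. The 15% BPCER/FNMR ceiling and the per-level penetration limits come from this document; the function itself is an illustrative sketch, not iBeta's actual tooling:

```python
# Sketch of the PASS criteria described in the text: the attack
# penetration/match rate must not exceed the level's limit, and the
# bona fide error rate (BPCER or FNMR) must not exceed 15%.

LEVEL_LIMITS = {1: 0.00, 2: 0.01}   # max allowed penetration/match rate per level
BONA_FIDE_LIMIT = 0.15              # current BPCER/FNMR ceiling

def passes(level, penetration_rate, bona_fide_error_rate):
    """True if the solution meets both the attack limit and the usability limit."""
    if bona_fide_error_rate > BONA_FIDE_LIMIT:
        return False                # genuine users must be recognized reliably
    return penetration_rate <= LEVEL_LIMITS[level]

print(passes(1, 0.00, 0.05))   # True: no attacks succeeded, BPCER acceptable
print(passes(2, 0.02, 0.05))   # False: 2% penetration exceeds the 1% Level 2 limit
print(passes(1, 0.00, 0.20))   # False: 20% BPCER exceeds the usability ceiling
```

Note that Level 3 is omitted because the document states its limit is not established.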

iBeta certifies the results of our testing; however, this does not translate to a certification of the vendor product.  The iBeta testing and reporting provides results indicating that the tested solution conforms with the ISO/IEC 30107-3 testing and reporting requirements.

Certification vs. Conformance

When iBeta was initially accredited, the term "certification" was used; this was corrected during the NIST/NVLAP audit in March 2019.  There is no difference in our test methods, procedures, or processes between the earlier testing and the testing after that date, with the exception that the allowed error limit for genuine (bona fide) presentations is now stricter.