United States Customs and Border Protection (CBP) has authorized a $225,000 contract with Clearview AI, granting access to the controversial facial recognition technology. This move expands surveillance capabilities within CBP’s intelligence divisions, including the headquarters unit and the National Targeting Center.
The system will leverage a database of over 60 billion publicly scraped images for “tactical targeting” and “strategic counter-network analysis.” This implies the tool will be integrated into daily intelligence workflows rather than reserved for isolated cases. CBP already uses various data sources, including commercial tools, to monitor individuals and their connections for security and immigration enforcement.
The agreement mandates nondisclosure for contractors handling sensitive biometric data. Crucially, the contract does not specify whether U.S. citizens will be subject to searches or how long uploaded images and search results will be retained. That absence of clarity raises concerns about potential misuse and privacy violations.
This contract comes amid growing scrutiny of federal face recognition practices. Civil liberties groups and lawmakers question whether these tools are becoming routine surveillance infrastructure without sufficient safeguards or transparency. Senator Ed Markey recently proposed legislation to ban ICE and CBP from using face recognition entirely, citing concerns about unchecked biometric surveillance.
CBP has not clarified how Clearview will be deployed or what types of images agents will be permitted to upload. Clearview's reliance on scraping photos without consent remains a central ethical issue. The company also appears in DHS's AI inventory, linked to CBP's Traveler Verification System.
Despite CBP's public claims that its verification system does not use commercial data, Clearview access is more likely to be integrated with the Automated Targeting System, which already links biometric galleries, watch lists, and enforcement records, including those from recent ICE operations.
Recent testing by the National Institute of Standards and Technology shows that face recognition accuracy declines in uncontrolled settings such as border crossings, with error rates exceeding 20 percent. The technology cannot eliminate false matches without increasing the risk of failing to identify the correct person, so agencies often rely on ranked candidate lists for human review, which can still surface incorrect matches.
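The tradeoff described above can be sketched with a toy example: raising the decision threshold suppresses false matches but misses more true ones, which is why systems often return a ranked candidate list rather than a single verdict. All scores and names here are invented for illustration and are not drawn from any real system.

```python
# Illustrative sketch of the false-match / missed-match tradeoff.
# Hypothetical similarity scores for one probe image against a gallery.
gallery_scores = {
    "person_A": 0.91,  # the true match
    "person_B": 0.88,  # a lookalike (potential false match)
    "person_C": 0.52,
    "person_D": 0.31,
}

def matches_above(scores, threshold):
    """Return gallery candidates whose score clears the threshold."""
    return [name for name, s in scores.items() if s >= threshold]

# A strict threshold excludes the lookalike, but would miss the true match
# if poor lighting or pose had lowered person_A's score below it.
print(matches_above(gallery_scores, 0.90))  # ['person_A']

# A lenient threshold catches the true match under worse conditions,
# but the lookalike now surfaces too: a false match for a human to weed out.
print(matches_above(gallery_scores, 0.80))  # ['person_A', 'person_B']

def ranked_candidates(scores, k=3):
    """Rank the gallery by score: the 'candidate list' given to reviewers."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(ranked_candidates(gallery_scores))
```

Returning a ranked list defers the final call to a human reviewer, but as the scores suggest, a convincing lookalike can still sit near the top of the list.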
The expansion of face recognition technology by CBP raises serious questions about privacy, accountability, and the potential for misuse. Without clear limits and transparency, this surveillance infrastructure could erode civil liberties without providing meaningful security benefits.