Privacy-Preserving Model Auditing
Enabling AI model evaluation while protecting individual privacy.
A model owner may want to use sensitive individual data to evaluate their model's groupwise performance. What can they do if they don't have direct access to that data?
Our privacy-preserving model auditing demo shows one potential solution. Using Private Join and Compute, an open-source cryptographic protocol, we demonstrate how two parties can compute groupwise metrics without either side revealing any private individual information. More details on the demo can be found in the blog posts below.
- Part 1: Privacy-preserving model auditing overview
- Part 2: Technical deep dive
You can find the full repository, with instructions, on GitHub.
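To make the goal concrete, here is a minimal plaintext sketch of the computation the demo targets: one party (the auditor) holds group labels keyed by user ID, the other (the model owner) holds per-ID model correctness, and the desired output is per-group accuracy over the IDs they share. This is an illustration only; the names and data below are hypothetical, and in the actual demo the join and aggregation happen under encryption via Private Join and Compute, so neither party sees the other's raw records.

```python
from collections import defaultdict

# Hypothetical example data (illustrative only).
auditor_groups = {"id1": "group_a", "id2": "group_b", "id3": "group_a"}
owner_correct = {"id1": 1, "id3": 0, "id4": 1}  # 1 = correct prediction

def groupwise_accuracy(groups, correctness):
    """Per-group accuracy over the IDs both parties hold (the join)."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for uid in groups.keys() & correctness.keys():  # set intersection of IDs
        g = groups[uid]
        totals[g] += 1
        hits[g] += correctness[uid]
    return {g: hits[g] / totals[g] for g in totals}

print(groupwise_accuracy(auditor_groups, owner_correct))
# {'group_a': 0.5}
```

The cryptographic protocol produces only aggregates like these, never the row-level join itself; see the blog posts and repository above for how that is achieved.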