
Welcome! My name is Hafiz Asif, and I am an assistant professor in the Department of Information Systems and Business Analytics at the Frank G. Zarb School of Business, Hofstra University. I received my PhD from Rutgers University, USA, and completed a postdoc at the Institute of Data Science, Learning, and Applications.

My research develops methodologies and systems to realize Safe AI & Analytics, which ensures security and privacy to guard our freedoms, fosters algorithmic transparency for accountability, and champions fairness and equity to counteract bias. Through safety, I believe, we can address the pressing need for socially responsible data-driven solutions, which are becoming pervasive in high-stakes decision-making, such as loan eligibility, job candidate recommendation, parole determination, and healthcare resource allocation.

Beyond my research, I love hiking, bouldering, traveling (I just returned from a long trip to Hawaii, a true paradise), and learning new magic tricks (especially mathematical ones, such as the Kruskal Count).

What it takes to develop Safe AI & Analytics:

Developing safe approaches requires interfacing with many disciplines: in my work, I draw on techniques from across computer science and analytics, such as cryptography, data privacy, statistics, machine learning, and optimization. To address safety problems comprehensively, my research answers three essential questions:

1) How to characterize safety?

2) How to devise safe algorithms?

3) How to build easy-to-use safe systems?

Characterizing safety. In the data analytics landscape, guaranteeing safety properties such as privacy and fairness requires translating these ambiguous social constructs into actionable theoretical frameworks. My research not only quantifies public attitudes toward privacy (e.g., in public health emergencies) but also resolves complex definitional issues, providing a nuanced understanding of these socially constructed ideals.

Devising safe algorithms. Achieving safety often comes at a cost: enhancing privacy can lower accuracy, and securely analyzing data silos can be computationally intensive. Therefore, I develop algorithms whose safety-utility tradeoff is either optimal or can be explicitly calibrated, as the sketch below illustrates. In this context, I'm focusing on several domains, including efficient data mining, outlier analysis, spatiotemporal queries, and synthetic data generation.
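To make this tradeoff concrete, here is a minimal illustrative sketch (not taken from my papers; the function name and numbers are hypothetical) of the classic Laplace mechanism from differential privacy, where the privacy parameter epsilon directly calibrates how much accuracy is given up for privacy:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a differentially private count via the Laplace mechanism.

    The noise scale sensitivity/epsilon makes the safety-utility tradeoff
    explicit: a smaller epsilon means stronger privacy but a noisier answer.
    """
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Calibrating the tradeoff: stronger privacy (smaller epsilon) -> more error.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: noisy count = {private_count(1000, eps):.1f}")
```

Real deployments involve many more considerations (composition across queries, sensitivity analysis, and so on); this sketch only shows the core calibration knob.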

Building safe systems. I'm committed to democratizing safety in data-driven fields, equipping analysts and researchers with user-friendly tools that inherently embed privacy, fairness, and utility. My research develops intuitive systems such as a privacy-protecting platform tailored for small businesses and frameworks for privacy-preserving outlier analysis. Among my notable contributions are Covid Nearby (a differentially private system for pandemic tracking) and the winning federated learning system for the US-UK PETs prize challenge.
