Nearly 725,000 cases involving youth are processed by juvenile courts across the United States each year. Risk assessment instruments (RAIs) are routinely used as critical supportive tools during the decision-making process, and RAIs have begun to adopt machine learning models to identify the risk of recidivism. Few RAIs, however, consider the possibility for change, which is especially important when assessing youth who are amenable to treatment. Furthermore, RAIs may suffer from algorithmic bias arising from bias inherent in the data used to train them, and they risk leaking sensitive personal information. Despite recent progress in building private machine learning models in other domains, the literature on maintaining user privacy for RAI models in criminal justice domains is critically lacking. This project's novelty is the design of a framework for risk assessment that can incorporate the impact of interventions while simultaneously achieving individual fairness and data privacy. The project's broader significance and importance include building trustworthy RAIs that can reduce disproportionate minority contact in juvenile justice systems. The project pursues these goals via three inter-related research thrusts. Thrust 1 leverages and adapts the potential outcomes framework from causal inference to build intervention-aware risk assessment models. Thrust 2 devises methodologies for simultaneously enforcing individual fairness and differential privacy constraints while training intervention-aware RAI models using causal inference techniques; several new theoretical challenges are tackled, including operationalizing individual fairness constraints within the context of criminal justice and ensuring their compatibility with differential privacy.
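The potential outcomes idea behind Thrust 1 can be illustrated with a minimal sketch. The sketch below is not the project's model; it uses entirely synthetic data and a simple two-model ("T-learner") estimator fit by least squares, where separate outcome models for the intervention and no-intervention arms are contrasted to estimate each individual's intervention effect. All variable names and the data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (illustrative only): X = youth features,
# T = intervention indicator, Y = continuous recidivism-risk outcome.
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)
# Assumed ground truth: the intervention lowers risk for high-need youth.
base = 0.4 + 0.2 * X[:, 0]
effect = -0.15 * (X[:, 0] > 0)
Y = base + effect * T + rng.normal(scale=0.05, size=n)

def fit_ols(features, y):
    """Least-squares fit with an intercept column."""
    design = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

def predict(coef, features):
    return np.column_stack([np.ones(len(features)), features]) @ coef

# T-learner: one outcome model per arm, then contrast the predicted
# potential outcomes Y(1) and Y(0) to estimate per-individual effects.
mu0 = fit_ols(X[T == 0], Y[T == 0])  # model of Y(0): no intervention
mu1 = fit_ols(X[T == 1], Y[T == 1])  # model of Y(1): with intervention
cate = predict(mu1, X) - predict(mu0, X)  # estimated effect per youth

print(round(float(cate.mean()), 3))  # average estimated intervention effect
```

An intervention-aware RAI in this spirit would report risk under each candidate intervention rather than a single static score.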
In addition to these theoretical efforts, Thrust 3 validates the proposed framework using real-world datasets and open-source risk assessment tools in order to analyze the tradeoffs among privacy, fairness, and accuracy, and to assess the overall impact of the proposed framework. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
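The privacy-accuracy tradeoff that Thrust 3 examines can be sketched with the classic Laplace mechanism, a standard way to achieve epsilon-differential privacy for a bounded statistic. This is a generic textbook illustration on synthetic labels, not the project's mechanism; the dataset, epsilon value, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_mean(values, epsilon, lo=0.0, hi=1.0, rng=rng):
    """Release the mean of bounded values with epsilon-differential privacy.

    Changing one record moves the mean of n values in [lo, hi] by at most
    (hi - lo) / n, so Laplace noise with scale sensitivity / epsilon
    suffices (the Laplace mechanism).
    """
    values = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(values)
    noise = rng.laplace(scale=sensitivity / epsilon)
    return float(values.mean() + noise)

# Synthetic 0/1 recidivism labels (illustrative only).
outcomes = rng.integers(0, 2, size=5000).astype(float)
true_rate = float(outcomes.mean())
private_rate = dp_mean(outcomes, epsilon=1.0)
# Smaller epsilon means stronger privacy but noisier releases: the
# privacy-accuracy tradeoff evaluated empirically in Thrust 3.
```

The same tension appears during model training (e.g., noisier gradients under differential privacy), which is why the empirical tradeoff analysis spans privacy, fairness, and accuracy jointly.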