(Ferdinando Fioretto)
The integration of differentiable optimization techniques into diffusion models is motivated by the need to ensure that generated outputs satisfy specific requirements, such as physical laws, operational constraints, and other domain rules. This approach aims to create more trustworthy and reliable generative AI models.
This project explores how to incorporate constraints directly into generative diffusion processes. By leveraging differentiable optimization, we can guide the generation process toward outputs that satisfy predefined constraints, enhancing the applicability and trustworthiness of these models in various domains (a rough sketch of this idea is given below).
generative AI, constraints, trustworthy ML, guarantees
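As a rough illustration of the constrained-generation idea above, the sketch below projects each intermediate sample of a reverse-diffusion sampler onto a linear constraint set {x : A x = b}. The function denoise_step is a hypothetical stand-in for one step of a trained sampler, and the linear-constraint choice is only an assumption made for this example, not the project's actual method.

import torch

def project_onto_affine(x, A, b):
    # Euclidean projection onto {x : A x = b}; A is assumed to have full row rank.
    residual = A @ x - b
    correction = A.T @ torch.linalg.solve(A @ A.T, residual)
    return x - correction

def constrained_sample(denoise_step, x_T, A, b, num_steps):
    x = x_T
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)              # standard reverse-diffusion update
        x = project_onto_affine(x, A, b)    # pull the iterate back onto the constraint set
    return x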
Physics-informed machine learning has the potential to significantly accelerate optimization processes, particularly in complex systems such as energy management and scheduling. This project seeks to harness the power of ML to improve the efficiency and effectiveness of optimization tasks.
By using techniques from optimization theory, we can construct surrogate models that learn to approximate optimal solutions. These models can be trained to respect physical principles and operational constraints, providing faster and more accurate optimization, reducing computational costs, and improving performance (a minimal training sketch is given below).
differentiable optimization, decision-focused learning
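Under simplifying assumptions, the sketch below shows what such a surrogate might look like: a small network maps problem parameters (e.g., demands) to candidate decisions and is trained with the task objective plus a penalty on constraint violations. The objective and violation functions are illustrative placeholders, not the project's actual models.

import torch
import torch.nn as nn

class Surrogate(nn.Module):
    def __init__(self, param_dim, decision_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, decision_dim),
        )

    def forward(self, p):
        return self.net(p)

def train_step(model, optimizer, params, objective, violation, penalty=10.0):
    # objective(x, p): scalar cost to minimize; violation(x, p): amount by which
    # physical/operational constraints are violated (non-positive when feasible).
    x = model(params)
    loss = objective(x, params) + penalty * violation(x, params).clamp(min=0).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()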
Providing guarantees in the outputs of large language models is crucial for ensuring their reliability and trustworthiness. Constraints help in formalizing these guarantees, making the models more robust and dependable.
This project aims to formalize and impose constraints on the outputs of autoregressive generative models. By integrating such constraints into the decoding process, we can guide generation toward outputs that meet specific criteria, enhancing their utility and trustworthiness (a minimal decoding sketch is given below).
Generative AI, LLMs, guarantees, trustworthy ML
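One simple way to picture constraint enforcement at decoding time, assuming the constraint can be expressed as a set of admissible next tokens, is to mask the logits of disallowed tokens at each step. In the sketch below, model (returning next-token logits) and allowed_next_tokens (mapping the current prefix to admissible token ids) are hypothetical stand-ins; the project's actual formalization may differ.

import torch

@torch.no_grad()
def constrained_decode(model, prefix_ids, allowed_next_tokens, max_new_tokens=50):
    ids = prefix_ids
    for _ in range(max_new_tokens):
        logits = model(ids)[:, -1, :]                 # logits for the next token
        mask = torch.full_like(logits, float("-inf"))
        mask[:, allowed_next_tokens(ids)] = 0.0       # keep only admissible tokens
        next_id = (logits + mask).argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids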
The principle of data minimization is central to data privacy, aiming to collect and process only the data necessary for a specific purpose. However, its application in machine learning systems requires careful definition and assessment.
This project seeks to define what data minimization means for ML systems and how to assess whether its privacy goals are met. It includes developing notions of data minimization that have legal validity, are useful for policymakers, and can be implemented efficiently in ML algorithms.
data privacy, guarantees, privacy-preserving ML
Fairness in machine learning (ML) is crucial to ensure that ML systems do not perpetuate or exacerbate biases. This is especially important when these systems are constrained by privacy, space, and robustness requirements, which can lead to unintended fairness issues.
This project aims to explore how constraints in ML systems impact fairness. By studying the effects of privacy, space, and robustness constraints, we can develop methods to mitigate unintended consequences and improve fairness.
Fairness, Privacy
Fairness in large language models (LLMs) is essential to avoid generating biased or harmful content. Unfairness can arise in various stages of model training and deployment, making it critical to understand and address these issues.
This project focuses on studying fairness in LLMs, particularly in fine-tuning techniques like Low-Rank Adaptation (LoRA). By examining how unfairness arises and identifying mitigation strategies, we aim to improve the fairness of LLM outputs (a minimal LoRA sketch is given below for reference).
Fairness, LLMs
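For concreteness, the sketch below shows a bare-bones LoRA-style adapter: the pretrained weights are frozen and only a low-rank update is trained, which is the fine-tuning setting whose fairness effects would be studied. This is an assumed illustration, not the project's code.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=4, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # low-rank update B @ A added on top of the frozen base layer
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)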
Unlearning in machine learning (ML) refers to the ability to remove specific data points or information from a model, ensuring that the model no longer retains any influence from the removed data. This is important for privacy, compliance, and correcting errors.
This project explores methods to provide unlearning guarantees, develop unlearning concepts, and implement unlearning under various constraints. It also investigates unlearning in the context of LLMs to ensure that these models can forget specific information as needed (a simple baseline is sketched below).
Privacy-preserving ML, LLMs, differential privacy
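As one simple point of reference, the sketch below shows an approximate-unlearning baseline sometimes discussed in the literature: gradient-ascent steps on the data to be forgotten, so the model's loss on that data increases. It is an illustration only, with assumed inputs; certified unlearning guarantees require stronger machinery.

import torch

def gradient_ascent_unlearn(model, loss_fn, forget_loader, lr=1e-4, steps=1):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for inputs, targets in forget_loader:
            optimizer.zero_grad()
            loss = -loss_fn(model(inputs), targets)   # ascend, not descend, on the forget set
            loss.backward()
            optimizer.step()
    return model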
Differential privacy (DP) is a technique used to protect individual data points in a dataset. However, it can also affect the fairness of machine learning models, potentially introducing biases or impacting model performance.
This project develops a theoretical analysis of the bias and variance introduced by differential privacy mechanisms. It focuses on specific applications, such as resource allocation, to understand the trade-offs between privacy and fairness (one source of bias is sketched below).
Privacy-preserving ML, differential privacy, fairness
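The sketch below illustrates one such source of bias under assumed parameter values: Laplace noise is unbiased on its own, but clipping noisy counts to be non-negative, a common post-processing step before allocating resources, skews small counts upward.

import numpy as np

rng = np.random.default_rng(0)
true_count, epsilon, sensitivity = 2.0, 0.5, 1.0

# Laplace mechanism: add noise with scale = sensitivity / epsilon.
noisy = true_count + rng.laplace(0.0, sensitivity / epsilon, size=100_000)
# Post-processing: negative counts are not meaningful for allocation, so clip.
clipped = np.clip(noisy, 0.0, None)

print(f"mean of noisy counts   : {noisy.mean():.3f}   (close to the true value)")
print(f"mean of clipped counts : {clipped.mean():.3f}   (biased upward)")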