This repository contains the project for the Advanced AI course @CentraleSupélec
Code for the multiplicative filter attack (MUFIA), from the paper "Frequency-based vulnerability analysis of deep learning models against image corruptions".
Evaluation & testing framework for computer vision models
Predicting Out-of-Distribution Error with the Projection Norm
[ICML 2019] ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
The Combined Anomalous Object Segmentation (CAOS) Benchmark
📚 A curated list of papers & technical articles on AI Quality & Safety
Repo for "Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions" https://arxiv.org/abs/2201.12296
Aligning AI With Shared Human Values (ICLR 2021)
ImageNet-R(endition) and DeepAugment (ICCV 2021)
Self-Supervised Learning for OOD Detection (NeurIPS 2019)
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022
Deliver safe & effective language models
Deep Anomaly Detection with Outlier Exposure (ICLR 2019)
A Harder ImageNet Test Set (CVPR 2021)
Corruption and Perturbation Robustness (ICLR 2019)
🐢 Open-Source Evaluation & Testing for ML & LLM systems