We are building the first holistic Testing & Evaluation platform for AI models. We help AI practitioners (Data Scientists & AI Engineers) increase the efficiency of their AI development workflow, eliminate the risks of AI bias, and ensure robust, secure & compliant AI models.
We are a team of engineers & researchers who have been working on AI Quality, Security & Compliance since 2021. While we are excited about the new opportunities AI brings, we acknowledge the risks involved.
We believe it is crucial to have independent third-party evaluations to control the risks of AI models. Conducted by entities separate from the AI developers, these evaluations provide important checks and balances to ensure responsible regulation of the AI ecosystem.
By sponsoring our open-source project, you can help bring AI into the age of Quality, Security & Compliance!
Meet the team
- Jean-Marie John-Mathews (jmsquare): Co-founder & co-CEO of Giskard | Ph.D. in AI Ethics, Ex-Thales data scientist
- Alex Combessie (alexcombessie): Co-founder & co-CEO of Giskard | Ex-Dataiku AI engineer & data scientist
- Matteo (mattbit): CTO @ Giskard
- Inoki (Inokinoki): Software Engineer @ Giskard
- Blanca Rivera Campos (BlancaRiveraCampos): Community & Growth Manager @ Giskard
- Pierre Le Jeune (pierlj): ML Research @ Giskard
- Kevin Messiaen (kevinmessiaen): Software Engineer @ Giskard
- Henrique Chaves (henchaves): Developing data products 😊
Featured work
- Giskard-AI/giskard: 🐢 Open-Source Evaluation & Testing for ML & LLM systems (Python, 4,071 stars)
- Giskard-AI/awesome-ai-safety: 📚 A curated list of papers & technical articles on AI Quality & Safety
- Giskard-AI/giskard-vision: 📸 Open-Source Evaluation & Testing for Computer Vision models (Python, 21 stars)