We're excited to announce the second version of the Databricks AI Security Framework (DASF 2.0, available for download now)! Organizations racing to harness AI's potential need both the 'fuel' of innovation and the 'brakes' of governance and risk management. The DASF bridges this gap, serving as a comprehensive guide to AI risk management and enabling secure, impactful AI deployments for your organization.
This blog provides an overview of the DASF, explores key insights gained since the original version was released, introduces new resources to deepen your understanding of AI security, and shares updates on our industry contributors.
What is the Databricks AI Security Framework, and what's new in version 2.0?
The DASF is a framework and whitepaper for managing AI security and governance risks. It enumerates the 12 canonical AI system components, their respective risks, and actionable controls to mitigate each risk. Created by the Databricks Security and ML teams in partnership with industry experts, it bridges the gap between business, data, governance, and security teams, offering practical tools and actionable strategies to demystify AI, foster collaboration, and ensure effective implementation.
Unlike other frameworks, DASF 2.0 builds on existing standards to provide an end-to-end risk profile for AI deployments. It delivers defense-in-depth controls that are straightforward for your organization to operationalize and can be applied on your chosen data and AI platform.
In DASF 2.0, we've identified 62 technical security risks and mapped them to 64 recommended controls for managing the risk of AI models. We've also expanded mappings to leading industry AI risk frameworks and standards, including MITRE ATLAS, OWASP LLM & ML Top 10, NIST 800-53, NIST CSF, HITRUST, ENISA's Securing ML Algorithms, ISO 42001, ISO 27001:2022, and the EU AI Act.
Operationalizing the DASF – check out the new compendium and companion tutorial video!
We've received valuable feedback as we've shared the DASF at industry events, workshops, and customer meetings. Many of you have asked for more resources to make it easier to navigate the DASF, operationalize it, and map your controls effectively.
In response, we're excited to announce the release of the DASF compendium document (Google Sheet, Excel). This resource is designed to help operationalize the DASF by organizing and applying its risks, threats, controls, and mappings to industry-recognized standards from organizations such as MITRE, OWASP, NIST, ISO, and HITRUST. We've also created a companion tutorial video that provides a guided walkthrough of the DASF and its compendium.
Our goal with these updates is to make the DASF easier to adopt, empowering organizations to implement AI strategies securely and confidently. If you're eager to dive in, our team recommends the following approach:
- Understand your stakeholders, deployment models, and AI use cases: Start with a business use case, leveraging the DASF whitepaper to identify the best-fit AI deployment model. Choose from 80+ Databricks Solution Accelerators to guide your journey. Deployment models include Predictive ML Models, Foundation Model APIs, Fine-tuned and Pre-trained LLMs, RAG, AI Agents with LLMs, and External Models. Ensure clarity on AI development within your organization, including use cases, datasets, compliance needs, processes, applications, and responsible stakeholders.
- Review the 12 AI system components and 62 risks: Understand the 12 AI system components, the traditional cybersecurity and novel AI security risks associated with each component, and the responsible stakeholders (e.g., data engineers, data scientists, governance officers, and security teams). Use the DASF to foster collaboration across these groups throughout the AI lifecycle.
- Review the 64 available mitigation controls: Each risk is mapped to prioritized mitigation controls, beginning with perimeter and data security. These risks and controls are further aligned with 10 industry standards, providing additional detail and clarity.
- Use the DASF compendium to localize risks, control applicability, and risk impacts: Start by using the "DASF Risk Applicability" tab to identify the risks relevant to your use case by selecting one or more AI deployment models. Next, review the associated risk impacts, compliance requirements, and mitigation controls. Finally, document the key details of your use case, including the AI use case description, datasets, stakeholders, compliance considerations, and applications.
- Implement the prioritized controls: Use the "DASF Control Applicability" tab of the compendium to review the applicable DASF controls, then implement the mitigation controls in your data platform across the 12 AI components. If you're using Databricks, we've included links with detailed instructions on how to deploy each control on our platform. For one way to work through the compendium programmatically, see the sketch after this list.
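If you prefer to script this triage rather than work in the spreadsheet UI, a minimal sketch along the following lines can filter the compendium's risks by deployment model. The file name, sheet name, and column headers used here are assumptions for illustration; check them against the tabs and headers in your downloaded copy of the compendium.

```python
# Minimal sketch: filter the DASF compendium (Excel download) for the risks
# that apply to a chosen AI deployment model. The file name, sheet name, and
# column headers are assumptions -- verify them against your copy.
import pandas as pd

COMPENDIUM_PATH = "dasf_2_0_compendium.xlsx"  # hypothetical local file name
RISK_SHEET = "DASF Risk Applicability"        # tab referenced in the steps above


def risks_for_deployment_model(model_name: str) -> pd.DataFrame:
    """Return the risks whose per-model applicability column is marked."""
    df = pd.read_excel(COMPENDIUM_PATH, sheet_name=RISK_SHEET)
    # Assumed layout: one column per deployment model (e.g., "RAG"),
    # non-empty wherever a risk applies to that model.
    applicable = df[df[model_name].notna()]
    return applicable[["Risk ID", "Risk", "Mitigation Controls"]]  # assumed headers


if __name__ == "__main__":
    print(risks_for_deployment_model("RAG").head())
```

Reading the sheet this way requires pandas with an Excel engine installed (e.g., pip install pandas openpyxl); the same filtering works directly in the Google Sheet version with filter views.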
Implement the DASF in your organization with new AI upskilling resources from Databricks
According to a recent Economist Impact study, surveyed data and AI leaders identified upskilling and fostering a growth mindset as key priorities for driving AI adoption in 2025. As part of the DASF 2.0 launch, we have resources to help you understand AI and ML concepts and apply AI security best practices in your organization.
- Databricks Academy Training: We recommend taking the new AI Security Fundamentals course, now available on the Databricks Academy. This one-hour course is a great primer on the AI security topics highlighted in the DASF before you dive into the whitepaper, and you'll receive an accreditation for your LinkedIn profile upon completion. If you're new to AI and ML concepts, start with our Generative AI Fundamentals course.
- How-to videos: We've recorded DASF overview and how-to videos for quick consumption. You can find them on our Security Best Practices YouTube channel.
- In-person or virtual workshop: Our team offers an AI Risk Workshop, a live walkthrough of the concepts outlined in the DASF that focuses on overcoming obstacles to operationalizing AI risk management. This half-day event targets Director+ leaders in governance, data, privacy, legal, IT, and security functions.
- Deployment help: The Security Analysis Tool (SAT) monitors adherence to security best practices in Databricks workspaces on an ongoing basis. We recently upgraded the SAT to streamline setup and enhance its checks, aligning them with the DASF for improved coverage of AI security risks.
- DASF AI assistant: Databricks customers can configure a Databricks AI Security Framework (DASF) AI assistant right in their own workspace with no prior Databricks experience, interact with the DASF content in plain human language, and get answers. A rough sketch of calling such an assistant from code follows this list.
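For readers who want to query an assistant like this programmatically rather than through the workspace UI, the sketch below shows the generic pattern for invoking a chat-style Databricks Model Serving endpoint. The endpoint name "dasf-assistant" is hypothetical and this is not the assistant's documented setup; it only illustrates how a workspace-hosted chat endpoint is typically called.

```python
# Rough sketch: send a DASF question to a chat-style Databricks Model Serving
# endpoint. The endpoint name "dasf-assistant" is hypothetical -- substitute
# whatever endpoint you configured in your own workspace.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g., https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]  # a workspace personal access token

response = requests.post(
    f"{HOST}/serving-endpoints/dasf-assistant/invocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "messages": [
            {
                "role": "user",
                "content": "Which DASF controls mitigate prompt injection in a RAG application?",
            }
        ]
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())
```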
Building a community with AI industry groups, customers, and partners
Ensuring that the DASF evolves in line with the current AI regulatory environment and emerging threat landscape is a top priority. Since the launch of version 1.0, we have formed an AI working group of industry colleagues, customers, and partners to stay closely aligned with these developments. We want to thank our colleagues in the working group and our pre-reviewers, including Complyleft, The FAIR Institute, Ethriva Inc, Arhasi AI, Carnegie Mellon University, and Rakesh Patil from JPMC. You can find the full list of contributors in the acknowledgments section of the DASF. If you would like to participate in the DASF AI Working Group, please contact our team at [email protected].
Here's what some of our top advocates have to say:
"AI is revolutionizing healthcare delivery through innovations like the CLEVER GenAI pipeline, which processes over 1.5 million clinical notes daily to classify key social determinants impacting veteran care. This pipeline is built on a strong security foundation, incorporating NIST 800-53 controls and leveraging the Databricks AI Security Framework to ensure compliance and mitigate risks. Looking ahead, we're exploring ways to extend these capabilities through Infrastructure as Code and secure containerization strategies, enabling agents to be dynamically deployed and scaled from repositories while maintaining rigorous security standards." – Joseph Raetano, Artificial Intelligence Lead, Summit Data Analytics & AI Platform, U.S. Department of Veterans Affairs
"DASF is the essential tool for transforming AI risk quantification into an operational reality. With the FAIR-AI Risk approach now in its second year, DASF 2.0 enables CISOs to bridge the gap between cybersecurity and business strategy, speaking a common language grounded in measurable financial impact." – Jacqueline Lebo, Founder, AI Workgroup, The FAIR Institute, and Risk Advisory Manager, Safe Security
"As AI continues to transform industries, securing these systems from sophisticated and unique cybersecurity attacks is more critical than ever. The Databricks AI Security Framework is a great asset for companies looking to lead from the front on both innovation and security. With the DASF, companies are equipped to better understand AI risks and to find the tools and resources to mitigate those risks as they continue to innovate." – Ian Swanson, CEO, Protect AI
"With the Databricks AI Security Framework, we're able to mitigate AI risks thoughtfully and transparently, which is invaluable for building board and employee trust. It's a game changer that allows us to bring AI into the business and be among the 15% of organizations getting AI workloads to production safely and with confidence." – Coastal Community Bank
"Within the context of data and AI, conversations around security are few. The Databricks AI Security Framework addresses this often neglected aspect of AI and ML work, serving as a best-in-class guide not only for understanding AI security risks, but also for how to mitigate them." – Josue A. Bogran, Architect at Kythera Labs & Advisor to SunnyData.ai
"We've used the Databricks AI Security Framework to help enhance our organization's security posture for managing ML and AI security risks. With the Databricks AI Security Framework, we are now more confident exploring possibilities with AI and data analytics while ensuring we have the proper data governance and security measures in place." – Muhammad Shami, Vice President, Jackson National Life Insurance Company
Download the Databricks AI Security Framework 2.0 today!
The Databricks AI Security Framework 2.0 and its compendium (Google Sheet, Excel) are now available for download. To learn about upcoming AI Risk workshops, or to request a dedicated in-person or virtual workshop for your organization, contact us at [email protected] or reach out to your account team. We also have additional thought leadership content coming soon to provide further insights into managing AI governance. For more on how to manage AI security risks, visit the Databricks Security and Trust Center.