7-Eleven’s Data Documentation Dilemma
7-Eleven’s data ecosystem is vast and sophisticated, housing thousands of tables with hundreds of columns across our Databricks environment. This data forms the backbone of our operations, analytics and decision-making processes. Historically, 7-Eleven’s data dictionary and documentation lived in Confluence pages, meticulously maintained by our data team members who would manually document table and column definitions.
We faced a critical roadblock as we began exploring the AI-powered features on the Databricks Data Intelligence Platform, including AI/BI Genie, intelligent dashboards and other applications. These advanced tools rely heavily on table metadata and comments embedded directly within Databricks to generate insights, answer questions about our data, and build automated visualizations. Without proper table and column comments in Databricks itself, we were essentially leaving powerful AI capabilities on the table. For example, when Genie lacks column definitions, it can misinterpret the meaning of bespoke columns, requiring end users to clarify. Once we enriched our metadata, Genie’s contextual understanding improved dramatically: it accurately identified column purposes, surfaced the right tables in response to natural language queries, and produced far more relevant and actionable insights. Simply put, Genie, like all AI agents, gets more thoughtful and more helpful when it has better metadata to work with.
The gap between our well-documented Confluence pages and our “metadata-light” Databricks environment was preventing us from realizing the full potential of our data platform investment.
Manual Migration’s Impossible Scale
When we initially considered migrating our documentation from Confluence to Databricks, the scale of the challenge became immediately apparent. With thousands of tables containing hundreds of columns each, a manual migration would require:
- Time-intensive labor: Hundreds of person-hours to copy and paste documentation
- Manual metadata updates: Crafting thousands of individual SQL statements to update metadata, or editing each table through the UI (a minimal illustration follows this list)
- Project oversight: Implementing a tracking system to ensure all tables were properly updated
- Quality assurance: Creating a validation process to catch inevitable human errors
- Ongoing upkeep: Establishing an ongoing maintenance protocol to keep both systems in sync
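To make “thousands of individual SQL statements” concrete, here is a minimal illustration of the pair of statements each table and column would have needed, written by hand and run from a notebook. The catalog, table and column names below are placeholders, not our actual schema.

```python
# Hypothetical example of the per-table and per-column updates a manual
# migration would require (placeholder names, not 7-Eleven's schema).
# Assumes a Databricks notebook, where `spark` is already defined.

# Table-level comment
spark.sql("""
    COMMENT ON TABLE retail.pos.daily_transactions IS
    'Daily point-of-sale transactions aggregated per store.'
""")

# Column-level comment (repeated for every documented column)
spark.sql("""
    ALTER TABLE retail.pos.daily_transactions
    ALTER COLUMN store_id COMMENT 'Identifier of the store where the transaction occurred.'
""")
```

Multiplied across thousands of tables, hundreds of columns each, and every subsequent Confluence edit, this quickly becomes untenable by hand.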
Human error would be unavoidable even if we dedicated significant resources to this effort. Some tables would be missed, comments would be incorrectly formatted, and the process would likely need to be repeated as documentation evolved. Moreover, the tedious nature of the work would likely lead to inconsistent quality across the documentation.
Most concerning was the opportunity cost. While our data team focused on this migration, they couldn’t work on higher-value initiatives. Every day, we faced delays in strengthening our Databricks metadata, leaving untapped potential in the AI/BI capabilities already at our fingertips.
The Intelligent Document Processing Pipeline
To solve this challenge, 7-Eleven developed a sophisticated agentic AI workflow powered by Llama 4 Maverick, deployed via Mosaic AI Model Serving, that automated the entire documentation migration process through an intelligent multistage pipeline (a simplified sketch of the flow follows the list below):
- Discovery phase: The agent uses Databricks APIs to retrieve all tables, table names and column structures.
- Document retrieval: The agent pulls all relevant data dictionary documents from Confluence, creating a corpus of potential documentation sources.
- Reranking and filtering: Using advanced reranking algorithms, the system prioritizes the most relevant documentation for each table, filtering out noise and irrelevant content. This critical step ensures we match tables with their proper documentation even when naming conventions aren’t perfectly consistent.
- Intelligent matching: For each Databricks table, the AI agent analyzes potential documentation matches, using contextual understanding to determine the correct Confluence page even when names don’t match exactly.
- Targeted extraction: Once the correct documentation is identified, the agent intelligently extracts relevant descriptions for both tables and their columns, preserving the original meaning while formatting them appropriately for Databricks metadata.
- SQL generation: The system automatically generates properly formatted SQL statements to update the Databricks table and column comments, handling special characters and formatting requirements.
- Execution and verification: The agent runs the SQL updates and, through MLflow tracking and evaluation, verifies that metadata was applied correctly, logs results, and surfaces any issues for human review.
- Monitoring and insights: The team also uses the AI/BI Genie Dashboard to track project metrics in real time, ensuring transparency, quality control, and continuous improvement.
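To make the flow concrete, the sketch below shows how such a pipeline can be wired together with the Databricks SDK, a Mosaic AI Model Serving endpoint and MLflow. It is illustrative only, not our production code: the endpoint name, catalog and schema, the Confluence helper, and the response parsing are assumptions, and the reranking, matching and extraction stages are collapsed into a single model call.

```python
# Simplified, hypothetical sketch of the documentation-migration pipeline.
# Assumes a Databricks notebook (where `spark` exists) with the databricks-sdk
# and mlflow packages available; names below are placeholders.
import json

import mlflow
from databricks.sdk import WorkspaceClient
from mlflow.deployments import get_deploy_client

w = WorkspaceClient()                    # picks up workspace auth from the environment
llm = get_deploy_client("databricks")    # client for Mosaic AI Model Serving endpoints

CATALOG, SCHEMA = "retail", "pos"        # placeholder catalog/schema
ENDPOINT = "llama-4-maverick"            # placeholder serving endpoint name


def fetch_confluence_corpus() -> list[dict]:
    # Placeholder: the real agent calls the Confluence REST API and returns
    # every data dictionary page as {"title": ..., "body": ...}.
    return [{"title": "Daily transactions", "body": "store_id: store identifier ..."}]


def ask_llm(prompt: str) -> str:
    # Single chat-completion call against the serving endpoint.
    resp = llm.predict(
        endpoint=ENDPOINT,
        inputs={"messages": [{"role": "user", "content": prompt}]},
    )
    return resp["choices"][0]["message"]["content"]


def esc(text: str) -> str:
    # Escape single quotes for SQL string literals.
    return text.replace("'", "''")


with mlflow.start_run(run_name="confluence_to_databricks_comments"):
    corpus = fetch_confluence_corpus()
    updated = 0

    # Discovery: enumerate tables and their columns via the Databricks SDK.
    for table in w.tables.list(catalog_name=CATALOG, schema_name=SCHEMA):
        columns = [c.name for c in (table.columns or [])]

        # Matching + extraction: ask the model to pick the right documentation
        # and return table/column descriptions as JSON.
        prompt = (
            f"Table: {table.full_name}\nColumns: {columns}\n\n"
            f"Candidate documentation pages:\n{json.dumps(corpus)[:20000]}\n\n"
            'Reply with JSON of the form {"table_comment": "...", "columns": {"name": "..."}} '
            "using only descriptions found in the documentation."
        )
        extracted = json.loads(ask_llm(prompt))

        # SQL generation + execution: apply table and column comments.
        spark.sql(
            f"COMMENT ON TABLE {table.full_name} IS '{esc(extracted['table_comment'])}'"
        )
        for col, desc in extracted.get("columns", {}).items():
            if col in columns:
                spark.sql(
                    f"ALTER TABLE {table.full_name} "
                    f"ALTER COLUMN {col} COMMENT '{esc(desc)}'"
                )
        updated += 1

    # Verification hook: record how many tables were processed for later review.
    mlflow.log_metric("tables_updated", updated)
```

In practice, the matching step would compare each table against the reranked candidates and route low-confidence results to a human reviewer rather than applying them blindly.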
This intelligent pipeline transformed months of tedious, error-prone work into an automated process that completed the initial migration in days. The system’s ability to understand context and make intelligent matches between differently named or structured sources was key to achieving high accuracy.
Since implementing this solution, we plan to migrate documentation for over 90% of our tables, unlocking the full potential of Databricks’ AI/BI features. What began as a lightly used AI assistant has evolved into an everyday tool in our data workflows. Genie’s ability to understand context now mirrors how a human would interpret the data, thanks to the column-level metadata we injected. Our data scientists and analysts can now use natural language queries through AI/BI Genie to explore data, and our dashboards leverage the rich metadata to provide more meaningful visualizations and insights.
The solution continues to provide value as an ongoing synchronization tool, ensuring that as our documentation evolves in Confluence, those changes are reflected in our Databricks environment. This project demonstrated how thoughtfully applied AI agents can solve complex data governance challenges at enterprise scale, turning what seemed like an insurmountable documentation task into an elegant automated solution.
Want to learn more about AI/BI and how it can help unlock value from your data? Learn more here.