Founder and Principal Consultant

Faisal Ali, CISM, CRISC

Founder of ELSA AI. Advisory AI governance for private healthcare.

CISM, ISACA · CRISC, ISACA · 20+ years in regulated environments · AI governance for private healthcare

I help private GP, dental and specialist clinics move from informal AI use to a documented governance position that can be read and challenged by their DPO, board, insurer, MDO and relevant advisers.

Faisal Ali, CISM, CRISC, Founder and Principal Consultant, ELSA AI

The same founder-led delivery described on the homepage: governance evidence for private clinics using ambient scribes, Copilot, ChatGPT and shadow AI.

What I do

Advisory AI governance, built for deployers, not model builders

My work sits at the intersection of cybersecurity, information risk, data protection evidence and practical AI governance. ELSA AI focuses on organisations deploying third-party AI tools, not companies building AI models or medical devices.

Most private clinics do not start with a formal AI programme. They start with informal adoption: one clinician using an ambient scribe, another using ChatGPT to draft correspondence, reception staff testing AI summaries, or AI-enabled features appearing inside existing supplier platforms.

The governance risk is not that AI is being used. The risk is having no documented position when the DPO, board, insurer, MDO or regulator asks how that use is controlled.

ELSA AI helps clinics answer that question with evidence: what AI tools are in use, where patient data may be involved, what vendor evidence exists, whether a DPIA review is required or strongly indicated, how patients are informed, how outputs are reviewed, and what actions are needed in the next 30 days.

What I bring

More than two decades in regulated environments

I hold the CISM and CRISC certifications from ISACA and have more than two decades of experience across cybersecurity, information risk, governance, assurance and control evidence in regulated environments.

Before founding ELSA AI, I worked across sectors where weak evidence could quickly become a board, regulator, insurer or operational problem. That experience shaped the way ELSA AI works: practical, evidence-led, commercially focused and built for accountable decision-makers.

Selected previous environments include work across organisations in the sectors listed below. That experience informs my approach; it is not an endorsement by those organisations unless separately agreed.

Financial services

National infrastructure

Defence and secure environments

Retail and supply chain

Public sector and manufacturing

Healthcare and patient data

How I work

Governance evidence over theoretical frameworks

Across regulated environments, the question that matters is simple: can you show the evidence, or can you only describe the intention?

Decision-makers need documentation they can read, challenge and act on. That discipline now applies to AI governance in private healthcare.

1

What controls exist today?

Surface the actual position: declared tools, shadow AI, vendor evidence, existing policies and staff use.

2

Where are the evidence gaps?

Map gaps to relevant published expectations and governance-standard signals, including ICO, CQC, NHS England, MHRA, MDO and jurisdiction-specific sources where applicable.

3

Who owns the risk?

Identify which decisions belong to the DPO, clinical lead, board, insurer, MDO or appointed safety role. ELSA AI does not own final sign-off.

4

What does leadership need before it can act?

Produce board-ready evidence in plain English: not abstract AI ethics documents, not generic policy packs and not consultancy theatre.

Why founder-led matters

Senior-led delivery. Evidence based on your clinic's actual AI use.

Every ELSA AI engagement is senior-led by Faisal Ali. The work requires senior judgement applied to the clinic's actual AI tools, actual workflows, actual patient data exposure and actual evidence gaps.

No generic policy packs. No junior-led delivery model. No offshore handover.

ELSA AI is designed for clinic owners, partners, boards, DPOs and clinical leads who need clear evidence they can act on.

Services

Core engagements

Clinical AI Exposure Diagnostic

4 working days from completed intake.

Identifies declared and shadow AI use, maps patient data exposure, assesses ambient scribe governance readiness where relevant, and produces a board-ready evidence pack and 30-day priority action plan.

Typical fee: £4,500-£6,500 + VAT.

Clinical AI Safe Usage Launchpad

4 to 6 weeks.

Converts Diagnostic findings into a working AI governance baseline structured for board review and adoption: policy, tool register, risk register, DPIA readiness workpack, vendor evidence tracker, patient transparency wording, staff guidance, incident process and board evidence pack.

Typical fee: £14,500-£22,000 + VAT.

AI Exposure Sentinel

Quarterly retainer.

Keeps the AI governance evidence pack current as tools, vendor terms, staff usage, insurer questions and regulatory expectations change.

Typical fee: £950/month or £10,500/year prepaid + VAT.

Advisory governance support only. ELSA AI does not provide legal advice, CQC certification, ICO approval, insurer coverage advice, MDO indemnity advice or clinical safety case sign-off. Final decisions on legal interpretation, DPIA sign-off, clinical safety documentation, MDO disclosure, insurer notification, regulatory engagement and risk acceptance remain with the client's accountable officers, DPO, legal advisers, clinical leads, insurers and appointed safety roles where applicable.