Enterprise technology attorney with 15+ years negotiating complex commercial deals, building privacy programs, and governing AI risk. Deep legal expertise combined with the operational fluency of someone who has sat on the business side of the table.
My career has not followed a straight line, and that is the point. Fifteen years across Oracle and Ellucian built a foundation that most attorneys develop only one piece of: deep commercial deal experience, in-house privacy program leadership, hands-on AI governance, and M&A integration work at scale. The domains below reflect how I actually practice, not how a title describes me.
I build AI-assisted legal workflows using Claude. Below is a live demonstration of a five-skill product counsel governance system: a master router, pre-ship AI governance review, DPIA assessment, post-ship monitoring, and report assembly.
The static walkthrough shows the full-stack output from a sample scenario; the live component runs the router and one workflow against any scenario you submit.
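The master-router pattern can be sketched as a simple dispatch function. This is a minimal illustration only: the workflow names mirror the five skills above, but the keyword triage rules are hypothetical stand-ins — the actual system delegates classification to Claude rather than string matching.

```python
# Illustrative sketch of the master-router dispatch pattern.
# Triage rules are simplified placeholders, not the production logic.

WORKFLOWS = {
    "pre_ship": "Pre-ship AI governance review",
    "dpia": "DPIA assessment",
    "post_ship": "Post-ship monitoring",
    "report": "Report assembly",
}

def route(scenario: str) -> str:
    """Return the key of the workflow the router would dispatch to."""
    text = scenario.lower()
    if "incident" in text or "drift" in text:
        return "post_ship"          # something already shipped went wrong
    if "personal data" in text or "employee" in text:
        return "dpia"               # privacy-impact triage comes first
    if "launch" in text or "ship" in text:
        return "pre_ship"           # new feature heading to market
    return "report"                 # default: assemble what we know

print(WORKFLOWS[route("Incident report: scoring model drift detected")])
# → Post-ship monitoring
```

In the real system the router also passes structured context (jurisdiction, data categories, deployment stage) downstream so each workflow starts from the same facts.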
Annex III, Point 4(b): AI systems used to assist in decisions on promotion, compensation, task allocation, and monitoring of performance and behavior. Employment-related AI systems that influence compensation and promotion fall squarely within Annex III. The use of behavioral metadata as proxy performance indicators reinforces this classification.
GOVERN: No documented ownership or accountability structure is described. It is unclear who is responsible for the system's outputs, who has authority to override a score, and what escalation path exists when a score is disputed. This is a blocking gap.
MAP: Foreseeable risks include disparate impact on protected classes; proxy discrimination via metadata; opacity to employees; vendor dependency on a third-party LLM; and scope creep risk.
MEASURE: No testing, bias detection, or benchmark methodology is described. For a High-Risk system under the EU AI Act, this is a blocking gap.
MANAGE: No mitigation or incident response procedure is described. Required before ship: a human oversight protocol, a dispute/correction mechanism for employees, and a vendor incident notification clause.
Article 22 (Automated Decision-Making): If the system produces scores that managers use without meaningful independent review, this may constitute automated decision-making with legal or similarly significant effects, triggering Article 22 rights. This is the highest-priority legal question for EU/UK deployment.
Data Minimization: Email metadata and Slack message frequency are behavioral proxies. The proportionality argument for using communication volume as a performance indicator is weak without validation evidence.
Purpose Limitation: Email and Slack data were almost certainly collected for communication purposes, not performance evaluation. Repurposing them for scoring requires either a compatibility assessment under Article 6(4) or a fresh lawful basis in the EU/UK.
Five of the nine WP248 high-risk criteria are satisfied; a DPIA is required when two or more are present.
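The WP248 trigger is mechanical enough to express directly. The sketch below paraphrases the nine criteria from the Article 29 Working Party guidelines; which five the scenario satisfies is my illustrative reading of the facts above, not a definitive determination.

```python
# Illustrative check of the WP248 DPIA trigger: a DPIA is required
# when two or more of the nine high-risk criteria are present.
# Criterion names paraphrase the WP248 guidelines.

WP248_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_of_technology",
    "prevents_exercise_of_rights",
}

def dpia_required(criteria_met: set[str]) -> bool:
    unknown = criteria_met - WP248_CRITERIA
    if unknown:
        raise ValueError(f"unrecognized criteria: {unknown}")
    return len(criteria_met) >= 2

# Hypothetical reading of the scenario: five criteria met
# (employees count as vulnerable data subjects under WP248).
met = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
}
print(dpia_required(met))  # → True
```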
| Risk | Inherent | Residual (post-mitigation) |
|---|---|---|
| Disparate impact via proxy metrics | Critical | Medium |
| Article 22 violation | Critical | Medium |
| Germany §87 BetrVG non-compliance | Critical | Low (if works council engaged) |
| Employee opacity/contestation failure | High | Medium |
| LLM vendor DPA missing | High | Low |
| Article 9 latent exposure | High | Medium |
If residual risk remains HIGH or CRITICAL after mitigation, prior consultation with the relevant supervisory authority (the ICO for the UK, the competent Landesbeauftragte for Germany) is required under Article 36 GDPR before processing begins. On current facts, consultation may be required even with mitigations in place.
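The Article 36 escalation rule reduces to a simple floor check over the residual column of the matrix above. This sketch is only the mechanical minimum — as noted, on these facts consultation may be prudent even when every residual lands at Medium or Low. The risk keys are shorthand I chose for illustration.

```python
# Sketch of the Article 36 GDPR escalation floor: if any residual
# risk remains High or Critical after mitigation, prior consultation
# with the supervisory authority is mandatory before processing.
# This is a floor, not a ceiling — consultation can still be prudent.

from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Residual values mirror the risk matrix above (shorthand keys mine).
residual_risks = {
    "disparate_impact_via_proxy_metrics": Level.MEDIUM,
    "article_22_violation": Level.MEDIUM,
    "germany_s87_betrvg": Level.LOW,      # assumes works council engaged
    "employee_opacity": Level.MEDIUM,
    "llm_vendor_dpa_missing": Level.LOW,
    "article_9_latent_exposure": Level.MEDIUM,
}

def prior_consultation_required(risks: dict[str, Level]) -> bool:
    return any(level >= Level.HIGH for level in risks.values())

print(prior_consultation_required(residual_risks))  # → False
```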
This system is a High-Risk AI system under the EU AI Act and triggers mandatory DPIA obligations under GDPR Article 35. Three CRITICAL-level risks are present: structural proxy discrimination through behavioral metadata, potential Article 22 automated decision-making violations, and a hard legal block on German deployment absent works council consent under §87 BetrVG.
The system cannot legally launch in Germany without works council approval, which must be obtained before deployment, not after. In the EU and UK, the Article 22 compliance posture — specifically whether manager review of AI-generated scores constitutes meaningful human oversight — is unresolved and must be designed into the product before launch.
Pre-deployment bias auditing is required both as a matter of EU AI Act compliance and as a practical defense against disparate impact claims. Four legal research questions are flagged as requiring external verification before this review can be finalized.
Select a pre-loaded scenario or describe your own. The router will determine which workflow applies and run a condensed analysis.
This demo runs a condensed version of the workflow. Full stack output includes detailed risk matrices, consolidated action items, and cross-workflow research flags.
Have a question about my experience, skills, or fit for a specific role? Ask below. This is powered by AI and grounded in my actual background. Try it the way a recruiter or hiring manager would.
The same qualities that make me effective as a lawyer show up everywhere else: a drive to understand things deeply, a preference for doing over observing, and a habit of not stopping once I start.
I'm currently exploring new opportunities in commercial technology transactions, privacy, and AI governance. If you're looking for counsel who combines deep legal expertise with genuine operational fluency, someone who has managed hundreds of deals and can engage as a business partner rather than just an advisor, I'd welcome the conversation.