AI Helps Palo Alto Networks Legal Move at the Speed of Commerce
AGC Hayden Creque shares insights on leadership in the age of AI, plus actionable legal prompting tips
by Petra Pasternak
Survey after survey shows generative AI adoption growing among legal professionals. In-house teams are experimenting to find the best use cases and moving beyond pilots to practical application. Some are even building their own bots.
Hayden Creque, Assistant General Counsel for Sourcing and Operations at leading cybersecurity company Palo Alto Networks, heads up a five-person global procurement legal team and oversees more than $500 million in annual transaction volume. He uses GenAI daily in his work, and the team is experimenting with Agentic AI.
Like most in-house professionals, Creque faces pressure to move faster and do more with less in a fast-changing, increasingly complex regulatory, business, and tech environment. GenAI tools can be powerful assistants that help shave substantial time off rote tasks, he says, including triaging contracts, synthesizing complex regulatory updates, and drafting routine correspondence.
Creque sat down with Everlaw to discuss the leadership qualities needed to create a culture of innovation, and why making space for failure is crucial. He says GenAI can help move legal pros from a drafting role into a consultative one – with transformative results. And he shares practical tips for how to get started.
The most valuable asset for a modern lawyer isn’t a library of static prompts, he says, but the engineering mindset required to build them.
You’re an enthusiastic GenAI user. How did you get started and how do you use these tools in your personal life?
I have always been technology curious, and Palo Alto Networks incentivized that curiosity. Our CEO rolled out an AI challenge, and that served as a catalyst. In my personal life, I found I could use AI to generate a website for my side hustle (I do voiceover work occasionally). My niece loves to write, and I fed her stories into the AI to generate comic books for her.
Where is GenAI helpful in your legal practice?
I’ve learned that if you’re still drafting from a blank page, you’re already behind. Even a generic AI tool can make a practice more efficient. I no longer build presentations or documents like contract amendments, statements of work, or correspondence from scratch. Instead, I feed the base materials into AI to generate about 95% of the initial output.
I never treat its output as a final product, but it saves considerable time and frees you up for more consequential work.
How does Palo Alto Networks make sure people are adopting AI tools?
Like many technology companies, Palo Alto Networks embraced AI at scale almost right out of the box. With the mandate coming from the top, the company invested in first-tier, general-purpose AI that has been deployed across the entire organization.
The pitch was that this makes you more effective. To incentivize first use, our CEO announced a company-wide competition a week after deployment. People submitted AI ideas, and the company received hundreds of submissions. We ended up with a database of ideas, several of which have now been brought to fruition. Everyone who made a submission got a small bonus. I think the word of mouth after that payment really got people thinking.
For our legal team, starting with the enterprise tools your company already uses has been most efficient. They are secure, vetted, low cost, and provide a safe environment to experiment with AI without exposing sensitive data. We’re now moving our adoption beyond those foundational AI tools and looking at very purpose-driven AI applications.
"AI cannot be a solution in search of a problem. You need to start with the bottlenecks."
What are some of the most successful GenAI use cases that have made your legal department shine with the business?
AI cannot be a solution in search of a problem. You need to start with the bottlenecks, where there is a disconnect between the speed of commerce and the speed at which legal moves. My then three-person legal team was inundated with high-pain, high-volume, yet low-complexity drafting and document review. We identified several repetitive, ministerial tasks that AI can solve.
For example, our sourcing team needed a way to consistently generate termination letters, so we designed a Gemini Gem. The user can submit a single contract or a spreadsheet, and the AI generates letters based on the contract’s requirements for termination.
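[Edit. note: Creque’s team built this as a no-code Gemini Gem, so no programming was required. For readers who prefer to script, the sketch below shows a roughly equivalent workflow in Python using Google’s google-generativeai SDK. The prompt wording, file name, and model choice are the editors’ illustrative assumptions, not Palo Alto Networks’ actual configuration.]

```python
# Hypothetical sketch of a termination-letter generator, analogous to the
# no-code Gemini Gem described above. All names, prompt wording, and the
# model choice are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; load from a secret store

SYSTEM_INSTRUCTION = (
    "You are a procurement-legal assistant. Given the text of a supplier "
    "contract, locate its termination clause and draft a termination letter "
    "that follows the clause's notice period, addressee, and delivery "
    "requirements. If any requirement is unclear or missing, do not guess; "
    "flag the contract for attorney review instead."
)

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",  # illustrative model choice
    system_instruction=SYSTEM_INSTRUCTION,
)

def draft_termination_letter(contract_text: str) -> str:
    """Return a draft termination letter (or a review flag) for one contract."""
    response = model.generate_content(
        "Contract text:\n" + contract_text + "\n\nDraft the termination letter."
    )
    return response.text

if __name__ == "__main__":
    # Example usage: one contract per text file; handling a spreadsheet of
    # contracts would loop over rows using the same call.
    with open("contract.txt", encoding="utf-8") as f:
        print(draft_termination_letter(f.read()))
```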
We are also deploying other specifically configured AI sub-tools that empower our clients in procurement to do things they would traditionally have approached legal for. It’s those repeatable, relatively simple tasks that take up an inordinate amount of attorney time for relatively low value.
This allows my team to move from the drafting chair to the review and consultation chair. You’re much happier spending 10 minutes helping clarify an issue than an hour drafting an amendment.
What's been the result?
The impact’s been transformational. Our turnaround times for these routine documents have shrunk from about five business days to just one, saving the legal team 20 to 40 hours per month. Our internal clients are happier, and the business achieves its goals faster.
I’d advise others looking to move from experimentation to implementation not to try to solve every problem with a single tool. Start with one tool that solves one specific problem. That will build stakeholder trust, help with funding, and fuel future AI projects.
AI is not infallible, and has well-known limitations, such as hallucinations. How do you build trust in its accuracy and quality of outputs?
Our pitch, anytime we deploy a tool, is: this is the alpha version; it’s not guaranteed to work. We believe that it’s good to experiment even if we blow up and fail miserably. And we want users to try it because we think it will empower and enable them to move faster. Their use also helps vet and improve the tool. Speed is a premium, I suspect, at every company, but certainly at ours.
The key is making certain that your internal clients know when to come see you. To mitigate errors, we’ve designed guardrails that direct the AI to throw up red flags. We say, “If you go beyond this point, you need to send a message to the user that this document needs to be checked with legal before it’s given to suppliers.”
Different tasks require different levels of review. If I'm extending an effective date, verification takes two seconds. If there's a $2 million deal, I'm still reading that contract end-to-end. AI is better at ministerial tasks than at nuance in my experience.
In your practical AI adoption framework for in-house legal teams, you emphasize that AI adoption is a leadership challenge, not just a tech challenge, requiring trust-building and thoughtful change management. Can you elaborate?
There is zero value proposition in AI unless you are willing to experiment. My personal belief is that your first AI project isn’t actually supposed to succeed. If it fails spectacularly, that is often a better outcome than if it works perfectly, because failure is where you actually learn about the limits of the technology and learn about ensuring appropriate guardrails.
As a leader, my job is to instill a culture where it’s safe to make those mistakes. For example, early on, we tried using NotebookLM as an engine for document generation. We thought we could automate the drafting of complex deal summaries by feeding it a massive corpus of data.
It failed for that specific purpose. It wasn’t the drafting chair solution we expected. However, that failure taught us something vital: the tool was actually an incredible analytical engine. It was better at finding the needle in the haystack across 50 documents than it was at writing the final memo.
Because we didn’t fall apart when the automation didn’t work, the team lost their fear of the tech. We stopped looking for a magic button and started looking for the right tool for the right task.
That is the leadership challenge: moving the team from a place of anxiety, where the question is “What if the AI is wrong?”, to a place of curiosity, where you ask, “How can I break this tool to find its limits?” When you remove the penalty for failure, you gain the freedom to innovate.
[Edit. note: Learn more in "The AI-Empowered Counsel: A Practical Framework for In-house Legal Teams."]
"Do not underestimate your abilities. For the first time probably ever, without a coding background, you can create a pretty solid prompt that will give you accurate results on a repeatable basis."
We’ve been focusing on GenAI. But the headlines have moved on to Agentic AI – the autonomous systems that can actually take action and learn on their own over time. What risks and opportunities do you see with agents compared to GenAI?
I view GenAI as a partner that helps me move from a blank page to a first draft. Agentic AI is the next leap — it’s about moving from drafting to doing.
The opportunity is the ability to automate entire ministerial workflows. An agent doesn’t just draft a document; it can be configured to check a contract’s notice requirements, verify dates against a database, and package the final result for my review. For a high-volume team, that is a massive force multiplier.
However, I approach this with healthy skepticism. The more autonomy you give a tool, the higher the risk that a hallucination becomes a process error rather than just a typo. In legal, you cannot have an agent taking autonomous action without a solid governance framework.
My philosophy is that AI can process, but it cannot think or discriminate based on complex fact patterns. You still need a human in the loop to navigate the nuance and gray areas.
Are you building your own agents for legal?
We are already experimenting. We use Gems and custom-instructed tools to handle repeatable tasks for our internal clients. But we aren’t just handing out finished bots; we’re focused on teaching our team to fish by building a centralized prompt library. We have our eye on the bleeding edge, but we only deploy where the guardrails are as strong as the technology.
"When you remove the penalty for failure, you gain the freedom to innovate."
You discuss prompting as a core legal skill. Will all lawyers need to become prompt engineers to use AI effectively?
I believe the entirety of the AI game, from an attorney’s standpoint, is about how good a prompt engineer you become. The AI can do most everything you need on a day-to-day basis. The issue is some practitioners have not yet become proficient at giving the AI sufficient direction to make things more efficient.
I like to say to other attorneys, especially the skeptics, do not underestimate your abilities. For the first time probably ever, without a coding background, you can actually create a pretty solid prompt that will give you accurate results on a repeatable basis. You know, ChatGPT uses Projects as the mechanism to store custom instructions; Gemini uses a system called Gems.
A lot of the problems we saw, where the work involved routine, repetitive, and largely ministerial tasks, we’ve been able to squash within the legal department just by doing our own prompt engineering.
And here’s the beautiful part about any of the AI that’s out there: if you don’t know how to prompt engineer, it’ll teach you that, too. It’s not that hard to learn how to get very sophisticated and even go so far as to develop a fully-fledged in-house agent.
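[Edit. note: As a hypothetical illustration of the kind of reusable custom instruction Creque describes, a procurement-focused Gem or Project might be instructed along these lines: “You are a procurement contracts assistant. When given a contract, identify the parties, term, renewal, and termination provisions; summarize any deviations from standard positions; and draft requested routine amendments. If a request goes beyond routine, ministerial work, tell the user to consult the legal team.” The wording is the editors’ example, not Palo Alto Networks’ actual prompt.]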
[Edit. note: For Creque's comprehensive guide to legal prompting, with templates and prompt language to get started in multiple practice areas and matter types, see "The 6-Vector Legal AI Prompt Framework."]
What’s your advice to legal teams who fear that AI will replace them in their work?
The entire AI game, in my humble opinion, is about prompting. The worst thing you can do if you lead attorneys – as I have the privilege of doing every day – is simply hand over the prompt when somebody asks, “Can I have the prompt?” AI prompting is very much along the lines of “teach a man to fish.”
Teach your people, your colleagues, and indeed your bosses, how to construct a prompt, because that’s going to be way more important in the grander scheme than giving them the particular prompt that gave a particular output today. Don’t be afraid to experiment, even if you fail. The return on investment is clear if you use it for the right tasks.
When AI first rolled out to the public, I thought lawyers would lose jobs within five years. Now that I’ve used it, I think it’ll be much longer, if it ever happens at all. AI can analyze and process, but it cannot think or discriminate based on complex fact patterns, understand nuance, or bring to bear the panoply of experiences I’ve had over more than 20 years of practice. And with hallucinations still a live risk, we’re years away.
Petra Pasternak is a writer and editor focused on the ways that technology makes the work of legal professionals better and more productive. Before Everlaw, Petra covered the business of law as a reporter for ALM and worked for two Am Law 100 firms.