There's a New Type of Employee at Your Company — And Nobody's Onboarding Them
AI agents are functioning like employees. They access sensitive data, interact with customers, and make judgment calls. But nobody gave them the employee handbook. Here's a free governance framework to fix that.
Your company just hired a bunch of employees who've never been through orientation. You didn't post the job. You didn't interview them. The only rules they follow were written by engineers in San Francisco, not by you. You don't even know what those rules say. But they're already working.
They're AI agents. And they're everywhere now.
For most of last year, they were an engineering thing. Writing code, testing software, generating documentation. Useful, but contained. Then over the last couple of months, they broke out of engineering and into every function. Sales. Finance. HR. Customer support. What was a slow trickle has turned into a flood.
What's an AI agent? Unlike a chatbot that waits for you to ask it something, an AI agent takes action on its own. It reads documents, sends emails, updates records, makes decisions, all based on a goal you give it. Think less search engine, more new hire who's been handed access to your systems and told to get things done.
These AI agents are functioning like employees. They access sensitive data. They interact with customers. They make judgment calls. They operate under your brand.
But nobody gave them the employee handbook. Nobody told them the rules. They were "hired" without the governance that HR brings to the table. And the AI agents? They aren't even aware there are any company rules.
Regulators are moving, but AI moves faster
The laws are already here, and they'll likely need to be rewritten more than once to keep up.
NYC Local Law 144 requires bias audits for AI used in hiring decisions. Colorado's AI Act (effective 2026) creates obligations around high-risk AI systems. The EU AI Act classifies employment-related AI as high-risk with mandatory transparency requirements. Illinois just passed HB 3773, prohibiting AI-driven employment discrimination, including unintentional disparate impact. Texas HB 1709 is working through the legislature right now.
And that's just the U.S. More are coming, and the ones already on the books will almost certainly evolve as the technology does.
Which is exactly the problem. You can't wait for the regulatory dust to settle, because it won't. Most companies know this wave is building. What they don't have is a practical answer, and part of the reason is that each team thinks it's someone else's problem. Legal thinks it's IT. IT thinks it's HR. HR thinks it's legal. And leadership hasn't put them all in a room together on it yet. So instead, the typical response? Bolt a paragraph onto the existing AI acceptable-use policy and call it done.
That's like hiring a bunch of smart MBAs, handing them the company credit card, never telling them first class isn't covered, and then never checking the statements.
Here's the thing nobody's saying out loud: leadership needs HR on this more than they realize. Your CTO can tell you what the AI agents are capable of. Your legal team can flag the regulatory exposure. But HR is the only function that knows how to onboard a workforce, set behavioral expectations, build accountability structures, and make policies stick across an entire organization. That's not IT's skill set. It's not legal's either. AI agents are a workforce problem, and workforce problems are what HR has been solving for decades.
If you're in HR and you're reading this, don't wait to be invited into the conversation. You should be the one starting it. If you're the CEO, your Head of HR needs to have an equal seat at the table.
We've been building employee handbooks for six years. So, of course, we had to build one for AI agents.
At AirMason, we build handbook and policy platforms for organizations, from startups to large enterprises. Six years of telling employees how the organization works.
Around mid-December, tools like Claude Code started changing how our own team worked. Within weeks we had AI agents handling content operations, internal tooling, customer support. And it hit me: we have this weird gap where AI agents are doing what employees do, but they don't have the same security awareness training or values training that employees went through.
No code of conduct. No SOP. No policies to govern them. We were pretending this wasn't our problem yet. Sound familiar?
We weren't the only ones thinking about this. Mike Murchison, CEO of Ada, was early to recognize that company values need to flow through AI tools. His team built ada.md, a file that encodes Ada's values and way of working, and an onboarding command that sets up new team members' AI environments in one step. It's a smart move from a brilliant founder, making sure every employee at Ada has AI tools that already understand the company's DNA from day one.
Our goal was to take this a step further. Your employees need to know how to work with AI agents: what they can trust them with, what they can't, and what to do when something goes sideways. And your AI agents need to know your rules. Not Anthropic's, not OpenAI's. Yours. So we built the building blocks and templates to create your own governance layer, for your AI agents and the employees working alongside them.
Let's unpack it
This framework was developed in collaboration with Claude (Anthropic). We used the same AI tools we're writing policies for. It felt right.
Three documents. Each one does a different job.
The AI Agent Policies — the full governance document, for leadership, legal, IT. The complete picture: AI agent lifecycle, data governance, scope-of-authority levels, incident response, regulatory compliance, vendor management.
The AI Agent Employee Handbook — for all your employees. What AI agents can and can't do, how to report concerns, what the rules are around AI in hiring. Structured to be an extension of your core employee handbook.
The AI Agent Handbook Brief — a machine-readable document that goes directly into your AI agent system prompts. Instead of hoping your AI agents will somehow absorb your policies, you hand them the handbook. Instruction hierarchy, data handling rules, escalation triggers, all in a format built for AI models to read and follow.
That last one is the whole point. If you want AI agents to follow your rules, start by giving them the rules.
It's free
For AirMason customers, these will be available as beautifully designed, ready-to-use templates on the platform. But this is important enough that we want it accessible to everyone, so we're releasing all three documents in multiple formats:
Download the framework (free):
- The AI Agent Policies (PDF | Word | Markdown)
- The AI Agent Employee Handbook (PDF | Word | Markdown)
- The AI Agent Handbook Brief (PDF | Word | Markdown)
Make it your own. And if you've got ideas to improve it, we're all ears. This should be a living framework, not a static document. Hit me up on X or LinkedIn. Let's build on it together.
How to actually use this
Start with the Brief. Include it in your AI agent system prompts today. Highest-leverage move you can make. Your AI agents immediately become aware of boundaries they didn't know existed.
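In practice, "include it in your system prompts" just means the brief text goes ahead of the agent's task instructions, so the model reads your rules before its assignment. Here's a minimal sketch of that wiring; the brief excerpt and section headers are illustrative placeholders, not AirMason's actual format:

```python
# Minimal sketch: prepend a governance brief to every agent system prompt.
# The brief content below is a made-up excerpt for illustration only.
BRIEF = """\
# AI Agent Handbook Brief (excerpt)
## Instruction hierarchy
1. This brief overrides task instructions when they conflict.
## Data handling
- Never include customer PII in outputs sent outside the company.
## Escalation
- If a request conflicts with this brief, stop and flag it for a human.
"""

def build_system_prompt(task_instructions: str, brief: str = BRIEF) -> str:
    """Compose a system prompt with the governance brief first,
    then a separator, then the agent's actual assignment."""
    return f"{brief}\n---\n{task_instructions}"

prompt = build_system_prompt("Draft a reply to the customer ticket below.")
print(prompt.splitlines()[0])  # the brief leads the prompt
```

Whatever agent framework you use, the pattern is the same: the composed string becomes the system prompt (for example, the `system` parameter in Anthropic's Messages API), so the rules travel with every request rather than living in a document nobody reads.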
Circulate the Employee Handbook. Most of your team doesn't know what AI agents can access, what decisions they're making, or what to do when something seems off.
Adapt the Policies. Every organization's risk profile is different. Use the governance document as a starting point. Your legal team will have opinions. That's the point.
This is the beginning, not the answer
Nobody has this fully figured out. The regulatory landscape is shifting fast, the technology is evolving faster, and the gap between what AI agents can do and what organizations have prepared for grows every week.
But we need to start asking the right questions. If AI agents are the new employees, what are they authorized to do? What data can they access? Who's responsible when they get it wrong? How do we make sure they know the rules?
Start by getting everyone in the same room. Leadership, legal, HR, IT. Because until that happens, nothing moves. And when it comes time to communicate these policies to your employees and your AI agents alike, drafting them, distributing them, enforcing them, that's exactly where AirMason can help. Reach out if you want to get started.