I Looked for Remote AI-Agent Jobs That Were Actually About Agents. These Five Made the Cut.
On May 6, 2026, I screened live employer job pages for remote or remote-friendly roles that are explicitly tied to AI agents. I did not keep generic AI jobs that only mentioned LLMs in passing. To make the cut, a listing had to show a real connection to agent systems in the posting itself: reasoning and retrieval, tool use, workflow orchestration, deployment, observability, guardrails, or prompt evaluation.
This gave me a cleaner shortlist than a broad keyword scrape. Instead of five lookalike titles, the set below covers the agent stack from architecture and infra to prompt quality and enterprise rollout.
Screening rules
- Official ATS page only.
- Remote or clearly online work format.
- Live application flow visible on the checked page.
- Strong agent signal in the responsibilities, not just in company marketing copy.
- Different role shapes preferred so the list would be useful to more than one kind of applicant.
The five openings
| Role | Company | Location | Why it made the list | Direct link |
|---|---|---|---|---|
| AI Agent Architect, Customer Experience | Airtable | Remote - US | One of the clearest "agent brain" roles I found: retrieval, decision logic, safety, action routing, and observability all sit inside the scope. | https://job-boards.greenhouse.io/airtable/jobs/8409168002 |
| Senior Forward Deployed Engineer (AI Agent) | Cresta | United States (Remote) | Strong deployment-side role for shipping production agents into real customer environments with APIs, prompts, and feedback loops. | https://job-boards.greenhouse.io/cresta/jobs/4759347008 |
| Senior Platform Engineer — AI Agent Infrastructure | Yuno | LatAm / Europe Remote | Infrastructure-heavy role centered on provisioning and operating AI agents at scale with messaging, AWS, and reliability work. | https://jobs.lever.co/yuno/33309adb-efb0-414c-9e9a-da13435a0242 |
| Prompt Engineer | Netomi | Toronto, Canada / Remote | Prompt work here is tied to an agentic CX platform and includes benchmarking, tool descriptions, and customer-specific optimization. | https://jobs.lever.co/netomi/7fbf062a-4853-4336-a639-f2a607640d38/apply |
| AI Automation Specialist | Mitratech | Remote Mexico | Enterprise operations role focused on building, integrating, monitoring, and maintaining AI agents across internal workflows. | https://job-boards.greenhouse.io/mitratech/jobs/7674771 |
1. Airtable — AI Agent Architect, Customer Experience
Location: Remote - US
Apply: https://job-boards.greenhouse.io/airtable/jobs/8409168002
This was the most architecture-forward listing in the batch. Airtable is hiring someone to design the customer-support agent system at the level where the hard production questions live: what knowledge the agent can retrieve, when it is allowed to act, how it avoids unsafe behavior, and how its performance is measured over time.
What stood out:
- The posting explicitly centers agent reasoning, retrieval, decision logic, and action-taking.
- It treats safety and trust as first-class design work, not as an afterthought.
- It asks for ownership over observability, testing, and improvement loops, which is exactly what separates a real agent role from a generic AI strategy title.
Why it is relevant to AI agents:
This is a direct production-agent role. The work involves building the knowledge systems, guardrails, prompt architecture, and integration patterns that let an AI agent resolve customer issues reliably without constant human hand-holding. If someone wants a job where the phrase "agent architecture" means something concrete, this is one of the strongest examples.
2. Cresta — Senior Forward Deployed Engineer (AI Agent)
Location: United States (Remote)
Apply: https://job-boards.greenhouse.io/cresta/jobs/4759347008
Cresta’s listing is strong because it sits at the intersection of engineering and deployment. The role is not just "build an internal demo." It is about putting AI agents into customer environments, wiring them into external systems, tuning them, and feeding lessons from the field back into product direction.
What stood out:
- The posting calls for developing, configuring, deploying, and optimizing AI agents.
- It explicitly mentions integration with APIs, databases, and CRMs.
- It includes prompt tuning, customer requirements gathering, demos, iteration, and roadmap feedback.
Why it is relevant to AI agents:
This is a practical "agents in the wild" role. It fits the part of the market where companies no longer need slides about agent potential; they need engineers who can get agent workflows working inside messy real business systems. That makes it highly relevant for anyone interested in applied, customer-facing agent delivery rather than pure research.
3. Yuno — Senior Platform Engineer — AI Agent Infrastructure
Location: Remote across parts of LatAm and Europe
Apply: https://jobs.lever.co/yuno/33309adb-efb0-414c-9e9a-da13435a0242
Yuno’s posting is the infrastructure counterpart to the more application-facing roles above. The page says the company is building a platform that provisions, deploys, and manages AI agents at scale on AWS. That is unusually specific, and it immediately tells you this is not just a backend role with an "AI" sticker on it.
What stood out:
- The agent platform is already in production, which matters: the work is operating a live system, not prototyping one.
- The responsibilities lean into event-driven communication, async messaging, infrastructure automation, observability, and scaling reliability.
- The preferred qualifications include AI and MLOps infrastructure, agent evaluation tooling, and experience around agent framework ecosystems.
Why it is relevant to AI agents:
Agents do not become useful at company scale unless the infra works. This role targets the machinery behind that outcome: queues, deployments, tracing, cloud architecture, and failure handling. For candidates who think in terms of platform reliability, this is one of the cleanest agent-infrastructure openings on the board.
4. Netomi — Prompt Engineer
Location: Toronto, Canada / Remote
Apply: https://jobs.lever.co/netomi/7fbf062a-4853-4336-a639-f2a607640d38/apply
Prompt-engineering jobs can be weak if they are just asking for better phrasing. This one is stronger because Netomi positions itself as an agentic AI platform for enterprise customer experience, and the role scope goes beyond prompt writing into evaluation, testing, and system behavior.
What stood out:
- The posting includes prompt design, refinement, benchmarking, and evaluation.
- It specifically mentions defining tool descriptions for agentic frameworks.
- It pairs prompt work with automation, experimentation, and collaboration with customer-facing teams.
Why it is relevant to AI agents:
Prompt quality still matters in agent systems, especially when the agent has to follow business rules, choose tools correctly, and behave consistently for enterprise customers. This role is a good fit for someone whose entry point into agent work is LLM behavior design and evaluation rather than full-stack engineering.
5. Mitratech — AI Automation Specialist
Location: Remote Mexico
Apply: https://job-boards.greenhouse.io/mitratech/jobs/7674771
Mitratech’s role is more operations-facing than the others, but it still has strong agent substance. The posting describes ownership over the design, deployment, maintenance, integration, and monitoring of AI agent solutions across the enterprise, including prompt engineering and system integration work.
What stood out:
- The role spans the full lifecycle from requirements gathering to deployment, performance monitoring, and user training.
- It explicitly mentions conversation flows, workflow automation, scripting, APIs, SQL, and agent performance analysis.
- It also references guardrails, evaluations, documentation, and privacy/compliance expectations.
Why it is relevant to AI agents:
This is the clearest example in the set of an enterprise rollout role for operational agents. It is useful evidence that the hiring market is not only looking for frontier builders; it is also hiring people who can make AI agents dependable inside internal business processes.
Why this set is stronger than a random AI-jobs list
These five roles cover five different pressure points in the agent market:
- Agent architecture: Airtable
- Customer deployment and integration: Cresta
- Platform reliability and scale: Yuno
- Prompt behavior and evaluation: Netomi
- Enterprise workflow automation: Mitratech
That spread matters. A weaker list could easily dump five nearly identical "AI engineer" roles with no explanation. A stronger one shows where companies are actually investing across the lifecycle of an agent product.
What this mini market scan says about AI-agent hiring right now
Three patterns showed up clearly.
1. The market is hiring for the hard parts of agents, not the buzzword.
The strongest listings talk about retrieval accuracy, action safety, system integration, observability, async architecture, and evaluation. That is production language.
2. Forward-deployed and customer-experience roles are growing alongside the platform roles.
Companies do not just need researchers. They need people who can connect agents to CRMs, support systems, billing flows, and live customer operations.
3. Prompt engineering survives where it is paired with measurement.
The better prompt roles are not "write clever prompts." They involve testing, benchmarking, framework awareness, and business-rule reliability.
Final note
All five links above were checked on May 6, 2026 and point to live employer-hosted application pages. If I had to recommend where an applicant should start based on role style alone: Airtable for agent architecture, Cresta for deployment-heavy execution, Yuno for infra depth, Netomi for prompt-and-eval specialization, and Mitratech for enterprise automation ownership.