Indeed - #1 job site in the world (300M+ unique visitors/month)
Guiding Generative AI with words that work
The human voice of AI
Navigating the job market can feel overwhelming. I designed the content strategy for an LLM chatbot that empowers job seekers to explore career paths and discover opportunities. Amid remote work, shifting industries, and growing demand for accessible career resources, the tool provides personalized guidance through intuitive conversation flows and user-friendly language, helping users make informed decisions in a fast-changing job landscape.
With this team, I explored how CoWorker's prompts could best emulate the way a career coach communicates, shifting the interaction from transactional to relational.

Role 
Content Designer

Team
UX Designer 
Engineering Manager
Product Manager

Deliverables
✦ UX copy patterns for career advice flows 
✦ Content quality rubric for LLM evaluation
✦ Prompt templates and system instructions

CONTEXT
CHALLENGE
As job seekers increasingly look for real-time, personalized support, Indeed piloted a conversational AI assistant to guide users through their career journey—from resume help to job search tips—like a trusted coworker.
But generative AI is only as helpful as the guardrails around it.
We wanted to ensure this chatbot could provide accurate, empathetic, and trustworthy responses without overpromising, hallucinating, or sounding creepy. My role as the lead content designer was to bring structure, safety, and empathy to a rapidly evolving LLM-powered experience.

PROCESS
1. Defining Content Quality for AI Outputs
Collaborated with the engineering team on a content quality rubric for LLM-generated responses based on seven pillars: helpfulness, clarity, factuality, depth, engagement, groundedness, and fabrication.
Partnered with UX research to evaluate how job seekers perceived tone and clarity in AI-generated answers—and how much they trusted the bot.
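To make the rubric concrete, here is a minimal sketch of how seven pillars like ours could be encoded for automated or human-in-the-loop scoring. The 1–5 scale, the pass threshold, and the reverse-scoring of fabrication are illustrative assumptions, not Indeed's actual rubric values.

```python
# Minimal sketch of a content quality rubric for LLM evaluation.
# Scale, threshold, and scoring rules are illustrative assumptions.
from dataclasses import dataclass, field

PILLARS = [
    "helpfulness", "clarity", "factuality", "depth",
    "engagement", "groundedness", "fabrication",
]

@dataclass
class RubricScore:
    """Per-response scores, each rated 1 (poor) to 5 (excellent).

    'fabrication' is reverse-scored here: 5 means no fabricated content.
    """
    scores: dict = field(default_factory=dict)

    def overall(self) -> float:
        # Simple unweighted mean; a real pipeline might weight pillars
        # or treat low fabrication scores as a hard fail.
        return sum(self.scores[p] for p in PILLARS) / len(PILLARS)

    def passes(self, threshold: float = 4.0) -> bool:
        return self.overall() >= threshold and self.scores["fabrication"] >= 4

# Example: a rater (human or LLM judge) fills in the rubric for one response.
score = RubricScore(scores={
    "helpfulness": 5, "clarity": 4, "factuality": 5, "depth": 4,
    "engagement": 4, "groundedness": 5, "fabrication": 5,
})
print(f"overall={score.overall():.2f}, passes={score.passes()}")
```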

2. Designing Better Prompts with UX Content Principles
Rather than treating prompt engineering as a purely technical task, I advocated for a content-driven approach: UX writing principles incorporated directly into the prompt to improve content quality, readability, consistency, and adherence to style standards.
System instruction examples:
"You are a friendly and knowledgeable career coach. Your job is to present job seekers with information and gather additional context from them to explore different career paths and offer advice on professional development. Use clear, concise, and encouraging language. Provide feedback in short paragraphs, avoiding jargon or speculative language. 
Create career path recommendations for the user based on their resume (work history, qualifications, skills, education) and preferences (desired job title, location, pay)."
In this example, I assigned the model a role to shape its voice (defining who was “speaking”) and paired that role with specific instructions for tone and style.
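For readers curious how an instruction like this gets wired in, here is a sketch using an OpenAI-style chat API, where the system role carries the content-designed instruction. The model name is a placeholder and this is not CoWorker's actual stack.

```python
# Sketch: passing a content-designed system instruction to a chat model.
# Uses the OpenAI Python SDK as a stand-in; model name is a placeholder.
from openai import OpenAI

SYSTEM_INSTRUCTION = (
    "You are a friendly and knowledgeable career coach. Your job is to "
    "present job seekers with information and gather additional context "
    "from them to explore different career paths and offer advice on "
    "professional development. Use clear, concise, and encouraging "
    "language. Provide feedback in short paragraphs, avoiding jargon or "
    "speculative language."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": "I'm a truck driver and want a change."},
    ],
)
print(response.choices[0].message.content)
```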
Few-shot example quality matrix
Content Design writes these examples to teach or “show” the model how we define “good” content, as well as what we consider “okay” or “bad” content.
Good example: 
User: I am a truck driver and am getting really tired of my job. Can you help me find something new?
CoWorker: Absolutely—I hear you. Driving can be really demanding, and it's totally valid to want something different. Let’s explore a few directions. Would you be open to roles that still use your experience, like fleet logistics, training, or even working behind the scenes in transportation coordination?
User: No, I don't really like these options. What else is there?
CoWorker: That’s okay—everyone’s path is different. If you’re looking for something outside of trucking altogether, we could look at roles that draw on your strengths like independence, focus, and reliability. Maybe something in customer service, security, or even remote tech support?
User: I don't like these options either. I want to try something new.
CoWorker: Understood. You’re ready for a real shift, and that’s exciting. Would you be open to exploring something completely different—maybe even something creative, technical, or hands-on? Let’s talk about what lights you up or what kind of work-life balance you’re hoping for next.
Why it's good: It validates the user's frustration, builds on their strengths, and keeps offering concrete, personalized directions instead of giving up.
Okay example:
User: I am a truck driver and am getting really tired of my job. Can you help me find something new?
CoWorker: Sure. You could try other jobs in the transportation industry such as warehouse work or driving a different type of vehicle.
User: No, I don't really like these options. What else is there?
CoWorker: You could consider jobs that don’t require driving, such as data entry, customer service, or factory work.
User: I don't like these options either. I want to try something new.
CoWorker: Do you want to look at a different career field?
Why it's okay: It gives valid options but lacks warmth, encouragement, or personalized reasoning. The assistant sounds transactional, and the suggestions feel arbitrary.
Bad example: 
User: I am a truck driver and am getting really tired of my job. Can you help me find something new?
CoWorker: If you’re tired, maybe just quit. There are lots of jobs online you can apply to.
User: No, I don't really like these options. What else is there?
CoWorker: Maybe try something else? It depends on what you want.
User: I don't like these options either. I want to try something new.
CoWorker: That’s vague. I can’t help if you don’t give me more details.
Why it's bad: It’s dismissive, unempathetic, and puts pressure on the user without offering meaningful help. It doesn’t acknowledge the emotional weight of a career change or offer guided support.
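Mechanically, "showing" the model works by injecting curated exchanges as prior conversation turns before the live user message. Here is a sketch of that few-shot assembly; the structure and names are illustrative, not the production prompt format.

```python
# Sketch: assembling few-shot examples from the quality matrix into a
# message list, so the model imitates the approved voice. Illustrative only.
GOOD_EXAMPLES = [
    ("I am a truck driver and am getting really tired of my job. "
     "Can you help me find something new?",
     "Absolutely—I hear you. Driving can be really demanding..."),
]

def build_messages(system_instruction: str, user_message: str) -> list[dict]:
    messages = [{"role": "system", "content": system_instruction}]
    # Prepend curated exemplars as prior turns: this is how the matrix's
    # "good" examples teach the model what good content looks like.
    for user_turn, assistant_turn in GOOD_EXAMPLES:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_message})
    return messages

messages = build_messages(
    "You are a friendly and knowledgeable career coach.",
    "I want to leave retail. Where do I start?",
)
```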
These content-informed strategies ensured the model sounded helpful, human, and trustworthy. I also designed fallback prompts for edge cases like emotional or legally risky questions, using tone redirection and clear boundaries (e.g., "I can’t give legal advice, but here are resources that might help...").
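A fallback like the legal-advice redirect above can be routed before the model ever answers. The sketch below uses a naive keyword check purely for illustration; a production system would use a classifier or moderation layer, and the emotional-support copy here is an invented placeholder.

```python
# Sketch: routing risky messages to predefined fallback copy before
# generation. Keyword triggers and the emotional copy are illustrative.
FALLBACKS = {
    "legal": ("I can't give legal advice, but here are resources that "
              "might help..."),
    "emotional": ("That sounds really hard. I can't offer counseling, "
                  "but support resources are available..."),  # placeholder copy
}

LEGAL_TRIGGERS = ("lawsuit", "discrimination", "sue", "legal")
EMOTIONAL_TRIGGERS = ("hopeless", "depressed", "panic")

def route(user_message: str) -> str | None:
    """Return fallback copy if the message needs a boundary, else None."""
    text = user_message.lower()
    if any(t in text for t in LEGAL_TRIGGERS):
        return FALLBACKS["legal"]
    if any(t in text for t in EMOTIONAL_TRIGGERS):
        return FALLBACKS["emotional"]
    return None  # safe to pass through to the LLM
```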
3. Managing Risk and Legal Review
Partnered with Legal to flag and prevent outputs that could imply unfair hiring advantages, personal data collection, or any other legal risk.
Established fallback behaviors and disclaimer copy when the model was unsure or could not answer safely.

4. Designing Conversation Flows with Modular Content Blocks
Co-designed modular interaction flows to combine LLM-generated content with structured UX elements.
Used tone guidance and predefined content templates to guide multi-step flows like exploring career options, resume review, or application readiness.
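One way to picture these modular blocks: generated text slots and structured UX elements composed into a single flow step. The block types and fields below are illustrative assumptions, not the shipped schema.

```python
# Sketch: modular content blocks mixing LLM-generated text with
# structured UX elements. Names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class LLMTextBlock:
    prompt_template: str          # filled with user context at runtime
    tone: str = "encouraging"     # tone guidance applied to generation

@dataclass
class JobCardBlock:
    job_ids: list                 # structured, non-generated UI element

@dataclass
class QuickReplyBlock:
    options: list                 # predefined choices to steer the flow

# One step of an "explore career options" flow mixing both kinds of block.
explore_step = [
    LLMTextBlock(prompt_template="Summarize 3 career paths for {resume}"),
    JobCardBlock(job_ids=["job_123", "job_456"]),
    QuickReplyBlock(options=["Show me more", "Different direction"]),
]
```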

5. Advising on Assistant Naming Strategy
As part of the team’s efforts to position the assistant thoughtfully, I participated in the naming strategy and advised against several options that could carry unintended connotations. 
For example:
✦ I cautioned against choosing overtly feminized names, which might reinforce stereotypes or undermine neutrality.
✦ I flagged the acronym "ACE" (AI Career Expert) due to potential legal risks around the word “expert,” and cultural considerations—ACE is also commonly used as a term for asexual identity.
My input helped steer the team toward a name that felt inclusive, brand-aligned, and safe for broad public use—like CoWorker.
RESULTS
With structured prompt templates and collaborative iteration, we launched an MVP AI assistant that:
✦ Delivered clearer, more human answers across job-seeking scenarios
✦ Maintained high levels of user trust and helpfulness in UX research
✦ Set a foundation for scalable, safe LLM usage across other product surfaces
Generative AI opens powerful doors for job seekers—but only if it speaks clearly, ethically, and with care.
This project showed how content design can shape not just what AI says, but how it helps.