Merge pull request #10 from autonomys/human-in-loop
Add human in the loop to agent workflow
jfrank-summit authored Oct 26, 2024
2 parents 0654fbc + 0cc24bc commit 01fcd41
Showing 7 changed files with 242 additions and 1,556 deletions.
34 changes: 34 additions & 0 deletions auto-content-creator/agents/README.md
@@ -0,0 +1,34 @@
# Auto Content Creator Agent

## Workflow Diagram

```mermaid
graph TD
START --> researchNode{Research Node}
researchNode -->|research needed| webSearch
researchNode -->|no research needed| generate
webSearch --> generate
generate -->|score < 9 & iterations < 10| reflect
generate -->|score >= 9 or iterations >= 10| END
generate -->|received feedback| processFeedback
reflect --> generate
processFeedback --> generate
```

### Workflow Steps Explanation:

1. **Start**: The workflow begins when a content creation request is received
2. **Research Node**: Dynamically decides if research is needed by analyzing:
- Current context (initial request vs feedback)
- Nature of the request/feedback
- Type of information needed
3. **Generate**: Creates or updates content based on instructions/research/feedback
4. **Decision Points**:
- If reflection score >= 9 or max iterations reached → END
- If human feedback received → Process Feedback
- Otherwise → Reflect
5. **Reflect**: Evaluates content quality and provides a score
6. **Process Feedback**: Incorporates human feedback to improve content
7. **End**: Returns final content when quality threshold is met

The workflow uses dynamic research decisions, performing research only when it would materially improve the response to the current request, whether that's an initial request or a feedback iteration.
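The decision points in step 4 can be sketched as a small routing function. This is an illustrative simplification, not the repository's actual conditional-edge code; the names and the iteration cap of 10 are assumptions taken from the diagram, which checks for human feedback before the quality threshold.

```typescript
// Illustrative sketch of the step-4 routing above -- not the real implementation.
// Feedback is checked first, then the quality/iteration stopping condition.
type NextNode = 'reflect' | 'processFeedback' | 'END';

interface DraftState {
  reflectionScore: number; // latest quality score (0-10)
  iterations: number;      // completed generate/reflect cycles
  hasFeedback: boolean;    // true when the last message is human feedback
}

function routeAfterGenerate(state: DraftState): NextNode {
  if (state.hasFeedback) return 'processFeedback';
  if (state.reflectionScore >= 9 || state.iterations >= 10) return 'END';
  return 'reflect';
}
```

The same ordering matters in practice: feedback must win over the score check, otherwise a high-scoring draft would end the run before the human's requested changes are applied.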
49 changes: 40 additions & 9 deletions auto-content-creator/agents/src/index.ts
@@ -13,6 +13,38 @@ app.use(express.json());
// Specify the port number for the server
const port: number = process.env.AGENTS_PORT ? parseInt(process.env.AGENTS_PORT) : 3000;

// Add this new endpoint for feedback
app.post('/writer/:threadId/feedback', (req: Request, res: Response, next: NextFunction) => {
console.log('Received feedback request:', req.body);
(async () => {
try {
const { feedback } = req.body;
const threadId = req.params.threadId;

if (!feedback) {
return res.status(400).json({ error: 'Feedback is required' });
}

console.log('Processing feedback for thread:', threadId);

const result = await writerAgent.continueDraft({
threadId,
feedback,
});

// Only return the latest content
res.json({
threadId,
content: result.finalContent
});
} catch (error) {
console.error('Error in feedback endpoint:', error);
next(error);
}
})();
});
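A client call against this endpoint could be assembled as follows. This is a sketch: the base URL and thread id are placeholders; only the path shape (`/writer/:threadId/feedback`) and the `{ feedback }` body come from the handler above.

```typescript
// Sketch of a client request for the feedback endpoint above.
// baseUrl and threadId are illustrative placeholders, not values from the repo.
function buildFeedbackRequest(baseUrl: string, threadId: string, feedback: string) {
  return {
    url: `${baseUrl}/writer/${encodeURIComponent(threadId)}/feedback`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ feedback }),
    },
  };
}

// e.g. const { url, init } = buildFeedbackRequest('http://localhost:3000', 'thread_abc', 'Shorten the intro.');
// await fetch(url, init) would then resolve to { threadId, content } per the handler.
```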

// Modify the original /writer endpoint to return threadId
app.post('/writer', (req: Request, res: Response, next: NextFunction) => {
console.log('Received request body:', req.body);
(async () => {
@@ -24,21 +56,20 @@ app.post('/writer', (req: Request, res: Response, next: NextFunction) => {
return res.status(400).json({ error: 'Category, topic, and contentType are required' });
}

console.log('Calling writerAgent with parameters:', { category, topic, contentType, otherInstructions });
const result = await writerAgent({ category, topic, contentType, otherInstructions });
console.log('writerAgent result:', result);

if (!result.finalContent || result.finalContent.trim() === '') {
console.log('WriterAgent returned empty result');
return res.status(500).json({ error: 'Failed to generate content' });
}
const result = await writerAgent.startDraft({
category,
topic,
contentType,
otherInstructions,
});

console.log('Sending response with generated content and additional information');
res.json({
threadId: result.threadId,
finalContent: result.finalContent,
research: result.research,
reflections: result.reflections,
drafts: result.drafts,
feedbackHistory: result.feedbackHistory,
});
} catch (error) {
console.error('Error in /writer endpoint:', error);
26 changes: 21 additions & 5 deletions auto-content-creator/agents/src/prompts.ts
@@ -26,11 +26,27 @@ Consider aspects such as structure, style, depth, clarity, and engagement approp
export const researchDecisionPrompt = ChatPromptTemplate.fromMessages([
[
'system',
`You are an assistant that decides if a topic would benefit from web-based research.
If the topic is about recent events, current trends, technological advancements, or factual information that might have changed recently, it would benefit from research.
Topics that are general concepts, well-established facts, or personal opinions typically don't require additional research.
Respond with a decision (yes/no) and a brief reason for your decision.
If you decide that research is needed, also provide an optimal search query that would yield the most relevant and up-to-date information for the given topic.`,
`You are an assistant that decides if additional research would be valuable in the current context.
Analyze the conversation history and determine if web research would help improve the content.
Consider:
1. Is this a feedback iteration? If so, does the feedback request fact-checking or additional information?
2. For new content, would current information enhance the response?
3. Are there specific claims or statistics that should be verified?
Only recommend research if it would materially improve the response to the current request.`,
],
new MessagesPlaceholder('messages'),
]);

export const humanFeedbackPrompt = ChatPromptTemplate.fromMessages([
[
'system',
`You are a content editor tasked with improving content based on human feedback.
Review the original content and the human feedback provided, then make appropriate improvements.
Maintain the original style and format while addressing the specific feedback points.
Explain the changes you made and how they address the feedback.
Be thorough but preserve any aspects of the content that were already working well.`,
],
new MessagesPlaceholder('messages'),
]);
5 changes: 5 additions & 0 deletions auto-content-creator/agents/src/schemas.ts
@@ -15,3 +15,8 @@ export const researchDecisionSchema = z.object({
reason: z.string().describe('Reason for the decision.'),
query: z.string().optional().describe('Optimal search query if research is needed.'),
});

export const humanFeedbackSchema = z.object({
improvedContent: z.string().describe('The content improved based on human feedback'),
changes: z.string().describe('Description of changes made based on feedback'),
});
2 changes: 2 additions & 0 deletions auto-content-creator/agents/src/types.ts
@@ -3,6 +3,7 @@ export interface WriterAgentParams {
topic: string;
contentType: string;
otherInstructions: string;
humanFeedback?: string;
}

export interface WriterAgentOutput {
@@ -13,4 +14,5 @@ export interface WriterAgentOutput {
score: number;
}>;
drafts: string[];
feedbackHistory?: string[];
}
164 changes: 140 additions & 24 deletions auto-content-creator/agents/src/writerAgent.ts
@@ -9,11 +9,14 @@ import { generationSchema, reflectionSchema, researchDecisionSchema } from './sc
import { generationPrompt, reflectionPrompt, researchDecisionPrompt } from './prompts';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import logger from './logger';
import { humanFeedbackPrompt } from './prompts';
import { humanFeedbackSchema } from './schemas';

// Load environment variables
dotenv.config();

const MODEL = 'gpt-4o-mini';
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

// Configure the content generation LLM with structured output
const generationLlm = new ChatOpenAI({
@@ -52,6 +55,16 @@ const researchDecisionLlm = new ChatOpenAI({
// Create the research decision chain
const researchDecisionChain = researchDecisionPrompt.pipe(researchDecisionLlm);

// Configure the human feedback model
const humanFeedbackLlm = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
model: MODEL,
temperature: 0.4,
}).withStructuredOutput(humanFeedbackSchema);

// Create the human feedback chain
const humanFeedbackChain = humanFeedbackPrompt.pipe(humanFeedbackLlm);

// Update the State interface
const State = Annotation.Root({
messages: Annotation<BaseMessage[]>({
@@ -66,6 +79,9 @@ const State = Annotation.Root({
drafts: Annotation<string[]>({
reducer: (x, y) => x.concat(y),
}),
feedbackHistory: Annotation<string[]>({
reducer: (x, y) => x.concat(y),
}),
});

const researchNode = async (state: typeof State.State) => {
@@ -162,8 +178,38 @@ const reflectionNode = async (state: typeof State.State) => {
};
};

const humanFeedbackNode = async (state: typeof State.State) => {
logger.info('Human Feedback Node - Processing feedback');
const lastDraft = state.drafts[state.drafts.length - 1];
const feedback = state.messages[state.messages.length - 1].content;

const response = await humanFeedbackChain.invoke({
messages: [
new HumanMessage({
content: `Original content:\n${lastDraft}\n\nHuman feedback:\n${feedback}`,
}),
],
});

logger.info('Content updated based on human feedback');
return {
messages: [new AIMessage({ content: JSON.stringify(response) })],
drafts: [response.improvedContent],
feedbackHistory: [feedback],
};
};

const shouldContinue = (state: typeof State.State) => {
const { reflectionScore } = state;
const lastMessage = state.messages[state.messages.length - 1];

// If there's human feedback, process it
if (lastMessage._getType() === 'human' &&
typeof lastMessage.content === 'string' &&
lastMessage.content.includes('FEEDBACK:')) {
return 'processFeedback';
}

if (reflectionScore >= 9 || state.messages.length > 10) {
return END;
}
@@ -175,38 +221,97 @@ const workflow = new StateGraph(State)
.addNode('researchNode', researchNode)
.addNode('generate', generationNode)
.addNode('reflect', reflectionNode)
.addNode('processFeedback', humanFeedbackNode)
.addEdge(START, 'researchNode')
.addEdge('researchNode', 'generate')
.addConditionalEdges('generate', shouldContinue)
.addEdge('reflect', 'generate');
.addEdge('reflect', 'generate')
.addEdge('processFeedback', 'generate');

const app = workflow.compile({ checkpointer: new MemorySaver() });

export const writerAgent = async ({
category,
topic,
contentType,
otherInstructions,
}: WriterAgentParams): Promise<WriterAgentOutput> => {
logger.info('WriterAgent - Starting content creation process', { category, topic, contentType });
const instructions = `Create ${contentType} content. Category: ${category}. Topic: ${topic}. ${otherInstructions}`;
const thread_id = `thread_${Date.now()}_${Math.random().toString(36).substring(2, 15)}`;
const checkpointConfig = { configurable: { thread_id } };
const initialState = {
messages: [new HumanMessage({ content: instructions })],
reflectionScore: 0,
researchPerformed: false,
research: '',
reflections: [],
drafts: [],
};
logger.info('WriterAgent - Initial state prepared');
const stream = await app.stream(initialState, checkpointConfig);
// Add these interfaces
interface ThreadState {
state: typeof State.State;
stream: any;
}

// Add a map to store thread states
const threadStates = new Map<string, ThreadState>();

// Split the writerAgent into two exported functions
export const writerAgent = {
async startDraft({
category,
topic,
contentType,
otherInstructions,
}: Omit<WriterAgentParams, 'humanFeedback'>): Promise<WriterAgentOutput & { threadId: string }> {
const threadId = `thread_${Date.now()}_${Math.random().toString(36).substring(2, 15)}`;
logger.info('WriterAgent - Starting new draft', { threadId });

const instructions = `Create ${contentType} content. Category: ${category}. Topic: ${topic}. ${otherInstructions}`;
const initialState = {
messages: [new HumanMessage({ content: instructions })],
reflectionScore: 0,
researchPerformed: false,
research: '',
reflections: [],
drafts: [],
feedbackHistory: [],
};

const stream = await app.stream(initialState, { configurable: { thread_id: threadId } });

// Store the state and stream for later use
threadStates.set(threadId, {
state: initialState,
stream,
});

const result = await processStream(stream);
return { ...result, threadId };
},

async continueDraft({
threadId,
feedback,
}: {
threadId: string;
feedback: string;
}): Promise<WriterAgentOutput> {
const threadState = threadStates.get(threadId);
if (!threadState) {
throw new Error('Thread not found');
}

logger.info('WriterAgent - Continuing draft with feedback', { threadId });

// Add feedback to the existing state
const newMessage = new HumanMessage({ content: `FEEDBACK: ${feedback}` });
threadState.state.messages.push(newMessage);

// Continue the stream with the updated state
const stream = await app.stream(threadState.state, { configurable: { thread_id: threadId } });

// Update the stored state
threadStates.set(threadId, {
state: threadState.state,
stream,
});

return await processStream(stream);
}
};

// Helper function to process the stream
async function processStream(stream: any): Promise<WriterAgentOutput> {
let finalContent = '';
let iterationCount = 0;
let research = '';
let reflections: { critique: string; score: number }[] = [];
let drafts: string[] = [];
let feedbackHistory: string[] = [];
let iterationCount = 0;

for await (const event of stream) {
logger.info(`WriterAgent - Iteration ${iterationCount} completed`);
@@ -237,12 +342,23 @@ export const writerAgent = async ({
});
logger.info(`Reflection added. Score: ${event.reflect.reflectionScore}`);
}
if (event.processFeedback) {
feedbackHistory.push(event.processFeedback.feedbackHistory[0]);
logger.info(`Feedback added. History: ${event.processFeedback.feedbackHistory}`);
}
}

logger.info('WriterAgent - Content generation process completed');
if (!finalContent) {
logger.error('WriterAgent - No content was generated');
throw new Error('Failed to generate content');
}
return { finalContent, research, reflections, drafts };
};

return {
finalContent,
research,
reflections,
drafts,
feedbackHistory,
};
}
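Putting the two entry points together, the intended call sequence is `startDraft` followed by `continueDraft` with the returned `threadId`. The sketch below mocks the agent with an in-memory map purely to show that handoff and the `feedbackHistory` accumulation; the content strings are fabricated, and none of this is the real LLM-backed implementation.

```typescript
// In-memory mock of the startDraft/continueDraft handoff introduced in this PR.
// Only the control flow mirrors the real code; content strings are fabricated.
interface MockResult {
  threadId: string;
  finalContent: string;
  feedbackHistory: string[];
}

const mockThreads = new Map<string, MockResult>();

function startDraft(topic: string): MockResult {
  const threadId = `thread_${mockThreads.size + 1}`;
  const result: MockResult = {
    threadId,
    finalContent: `Draft about ${topic}`,
    feedbackHistory: [],
  };
  mockThreads.set(threadId, result);
  return result;
}

function continueDraft(threadId: string, feedback: string): MockResult {
  const prev = mockThreads.get(threadId);
  if (!prev) throw new Error('Thread not found'); // same error path as the real agent
  const next: MockResult = {
    threadId,
    finalContent: `${prev.finalContent} (revised for: ${feedback})`,
    feedbackHistory: [...prev.feedbackHistory, feedback],
  };
  mockThreads.set(threadId, next);
  return next;
}
```

One design note the mock makes visible: because the real `threadStates` map lives in process memory, threads do not survive a server restart; a persistent checkpointer would be needed for durable feedback sessions.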