How to use the output of AxMiPRO Optimizer beyond bootstrapped demos? #268

@enitrat

Description

I'm using AxMiPRO to optimize an AxChainOfThought program for generating code responses. After running the optimization process, I get a result object that includes demos (bootstrapped examples) and finalConfiguration (which seems to contain optimized instructions or prompt refinements). I understand how to load and apply the generated demos to improve the program's performance in future sessions, but I'm unclear on how to properly incorporate the rest of the optimizer's output - particularly any "special instructions" or refined configurations - when instantiating or updating the program.

From what I've observed:

  • The demos can be set via program.setDemos(result.demos) to enhance few-shot learning.
  • However, the finalConfiguration (saved as JSON in my setup) appears to include potentially useful elements like refined instructions, but the library documentation doesn't clearly explain how to apply them beyond the demos.
  • Is there a recommended way to load and apply these optimized instructions (e.g., updating the program's description, signature, or internal prompt)? Or are they only meant for internal use during optimization?

In my case, this is the content of the saved finalConfiguration:

{
  "instruction": "Analyze the input systematically and provide a precise, well-reasoned response. Be very specific and detailed in your instructions.",
  "bootstrappedDemos": 1,
  "labeledExamples": 1,
  "numCandidates": 5,
  "numTrials": 8,
  "sampleCount": 1
}

This is confusing because if the optimizer refines the program's instructions, I'd expect a way to persist and reuse them across sessions for consistent improvements. Without this, it feels like we're only getting partial value from the optimization. I've only recently started looking into DSPy / Ax, so I'm not sure whether this makes sense.

Minimal Reproducible Example (MRE)

Here's a simplified setup to reproduce the scenario. Assume you have @ax-llm/ax installed and an API key for a supported AI model (e.g., Google Gemini).

1. Dataset and Program Definition (program.ts)

import { AxChainOfThought, f, s } from '@ax-llm/ax';
import fs from 'fs';
import path from 'path';

// Sample dataset (array of examples for optimization/evaluation),
// exported so optimize.ts can import it
export const dataset = [
  {
    query: 'Generate a simple function',
    context: 'Use basic syntax',
    answer: 'fn example() {}'
  },
  // Add more examples as needed...
];

// Program signature and definition
const signature = s`
query:${f.string('User query')},
context:${f.string('Relevant context')} ->
answer:${f.string('Generated response')}
`;

export const myProgram = new AxChainOfThought<
  { query: string; context: string },
  { answer: string }
>(signature, {
  description: 'Generate code based on query and context.'
});

// Optionally load optimized demos if they exist
const applyOptimizedOutput = () => {
  try {
    const demos = JSON.parse(
      fs.readFileSync(path.join(__dirname, 'optimized-demos.json'), 'utf8')
    );
    myProgram.setDemos(demos); // This works for demos
  } catch (error) {
    // File not found, skip
  }

  // Question: How to load/apply optimized instructions here?
  try {
    const config = JSON.parse(
      fs.readFileSync(path.join(__dirname, 'optimized-config.json'), 'utf8')
    );
    // TODO: What to do with config? e.g., myProgram.setInstructions(config.instructions)?
  } catch (error) {
    // File not found, skip
  }
};

applyOptimizedOutput();
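
For what it's worth, the only workaround I've found so far is to feed the saved instruction back in through the same description option I already pass to the constructor. I have no idea whether that's equivalent to what MiPRO applies internally during optimization, so treat the following (still in program.ts, reusing the imports and signature above) as a guess rather than a real answer:

// Guessed workaround, not a documented approach: read the saved instruction
// and reuse it as the program's description.
const loadOptimizedInstruction = (): string | undefined => {
  try {
    const config = JSON.parse(
      fs.readFileSync(path.join(__dirname, 'optimized-config.json'), 'utf8')
    );
    return config.instruction;
  } catch {
    return undefined; // No saved config yet
  }
};

// Re-create the program with the optimized instruction as its description,
// falling back to the original description when nothing has been saved.
export const myTunedProgram = new AxChainOfThought<
  { query: string; context: string },
  { answer: string }
>(signature, {
  description:
    loadOptimizedInstruction() ?? 'Generate code based on query and context.'
});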

2. Optimization Script (optimize.ts)

import { AxAI, AxMiPRO, type AxMetricFn } from '@ax-llm/ax';
import { dataset, myProgram } from './program';
import fs from 'fs';
import path from 'path';

// Setup AI (student model for optimization)
const ai = new AxAI({
  name: 'google-gemini',
  apiKey: 'YOUR_API_KEY_HERE',
  // Model config...
});

// Metric function (simplified for MRE)
const metricFn: AxMetricFn = async ({ prediction, example }) => {
  // Return a score between 0 and 1 based on prediction vs. example.answer
  return prediction.answer === example.answer ? 1 : 0;
};

// Optimizer instantiation
const optimizer = new AxMiPRO({
  studentAI: ai,
  examples: dataset, // Imported from program.ts
  targetScore: 0.9,
  verbose: true,
});

// Run optimization
const main = async () => {
  const result = await optimizer.compile(myProgram, metricFn);

  // Save outputs
  fs.writeFileSync(
    path.join(__dirname, 'optimized-demos.json'),
    JSON.stringify(result.demos, null, 2)
  );
  fs.writeFileSync(
    path.join(__dirname, 'optimized-config.json'),
    JSON.stringify(result.finalConfiguration, null, 2)
  );

  console.log('Optimization complete. Demos and config saved.');
};

main();

Steps to Reproduce

  1. Run optimize.ts to perform optimization and generate optimized-demos.json and optimized-config.json.
  2. In a new session, import and use myProgram from program.ts (roughly as in the snippet after this list).
  3. The demos load fine via setDemos(), but how do I apply the contents of optimized-config.json (e.g., any refined instructions) to myProgram for better performance?
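
For reference, this is roughly how step 2 looks in my setup (I'm assuming forward(ai, input) is the right way to execute the program; the model setup mirrors optimize.ts):

// run.ts - new session, no optimizer involved
import { AxAI } from '@ax-llm/ax';
import { myProgram } from './program'; // applyOptimizedOutput() already ran on import

const ai = new AxAI({
  name: 'google-gemini',
  apiKey: 'YOUR_API_KEY_HERE',
});

const run = async () => {
  // The loaded demos are picked up here, but nothing from
  // optimized-config.json is applied.
  const res = await myProgram.forward(ai, {
    query: 'Generate a simple function',
    context: 'Use basic syntax',
  });
  console.log(res.answer);
};

run();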

Questions

  • After optimization, I'd like to fully reuse the results in production without re-running the optimizer.
  • Besides setDemos(), is there a method like setOptimizedInstructions() or a way to update the program's description/signature based on finalConfiguration? (A concrete sketch of what I mean follows this list.)
  • If not, what is the purpose of finalConfiguration, and how should it be used?
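
To make the second question concrete, this is the shape of API I was hoping for; setOptimizedInstructions() is purely hypothetical, I just want to illustrate the intent:

import fs from 'fs';
import path from 'path';
import { myProgram } from './program';

const config = JSON.parse(
  fs.readFileSync(path.join(__dirname, 'optimized-config.json'), 'utf8')
);

// setOptimizedInstructions() does not exist as far as I can tell - the cast
// and optional call are only here so the sketch type-checks. The point is:
// I'd like a first-class way to persist the refined instruction across
// sessions, the same way setDemos() persists the bootstrapped demos.
(myProgram as any).setOptimizedInstructions?.(config.instruction);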

Any guidance, examples, or updates to the docs would be appreciated! Thanks.
