
Guide Labs: Interpretable foundation models that are easy to align

At Guide Labs, we build interpretable foundation models that can reliably explain their reasoning and are easy to align. We provide access to these models via an API. Over the past six years, our team has built and deployed interpretable models at Meta and Google. Our models provide three kinds of explanation: 1) human-understandable explanations for each output token, 2) which parts of the input (prompt) are most important for each part of the generated output, and 3) which training-data inputs directly led to the model's generated output. Because our models can explain their outputs, they are easier to debug, steer, and align.

2024-03-07 · Active · Early · W24 · Team size: 2 · B2B · United States of America · America / Canada · Remote · Partly Remote
More About Guide Labs

Guide Labs: Interpretable Foundation Models

Revolutionizing AI with Transparent and Controllable Models

Key Features

  • Interpretable Model API: Explain and steer model outputs using human-understandable features (a hypothetical usage sketch follows this list).
  • Prompt Attribution: Identify important parts of the prompt influencing the output.
  • Data Attribution: Determine which pre-training and fine-tuning data most influence the generated output.
  • Concept Explanation: Customize and explain models using high-level human-understandable features.
  • Multi-Modal Training: Train and fine-tune models on any data modality.
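
Guide Labs has not published its API schema, so the following is a purely hypothetical sketch of what requesting a generation with prompt, data, and concept explanations might look like. The endpoint URL, request fields, and response keys are all illustrative assumptions, not the real interface.

```python
# HYPOTHETICAL SKETCH ONLY. Guide Labs' API is waitlist-gated; the endpoint,
# field names, and response keys below are illustrative assumptions.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential

resp = requests.post(
    "https://api.guidelabs.example/v1/generate",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Summarize the termination clause in this contract.",
        # Request the three explanation types described above (hypothetical flags).
        "explain": ["prompt_attribution", "data_attribution", "concepts"],
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

print(result["output"])              # generated text
print(result["prompt_attribution"])  # hypothetical: importance of each prompt token
print(result["data_attribution"])    # hypothetical: influential training examples
print(result["concepts"])            # hypothetical: high-level concepts per output
```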

Use Cases

  • Debugging and Alignment: Use reliable explanations to debug and align models effectively (see the generic attribution sketch after this list).
  • Custom Model Training: Fine-tune models with your own data to insert high-level concepts and control outputs.
  • Research and Development: Leverage interpretable models for novel research in machine learning.
  • Enterprise Solutions: Implement interpretable models in enterprise applications for better transparency and control.
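
The debugging workflow above rests on input attribution. Guide Labs' own models are not publicly available, but the general mechanics can be illustrated with the open-source Captum library (which team member Fulton Wang helped develop; see Team below). The toy classifier here is a stand-in, not a Guide Labs model.

```python
# Generic input-attribution sketch using the open-source Captum library,
# not Guide Labs' models or API. The tiny classifier is a stand-in.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy two-class classifier over 4 input features.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

inputs = torch.rand(1, 4)  # one example with 4 features
ig = IntegratedGradients(model)

# Per-feature attribution scores for class 1: which inputs pushed the
# model toward that prediction.
attributions = ig.attribute(inputs, target=1)
print(attributions)  # shape (1, 4): per-feature importance
```

The same idea, scaled up to token-level attribution over prompts and training data, is what the features above describe.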

Pricing

Guide Labs offers flexible pricing plans tailored to different needs. Contact us for detailed pricing information and custom plans.

Team

Guide Labs is led by experts in machine learning interpretability:

  • Julius Adebayo: PhD in ML interpretability from MIT, with experience at Google Brain, Meta, and Prescient Design. Published over 10 papers on ML interpretability.
  • Fulton Wang: PhD in ML interpretability from MIT, with experience at Meta and Apple. Key developer on the Captum package.

Join our waitlist for exclusive early access to our interpretable foundation models.