Autolume workshop day 1

Introduction to Generative AI and Autolume 💡

This workshop, presented by the Metacreation Lab for Creative AI at Simon Fraser University, provides an introduction to generative AI and generative adversarial networks (GANs), the neural networks used in Autolume [02:10].

Key topics covered:

  • Introduction to Generative AI & GANs: The workshop provides an overview of generative AI and details how GANs are used in their “Autolume” tool [02:33].
  • Autolume Demos & Training: Participants learn about real-time generation with Autolume and the process of training custom models using their own data [02:39]. The workshop covers what’s needed for training and offers advanced tips and tricks [02:47].
  • Small Data & Model Crafting: Philippe Pasquier, a professor at Simon Fraser University, introduces the concepts of “small data” and “model crafting” as a vision for ethical AI that is accessible to non-coders [04:08].
  • Ethical AI Considerations ⚖️: The discussion touches on the ethical implications of AI, particularly concerning data usage and copyright in creative domains. The Metacreation Lab emphasizes participatory design, involving stakeholders in tool development [06:48]. They mention ongoing lawsuits against AI companies for alleged data theft [20:37] and discuss solutions such as using clean, proprietary, or public domain datasets, as well as models that offer attribution and remuneration [22:40].
  • Autolume’s Design Principles 💻: Autolume is presented as a free, open-source tool that runs locally, requires no coding, and features built-in latent space navigation [31:25]. This lets artists and creators control the generation process directly and in real time, even on limited hardware (though a GPU is recommended for optimal performance) [30:05]; a minimal latent-navigation sketch follows this list.
  • Video Training 📹: The workshop also covers using videos as a dataset for training models, discussing recommended video codecs and troubleshooting common issues with file formats [03:05:28]. Users can potentially find salient dimensions of change in the latent space, such as time, enabling temporally coherent video generation or control of specific features like facial movements [03:03:08]; see the dataset and latent-direction sketches after this list.
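To make the latent-space navigation idea concrete, here is a minimal sketch of interpolating between two latent vectors and rendering each intermediate point. The `Generator` class below is a stand-in so the snippet runs on its own; in Autolume the generator is a trained StyleGAN-family network, and the latent size, image size, and step count are illustrative assumptions.

```python
# Minimal latent-space navigation sketch: walk a straight line between two
# latent vectors and render each intermediate point with a generator.
import torch
import torch.nn as nn

Z_DIM = 512  # typical StyleGAN latent size (assumption)

class Generator(nn.Module):
    """Placeholder generator mapping a latent vector to a small RGB image.
    In Autolume this would be a trained StyleGAN-family network."""
    def __init__(self, z_dim=Z_DIM, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(nn.Linear(z_dim, 3 * img_size * img_size), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, self.img_size, self.img_size)

def interpolate(G, z_start, z_end, steps=30):
    """Yield one rendered frame per step along the line from z_start to z_end."""
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_start + t * z_end  # straight-line path in latent space
        with torch.no_grad():
            yield G(z)

G = Generator().eval()
z_a, z_b = torch.randn(1, Z_DIM), torch.randn(1, Z_DIM)
frames = list(interpolate(G, z_a, z_b))
print(len(frames), frames[0].shape)  # 30 frames of shape (1, 3, 64, 64)
```

Mapping the interpolation parameter (or individual latent coordinates) to sliders, MIDI, or audio features is what turns this kind of navigation into a live, real-time control surface.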
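Since training on video ultimately means training on individual frames, here is a sketch of one common way to turn a clip into an image dataset using OpenCV. The paths, frame stride, and PNG output are illustrative assumptions rather than Autolume's own dataset tooling; a decode failure here is typically the kind of codec/container issue the workshop troubleshoots.

```python
# Sketch: decode a video with OpenCV and save every n-th frame as a PNG,
# producing a folder of images that can serve as a training dataset.
import os
import cv2

def video_to_frames(video_path, out_dir, every_nth=5):
    """Save every `every_nth` frame of `video_path` into `out_dir`; return the count."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise IOError(f"Could not open {video_path} (unsupported codec or container?)")
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or decode error
            break
        if idx % every_nth == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Hypothetical usage:
# n = video_to_frames("my_clip.mp4", "dataset/frames", every_nth=5)
# print(f"saved {n} frames")
```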
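For the idea of finding salient dimensions of change, one published approach (GANSpace) runs PCA over many intermediate latent codes and treats the principal components as candidate control directions. The sketch below uses a random placeholder mapping network purely so it runs standalone; whether Autolume uses this exact method is not stated in the workshop.

```python
# Sketch of PCA-based discovery of salient latent directions (the GANSpace idea):
# sample many latents, map them to the intermediate space, and take principal
# components as candidate "dimensions of change".
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

Z_DIM = W_DIM = 512

# Placeholder mapping network z -> w (in StyleGAN this is a trained MLP).
mapping = nn.Sequential(nn.Linear(Z_DIM, W_DIM), nn.LeakyReLU(0.2), nn.Linear(W_DIM, W_DIM))

with torch.no_grad():
    z = torch.randn(10_000, Z_DIM)   # many random latent samples
    w = mapping(z).numpy()           # their intermediate latent codes

pca = PCA(n_components=10).fit(w)    # principal directions = candidate salient dimensions
direction = torch.from_numpy(pca.components_[0]).float()

# Pushing a latent code along `direction` changes one dominant factor of
# variation (e.g. time, pose, or a facial feature), which can then be exposed
# as a slider or stepped through to render a temporally coherent sequence.
w_edited = torch.from_numpy(w[0]).float() + 3.0 * direction
print(w_edited.shape)  # torch.Size([512])
```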

Link to Video


Published Apr 5, 2025