Machine Learning for Creativity and Design

NeurIPS 2022 Workshop

December 9th, virtual

Image credit: Hannah Johnston's "Art Connects Us" (2021).


Machine co-creativity continues to grow and attract a wider audience to machine learning. Generative models, for example, have enabled new types of media creation across language, images, and music, including recent advances such as Imagen, Flamingo, and DALL·E 2. This one-day workshop will broadly explore topics in the application of machine learning to creativity and design, including:

State-of-the-art algorithms for the creation of new media. We will showcase machine learning models that achieve state-of-the-art results in traditional media creation tasks (e.g., image, audio, or video synthesis) and that are also being used by the artist community.

Artist accessibility of machine learning models. Researchers building the next generation of machine learning models for media creation will be challenged to understand the needs of artists. We will also hear from the Human–Computer Interaction and User Experience communities, and from those developing machine learning–based creative tools.

The social and cultural impact of these new models. Ethical implications range from the use of biased datasets and the replication of artistic work to the potential erosion of trust in media content.

Artistic applications. Finally, we will hear from some of the artists who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process.

We will aim to balance the technical issues and challenges of applying the latest machine learning models and techniques to creativity and design with the philosophical and cultural questions that surround this area of research.

The goal of this workshop is to bring together researchers and artists interested in exploring the intersection of human creativity and machine learning. This workshop solicits viewpoints from artists and technology developers to look beyond technical issues to better understand the needs of artists and creators.

Invited Speakers


Some of the sessions below will take place on our Discord server and others in our Zoom livestream. Links to all sessions are available on our NeurIPS workshop page (registered participants only).

Time (CST) Event
09:15 Welcome and Introduction
09:30 Poster Session 1 (All Posters; Discord)
10:30 Speaker Presentation by Aaron Hertzmann (Zoom)
11:00 Speaker Presentation by Alexa Steinbrück (Zoom)
11:30 Speaker Presentation by Mohammad Norouzi (Zoom)
12:00 Speaker Presentation / Stable Diffusion (Zoom)
12:30 Q&A Panel Discussion 1 (Aaron Hertzmann, Alexa Steinbrück, others; moderated by Bokar N’Diaye; Zoom + Rocketchat)
13:00 Art Show
13:30 Social 1
14:00 Paper Orals (Zoom)
14:30 Artwork Spotlights (Zoom)
15:00 AI Performance
16:00 Poster Session 2 (All Posters; Discord)
17:00 Speaker Presentation by Anastasiia Raina (Zoom)
17:30 Speaker Presentation by Eunsu Kang (Zoom)
18:00 Speaker Presentation by Yanghua Jin (Zoom)
18:30 Speaker Presentation by Kanru Hua (Zoom)
19:00 Q&A Panel Discussion 2 (Anastasiia Raina, Eunsu Kang, Yanghua Jin, Kanru Hua; moderated by Yingtao Tian; Zoom + Rocketchat)
19:30 Art Show (rebroadcast)
20:00 Closing Remarks
20:15 Social 2
21:00 End

Accepted Papers

# Title Authors
1 Instrument Separation of Symbolic Music by Explicitly Guided Diffusion Model Sangjun Han, Hyeongrae Ihm, DaeHan Ahn, Woohyung Lim
2 Videogenic: Video Highlights via Photogenic Moments David Chuan-En Lin, Fabian Caba, Joon-Young Lee, Oliver Wang, Nikolas Martelaro
3 Co-writing screenplays and theatre scripts alongside language models using Dramatron Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, Richard Evans
4 High-Resolution Image Editing via Multi-Stage Blended Diffusion Johannes Ackermann, Minjun Li
5 Language Does More Than Describe: On The Lack Of Figurative Speech in Text-To-Image Models Ricardo Kleinlein, Cristina Luna-Jiménez, Fernando Fernández-Martínez
6 Intentional Dance Choreography with Semi-Supervised Recurrent VAEs Mathilde Papillon, Mariel Pettee, Nina Miolane
7 Visualizing Semantic Walks Shumeet Baluja, David Marwood
8 Personalizing Text-to-Image Generation via Aesthetic Gradients Victor Gallego
9 3DGEN: A GAN-based approach for generating novel 3D models from image data Antoine Schnepf, Ugo Tanielian, Flavian Vasile
10 CICADA: Interface for Concept Sketches Using CLIP Tomas Lawton
11 VideoMap: Video Editing in Latent Space David Chuan-En Lin, Fabian Caba, Joon-Young Lee, Oliver Wang, Nikolas Martelaro
12 How do Musicians Experience Jamming with a Co-Creative “AI”? Notto J. W. Thelle, Rebecca Fiebrink
13 Botto: A Decentralized Autonomous Artist Mario Klingemann, Simon Hudson, Zivvy Epstein
14 Surreal VR Pong: LLM approach to Game Design Jasmine A Roberts, Andrzej Banburski, Jaron Lanier
15 Not All Artists Speak English: Generating images with DALL-E 2 from Portuguese Gretchen Eggers
16 Datasets That Are Not: Evolving Novelty Through Sparsity and Iterated Learning Yusong Wu, Kyle Kastner, Tim Cooijmans, Cheng-Zhi Anna Huang, Aaron Courville
17 Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization Peter Schaldenbrand, Zhixuan Liu, Jean Oh
18 Sequence Modeling Motion-Captured Dance Emily Napier, Gavia Gray, Sageev Oore
19 A programmable interface for creative exploration Gerard Serra, Oriol Domingo, Pol Baladas

Accepted Artworks

This year we accepted 26 of 43 submitted artworks to the gallery, a 60.4% acceptance rate. Five exceptional artworks were awarded a spotlight, with the option of a 5-7 minute talk about the artwork during the workshop. All artworks were also shown in the art show video below.

Special thanks to the artwork jury: Beverley-Claire Okogwu (CMU); Eunsu Kang (CMU); Evan Casey (Cadmium); Lia Coleman (CMU); Moisés Horta Valenzuela; Nao Tokui (Keio); Terence Broad (University of the Arts London).

Artwork Spotlights

Title Artists
Flowers for Frankenstein’s Monster Derrick Schultz
Machine Reflections: A Self-Portrait Series Orsolya Szantho
Ducking Jorn Filippo Fedeli, Anna Angelica Ainio
Salvaging the beauty left behind Keito Takaishi, Asuka Ishii, Kazufumi Shibuya, Nao Tokui
Interactive Afflatus Santa Naruse
# Title Artists
1 Armored Skin& Dark Seed Yunyoung Jang
2 Artificial intelligence breeding of underwater plants Milad Hakimshafaei
3 Compressed ideographs -visualized- Scott Allen, Keito Takaishi, Asuka Ishii, Kazufumi Shibuya, Yuma Matsuoka, Atsuya Kobayashi, Nao Tokui
4 Song of Hairs Song Tang
5 Making Dance with Intention Mathilde Papillon, Mariel Pettee, Nina Miolane
6 Psychedelic Forms Mar Canet Sola, Varvara Guljajeva
7 Dream Painter Mar Canet Sola, Varvara Guljajeva
8 Little Science Vadim Epstein
9 Salvaging the beauty left behind Keito Takaishi, Asuka Ishii, Kazufumi Shibuya, Nao Tokui
10 Hidden Clergy Diego Porres
11 Alt Nature Zihou Ng
12 Flowers for Frankenstein’s Monster Derrick Schultz
13 Interactive Afflatus Santa Naruse
14 Ducking Jorn Filippo Fedeli, Anna Angelica Ainio
15 Autolume Acedia (2022) Jonas F Kraasch, Philippe Pasquier
16 [a]life drawing David Estevez
17 Lockdown (music video) Sophia H Sun, Margaret Schedel, Sofy Yuditskaya, Ria Rajan, Susan E Green-Mateu
18 The Old Tune Anton O Wiehe
19 Old Sights, New Visions Adam Cole
20 we meet, we connect sebastian rojas, hypereikon lab
21 The Quietest Remains Ryan Thompson
22 Reach Out Vít Růžička
23 The Faded Landscape Mingyong Cheng
24 Machine Reflections: A Self-Portrait Series Orsolya Szantho
25 Tasty Piano Cédric Colas
26 Incorporation: artwork Laetitia Teodorescu


If you have any questions, please contact us at


How to attend

A registration ticket must be purchased through NeurIPS. This will give you access to our workshop page on the NeurIPS site, with links to the livestream, poster sessions, and socials.

Call for Submissions

We invite submissions in the form of papers and/or artwork. The deadline for submissions has been extended from Monday, September 19 to Monday, September 26 (11:59 pm, anywhere on Earth).

To Submit a Paper

We invite participants to submit 2-page papers in the NeurIPS camera-ready format (with author names visible) through our CMT portal.

Topics may include (but are not limited to):

We encourage all authors to consider the ethical implications of their work. This can be discussed in a 1-paragraph section at the end of the paper and will not count towards the page limit.

In your submission, you may also indicate whether you would like to present a demo of your work during the workshop (if applicable).

Papers will be reviewed by committee members in a single-blind process, and accepted authors may present at the workshop in the form of a short talk, panel, and/or demo. At least one author of each accepted paper must register for and attend the workshop. Accepted papers will appear on the workshop website. Please note that we do not follow a formal peer-review process and are normally unable to provide detailed feedback on individual submissions. We encourage submission of work that has been previously published, as well as work in progress on topics relevant to the workshop. The workshop will not have official proceedings.

References and any supplementary materials do not count toward the 2-page limit; however, reading the supplementary materials is at the reviewers' discretion.

To Submit Artwork

We welcome submission of artwork that has been created using machine learning (autonomously or with humans). We invite art submissions in any medium, including but not limited to:

This year we are asking for submissions to consist of:

Place these in a single zip archive called

Submit this zip file through our CMT portal.

This submission format is simply for standardization in the review and judging process for art submissions, so do not worry too much about it.

For accepted works, some text and one or more images or a video will eventually be displayed in an online gallery; please see last year's gallery site for reference. Accepted artists will later have an opportunity to edit the text description and/or add links to alternate formats better suited to the artwork's presentation.

In addition, during the online workshop itself we will show a number of accepted art pieces as a slideshow (which is why we ask for the landscape format). We may also invite a few select creators of accepted artwork to participate in a short talk, panel, and/or demo.


Organizers

Samaneh Azadi
Lia Coleman
Yingtao Tian
Tom White