Machine co-creativity continues to grow and attract a wider audience to machine learning. Generative models, for example, have enabled new types of media creation across language, images, and music, including recent advances such as CLIP, VQGAN, and DALL·E. This one-day workshop will broadly explore topics in the application of machine learning to creativity and design, including:
State-of-the-art algorithms for the creation of new media. We will showcase machine learning models that achieve state-of-the-art results in traditional media creation tasks (e.g., image, audio, or video synthesis) and that are also being used by the artist community.
Artist accessibility of machine learning models. Researchers building the next generation of machine learning models for media creation will be challenged to understand the accessibility needs of artists. Artists and members of the Human-Computer Interaction and User Experience communities will be encouraged to join the conversation.
The sociocultural and political impact of these new models. With the growing popularity of generative machine learning models, we are witnessing them begin to affect our everyday surroundings, with concerns ranging from racial and gender bias in the algorithms and datasets used for media creation to the ways new media manipulation tools may erode our collective trust in media content.
Artistic applications. We will hear directly from artists who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process, and they will showcase their work.
This workshop aims to balance the technical challenges of applying the latest machine learning models and techniques to creativity and design with the broader sociocultural and political issues surrounding this area of research. Our goal is to bring together researchers and artists interested in exploring the intersection of human creativity and machine learning, to foster collaboration between them, and to promote the sharing of ideas, perspectives, new research, artwork, and artistic needs.
As in previous years, the workshop will include an open call for artworks incorporating machine learning techniques. These works will be collected and presented online, providing a more personal forum for sharing artifacts created with machine learning as well as a snapshot of the current creative machine learning landscape for the broader public.
Invited speakers
Devi Parikh, Research Scientist at Facebook AI Research (FAIR) and Associate Professor in the School of Interactive Computing at Georgia Tech.
Hypereikon, Art duo specializing in Generative Visual Arts.
Mark Riedl, Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center.
Moisés Horta Valenzuela, Sound Artist, Electronic Musician, and Creative Technologist.
Schedule and format
All times are in EST (UTC -5).
Some sessions will take place on our Discord server and others in our Zoom livestream. Links to all sessions are available on our NeurIPS workshop website (registered participants only).
| Time | Session |
|------|---------|
| 11:15 | Welcome and Introduction. Presented by Mattie Tesfaldet (Zoom) |
| 11:30 | Poster Session 1. All posters (Discord) |
| 12:30 | Computers, Creativity, and Lovelace. Speaker presentation by Mark Riedl (Zoom) |
| 13:00 | AI for Augmenting Human Creativity. Speaker presentation by Devi Parikh (Zoom) |
| 13:30 | Interspecies Intelligence in Pharmako-AI. Speaker presentation by Kenric Allado-McDowell (Zoom) |
| 14:30 | Q&A Panel Discussion 1. Mark Riedl, Devi Parikh, and Kenric Allado-McDowell, moderated by Mattie Tesfaldet (Zoom + Rocketchat) |
| 15:30 | StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis. Paper oral by Peter Schaldenbrand et al. (Zoom) |
| 15:40 | Soundify: Matching Sound Effects to Video. Paper oral by David Chuan-En Lin et al. (Zoom) |
| 15:50 | Controllable and Interpretable Singing Voice Decomposition via Assem-VC. Paper oral by Kang-wook Kim et al. (Zoom) |
| 16:00 | Extending the Vocabulary of Fictional Languages using Neural Networks. Paper oral by Thomas Zacharias et al. (Zoom) |
| | Artwork spotlight by Vadim Epstein (Zoom) |
| | Artwork spotlight by Erin Smith (Zoom) |
| 16:20 | Artificial Intelligence for High Heel Shoe Design. Artwork spotlight by Sophia Neill (Zoom) |
| | Artwork spotlight by Alexey Tikhonov (Zoom) |
| 16:30 | Poster Session 2. All posters (Discord) |
| 17:30 | From Technologies to Tools. Speaker presentation by Joel Simon (Zoom) |
| 18:00 | Okachihuali: Strategies for New Future Building with AI Art. Speaker presentation by Moisés Horta Valenzuela (Zoom) |
| 18:30 | Imaginatory Processes with VQGANxCLIP. Speaker presentation by Hypereikon (Zoom) |
| 19:00 | Art Show (repeated) |
| 19:30 | Q&A Panel Discussion 2. Joel Simon and Hypereikon, moderated by David Ha (Zoom + Rocketchat) |
| | Presented by Tom White (Zoom) |
This year we saw 65 paper submissions, 36 of which were accepted (55% acceptance rate). Four exceptional papers were awarded an Oral acceptance in the form of a 10-min presentation during the workshop.
- ArtTension: Artistically extended generation based on single-image learning
- Jung-Jae Yu, Juwon Lee, Junmo Kim
- Audio-Guided Image Manipulation for Artistic Paintings
- Seung Hyun Lee, Nahyuk Lee, Chan Young Kim, Won Jeong Ryoo, Jinkyu Kim, Sang Ho Yoon, Sangpil Kim
- Aesthetic Evaluation of Ambiguous Imagery
- Xi Wang, Zoya Bylinskii, Aaron Hertzmann, Robert Pepperell
- Dance2Music: Automatic Dance-driven Music Generation
- Gunjan Aggarwal, Devi Parikh
- Telling Creative Stories Using Generative Visual Aids
- Safinah Arshad Ali, Devi Parikh
- Soundify: Matching Sound Effects to Video | Oral
- David Chuan-En Lin, Anastasis Germanidis, Cristóbal Valenzuela, Yining Shi, Nikolas Martelaro
- Modern Evolution Strategies for Creativity: Fitting Concrete Images and Abstract Concepts
- Yingtao Tian, David Ha
- Towards “Gestalt” Computation in Sound
- Ishwarya Ananthabhotla, David Ramsay, Joseph Paradiso
- Physically Embodied Deep Image Optimisation
- Daniela Mihai, Jonathon Hare
- Composer AI with tap-to-pitch generator
- Hyeongrae Ihm, Sangjun Han, Woohyung Lim
- Deep Learning Tools for Audacity: Helping Researchers Expand the Artist’s Toolkit
- Hugo F Flores Garcia, Aldo Aguilar, Ethan Manilow, Dmitry Vedenko, Bryan Pardo
- GANspire: Generating Breathing Waveforms for Art-Health Applications
- Hugo Scurto, Baptiste Caramiaux, Thomas Similowski, Samuel Bianchini
- Losses, Dissonances, and Distortions
- Pablo Samuel Castro
- StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis | Oral
- Peter Schaldenbrand, Zhixuan Liu, Jean Oh
- Artistic Autonomy in AI Art
- Alayt Abraham Issak
- Carlos Castellanos, Bhushan Patil, Johnny Diblasi
- GANArtworks In the Mood
- Yuling Chen, Colorado J Reed, Hongsuk Nam, Kevin Jun, Chenlin Ye, David Vaughan, Joyce Shen, David Steier
- Controllable and Interpretable Singing Voice Decomposition via Assem-VC | Oral
- Kang-Wook Kim, Junhyeok Lee
- BERTian Poetics: Constrained Composition with Masked LMs
- Christopher Akiki, Martin Potthast
- Towards Lightweight Controllable Audio Synthesis with Conditional Implicit Neural Representations
- Jan Zuiderveld, Marco Federici, Erik J Bekkers
- Inspiration Retrieval for Visual Exploration
- Nihal Jain, Praneetha Vaddamanu, Paridhi Maheshwari, Vishwa Vinay, Kuldeep Kulkarni
- Extending the Vocabulary of Fictional Languages using Neural Networks | Oral
- Thomas Zacharias, Ashutosh Taklikar, Raja Giryes
- Steerable discovery of neural audio effects
- Christian J. Steinmetz, Joshua D. Reiss
- XCI-Sketch: Extraction of Color Information from Images for Generation of Colored Outlines and Sketches
- V Manushree, Sameer Saxena, Parna Chowdhury, Manisimha Varma Manthena, Harsh Rathod, Ankita Ghosh, Sahil S Khose
- Exploring Latent Dimensions of Crowd-sourced Creativity
- Umut Kocasari, Alperen Bag, Efehan Atici, Pinar Yanardag
- Ethics and Creativity in Computer Vision
- Negar Rostamzadeh, Emily Denton, Linda Petrini
- Evolving Evocative 2D Views of Generated 3D Objects
- Eric Chu
- RiverGAN: Fluvial Landscapes Generation with Conditional GAN and Physical Simulations
- Xun Liu, Runjia Tian
- Sculpting with Words
- Joel Simon, Victor Perez Muñoz, Tal Shiri
- Neural Design Space Exploration
- Wei Jiang, Richard Davis, Kevin Gonyop Kim, Pierre Dillenbourg
- Inspiration through Observation: Demonstrating the Influence of Generated Text on Creative Writing
- Melissa Roemmele
- Gaudí: Conversational Interactions with Deep Representations to Generate Image Collections
- Victor S Bursztyn, Jennifer Healey, Vishwa Vinay
- Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks
- Diego Porres
- Malakai: Music That Adapts to the Shape of Emotions
- Zack Harris, Liam Clarke, Pablo Samuel Castro, Dante Camarena, Pietro Gagliano, Manal Siddiqui
- Generating Diverse Realistic Laughter for Interactive Art
- Mehdi Afsar, Eric Park, Etienne Paquette, Gauthier Gidel, Kory Mathewson, Eilif Muller
- ABCD: A Bach Chorale Discriminator
- Jason D’Eon, Sageev Oore
This year we saw 52 artwork submissions, with a near-100% acceptance rate: artworks were accepted unless they violated formatting requirements. Four exceptional artworks were awarded a spotlight in the form of a 5-min presentation during the workshop. Artworks will be displayed as a slideshow during the workshop and in our online art gallery, which will go live at a date to be announced after the workshop.
Call for Submissions
We invite submissions in the form of papers and/or artwork. Update: submissions are now CLOSED.
To Submit a Paper
Topics may include (but are not limited to):
- Presentation of new machine learning techniques for generating art, music, or other creative outputs using, for instance, reinforcement learning, generative adversarial networks, novelty search and evaluation, etc
- Quantitative or qualitative evaluation of machine learning techniques for creative work and design
- Tools or techniques to improve usability or usefulness of machine learning for creative practitioners
- Descriptions, reflections, or case studies on the use of machine learning in the creation of a new art or design work
- Information-theoretic views of creativity
- Aesthetic, philosophical, social, cultural and ethical considerations surrounding the use of machine learning in creative practice
We encourage all authors to consider the ethical implications of their work. This can be discussed in a 1-paragraph section at the end of the paper and will not count towards the page limit.
In your submission, you may also indicate whether you would like to present a demo of your work during the workshop (if applicable).
Papers will be reviewed by committee members in a single-blind process, and authors of accepted papers can present at the workshop in the form of a short talk, panel, and/or demo. At least one author of each accepted paper must register for and attend the workshop. Accepted papers will appear on the workshop website. Please note that we do not adhere to a formal peer-review process and are normally unable to provide detailed feedback on individual submissions. We welcome work that has been previously published as well as work in progress on topics relevant to the workshop. The workshop will not have an official proceedings.
References and any supplementary materials do not count toward the 2-page limit; however, reading the supplementary materials is at the reviewers' discretion.
To Submit Artwork
We welcome submission of artwork that has been created using machine learning (autonomously or with humans). We invite art submissions in any medium, including but not limited to:
- Dance, Performance, Installation, Physical Object, Food, etc
This year we are asking for submissions to consist of:
- The title, year, artist, and a description (up to 200 words), optionally with a web link to additional information or further content hosted elsewhere
- One 1920x1080 “title slide” that includes the title, artist, and optionally a short blurb; this will be used as an intro for your artwork if it is displayed in slideshow format
- One 1920x1080 main image
- Optionally: up to 4 additional 1920x1080 images, or a 1920x1080 mp4 video of up to 60 seconds, showing the work in more detail
Submissions should be formatted as a single project_name.zip file. We will display the accepted art submissions in an online gallery and will do our best to show a number of art pieces as a slideshow during the online workshop itself. We may invite creators of accepted artwork to participate in the form of a short talk, panel, and/or demo.
Submissions must be made through the CMT portal.
- 17 September 2021, 11:59 UTC: Submission due date for papers and art
- 24 September 2021, 11:59 UTC: Submission due date for papers and art (Extended)
- 25 September 2021, 11:59 UTC: Submission due date for papers and art (Extended+)
- 22 October 2021: Acceptance notification for papers and art
- 05 November 2021, 01:00 UTC: Camera-ready (or revisions) due date for papers and art
- 11 November 2021, 01:00 UTC: Camera-ready (or revisions) due date for papers and art (Extended)
- 16 November 2021: Final acceptance/rejection notification for revised artwork submissions
- 6–14 December 2021: NeurIPS Conference
- 13 December 2021: Workshop
If you have any questions, please contact us at email@example.com
Workshop website: https://neuripscreativityworkshop.github.io/2021
- 2020 workshop (Everywhere, Online)
- 2019 workshop (Vancouver, Canada)
- 2018 workshop (Montreal, Canada)
- 2017 workshop (Long Beach, CA, USA)
The art submissions from previous years can be viewed here
How to attend
Organizers
Tom White, Victoria University of Wellington
Mattie Tesfaldet, McGill University / MILA
Samaneh Azadi, Facebook AI Research (FAIR)
Daphne Ippolito, University of Pennsylvania / Google Brain
Lia Coleman, Rhode Island School of Design
David Ha, Google Brain