Machine Learning for Creativity and Design

What’s the State of the 'Art' in Machine Learning? This workshop explores applications of the latest machine learning technologies in art and design.

13 December 2021, Online.

Image: Elhoseiny, Mohamed and Jha, Divyansh. The Corona Monster. 2020. AI Art Gallery, Online.

Introduction

Machine co-creativity continues to grow and to attract a wider audience to machine learning. Generative models, for example, have enabled new types of media creation across language, images, and music, including recent advances such as CLIP, VQGAN, and DALL·E. This one-day workshop will broadly explore applications of machine learning to creativity and design, including the topics below (a short code sketch after the list illustrates the CLIP-based text-image matching that several of these tools build on):

State-of-the-art algorithms for the creation of new media. Machine learning models that achieve state-of-the-art results in traditional media creation tasks (e.g., image, audio, or video synthesis) and that are also being used by the artist community will be showcased.

Artist accessibility of machine learning models. Researchers building the next generation of machine learning models for media creation will be challenged to understand the accessibility needs of artists. Artists and members of the Human-Computer Interaction and User Experience communities will be encouraged to join the conversation.

The sociocultural and political impact of these new models. As generative machine learning models grow in popularity, they are beginning to affect our everyday surroundings, raising concerns that range from racial and gender bias in the algorithms and datasets used for media creation to the ways new media-manipulation tools may erode our collective trust in media content.

Artistic applications. We will hear directly from some of the artists who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process, and we will showcase their work.
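
As a rough illustration of the technology behind several of the tools and talks mentioned here, the following minimal sketch (ours, not taken from any workshop submission) shows the CLIP text-image scoring step that systems such as VQGAN+CLIP optimise a generator against. It assumes the open-source clip package from OpenAI; the model name, file name, and prompt are placeholders.

```python
# Minimal sketch of the text-image matching at the heart of CLIP-guided
# generation (e.g., VQGAN+CLIP): CLIP embeds a text prompt and an image
# into a shared space, and their cosine similarity scores how well the
# image matches the prompt. A generator can be optimised to raise this
# score. Model name, file name, and prompt below are illustrative.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("candidate.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a watercolour painting of a lighthouse"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Normalise, then take the dot product to get cosine similarity.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).item()
print(f"CLIP similarity: {similarity:.3f}")
```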

This workshop aims to balance the technical issues and challenges of applying the latest machine learning models and techniques to creativity and design with the broader sociocultural and political issues that surround this area of research. Its goal is to bring together researchers and artists interested in exploring the intersection of human creativity and machine learning, to foster collaboration between them, and to promote the sharing of ideas, perspectives, new research, artwork, and artistic needs.

As in previous years, the workshop will include an open call for artworks incorporating machine learning techniques. These works will be collected and presented online, providing a more personal forum for sharing artifacts created with machine learning and offering the broader public a snapshot of the current creative machine learning landscape.

Invited Speakers

Devi Parikh, Research Scientist at Facebook AI Research (FAIR) and Associate Professor in the School of Interactive Computing at Georgia Tech.

Hypereikon, Art duo specializing in Generative Visual Arts.

Joel Simon, Multidisciplinary Artist, Toolmaker, and Researcher. Founder and Director of Morphogen.

Kenric Allado-McDowell, Writer, Speaker, and Musician. Co-author of the book Pharmako-AI.

Mark Riedl, Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center.

Moisés Horta Valenzuela, Sound Artist, Electronic Musician, and Creative Technologist.

Schedule and format

All times are in EST (UTC -5).

Some of the sessions below will take place on our Discord server and others on our Zoom livestream. Links to all sessions are available on our NeurIPS workshop page (registered participants only).

Time Event
11:15 Welcome and Introduction
Presented by Mattie Tesfaldet; Zoom
11:30 Poster Session 1
All Posters; Discord
12:30 Computers, Creativity, and Lovelace
Speaker Presentation by Mark Riedl; Zoom
13:00 AI for Augmenting Human Creativity
Speaker Presentation by Devi Parikh; Zoom
13:30 Interspecies Intelligence in Pharmako-AI
Speaker Presentation by Kenric Allado-McDowell; Zoom
14:00 Art Show
Zoom
14:30 Q&A Panel Discussion 1
Mark Riedl, Devi Parikh, and Kenric Allado-McDowell, moderated by Mattie Tesfaldet; Zoom + Rocketchat
15:00 Social 1
Discord
15:30 StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis
Paper Oral by Peter Schaldenbrand et al.; Zoom
15:40 Soundify: Matching Sound Effects to Video
Paper Oral by David Chuan-En Lin et al.; Zoom
15:50 Controllable and Interpretable Singing Voice Decomposition via Assem-VC
Paper Oral by Kang-wook Kim et al.; Zoom
16:00 Extending the Vocabulary of Fictional Languages using Neural Networks
Paper Oral by Thomas Zacharias et al.; Zoom
16:10 Jabberwocky
Artwork Spotlight by Vadim Epstein; Zoom
16:15 Iterative Iterative
Artwork Spotlight by Erin Smith; Zoom
16:20 Artificial Intelligence for High Heel Shoe Design
Artwork Spotlight by Sophia Neill; Zoom
16:25 text2pixelart
Artwork Spotlight by Alexey Tikhonov; Zoom
16:30 Poster Session 2
All Posters; Discord
17:30 From Technologies to Tools
Speaker Presentation by Joel Simon; Zoom
18:00 Okachihuali: Strategies for New Future Building with AI Art
Speaker Presentation by Moisés Horta Valenzuela; Zoom
18:30 Imaginatory Processes with VQGANxCLIP
Speaker Presentation by Hypereikon; Zoom
19:00 Art Show (repeated)
Zoom
19:30 Q&A Panel Discussion 2
Joel Simon and Hypereikon, moderated by David Ha; Zoom + Rocketchat
20:00 Closing remarks
Presented by Tom White; Zoom
20:15 Social 2
Discord
21:00 End

Accepted papers

This year we received 65 paper submissions, 36 of which were accepted (a 55% acceptance rate). Four exceptional papers were awarded an Oral acceptance in the form of a 10-minute presentation during the workshop.

  1. ArtTension: Artistically extended generation based on single-image learning
    • Jung-Jae Yu, Juwon Lee, Junmo Kim
  2. Audio-Guided Image Manipulation for Artistic Paintings
    • Seung Hyun Lee, Nahyuk Lee, Chan Young Kim, Won Jeong Ryoo, Jinkyu Kim, Sang Ho Yoon, Sangpil Kim
  3. Aesthetic Evaluation of Ambiguous Imagery
    • Xi Wang, Zoya Bylinskii, Aaron Hertzmann, Robert Pepperell
  4. Dance2Music: Automatic Dance-driven Music Generation
    • Gunjan Aggarwal, Devi Parikh
  5. Telling Creative Stories Using Generative Visual Aids
    • Safinah Arshad Ali, Devi Parikh
  6. Soundify: Matching Sound Effects to Video | Oral
    • David Chuan-En Lin, Anastasis Germanidis, Cristóbal Valenzuela, Yining Shi, Nikolas Martelaro
  7. Modern Evolution Strategies for Creativity: Fitting Concrete Images and Abstract Concepts
    • Yingtao Tian, David Ha
  8. Towards “Gestalt” Computation in Sound
    • Ishwarya Ananthabhotla, David Ramsay, Joseph Paradiso
  9. Physically Embodied Deep Image Optimisation
    • Daniela Mihai, Jonathon Hare
  10. Composer AI with tap-to-pitch generator
    • Hyeongrae Ihm, Sangjun Han, Woohyung Lim
  11. Deep Learning Tools for Audacity: Helping Researchers Expand the Artist’s Toolkit
    • Hugo F Flores Garcia, Aldo Aguilar, Ethan Manilow, Dmitry Vedenko, Bryan Pardo
  12. GANspire: Generating Breathing Waveforms for Art-Health Applications
    • Hugo Scurto, Baptiste Caramiaux, Thomas Similowski, Samuel Bianchini
  13. Losses, Dissonances, and Distortions
    • Pablo Samuel Castro
  14. StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis | Oral
    • Peter Schaldenbrand, Zhixuan Liu, Jean Oh
  15. Artistic Autonomy in AI Art
    • Alayt Abraham Issak
  16. Beauty
    • Carlos Castellanos, Bhushan Patil, Johnny Diblasi
  17. GANArtworks In the Mood
    • Yuling Chen, Colorado J Reed, Hongsuk Nam, Kevin Jun, Chenlin Ye, David Vaughan, Joyce Shen, David Steier
  18. Controllable and Interpretable Singing Voice Decomposition via Assem-VC | Oral
    • Kang-wook Kim, Junhyeok Lee
  19. BERTian Poetics: Constrained Composition with Masked LMs
    • Christopher Akiki, Martin Potthast
  20. Towards Lightweight Controllable Audio Synthesis with Conditional Implicit Neural Representations
    • Jan Zuiderveld, Marco Federici, Erik J Bekkers
  21. Inspiration Retrieval for Visual Exploration
    • Nihal Jain, Praneetha Vaddamanu, Paridhi Maheshwari, Vishwa Vinay, Kuldeep Kulkarni
  22. Extending the Vocabulary of Fictional Languages using Neural Networks | Oral
    • Thomas Zacharias, Ashutosh Taklikar, Raja Giryes
  23. Steerable discovery of neural audio effects
    • Christian J. Steinmetz, Joshua D. Reiss
  24. XCI-Sketch: Extraction of Color Information from Images for Generation of Colored Outlines and Sketches
    • V Manushree, Sameer Saxena, Parna Chowdhury, Manisimha Varma Manthena, Harsh Rathod, Ankita Ghosh, Sahil S Khose
  25. Exploring Latent Dimensions of Crowd-sourced Creativity
    • Umut Kocasari, Alperen Bag, Efehan Atici, Pinar Yanardag
  26. Ethics and Creativity in Computer Vision
    • Negar Rostamzadeh, Emily Denton, Linda Petrini
  27. Evolving Evocative 2D Views of Generated 3D Objects
    • Eric Chu
  28. RiverGAN: Fluvial Landscapes Generation with Conditional GAN and Physical Simulations
    • Xun Liu, Runjia Tian
  29. Sculpting with Words
    • Joel Simon, Victor Perez Muñoz, Tal Shiri
  30. Neural Design Space Exploration
    • Wei Jiang, Richard Davis, Kevin Gonyop Kim, Pierre Dillenbourg
  31. Inspiration through Observation: Demonstrating the Influence of Generated Text on Creative Writing
    • Melissa Roemmele
  32. Gaudí: Conversational Interactions with Deep Representations to Generate Image Collections
    • Victor S Bursztyn, Jennifer Healey, Vishwa Vinay
  33. Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks
    • Diego Porres
  34. Malakai: Music That Adapts to the Shape of Emotions
    • Zack Harris, Liam Clarke, Pablo Samuel Castro, Dante Camarena, Pietro Gagliano, Manal Siddiqui
  35. Generating Diverse Realistic Laughter for Interactive Art
    • Mehdi Afsar, Eric Park, Etienne Paquette, Gauthier Gidel, Kory Mathewson, Eilif Muller
  36. ABCD: A Bach Chorale Discriminator
    • Jason D’Eon, Sageev Oore

Accepted artworks

This year we received 52 artwork submissions, with a near-100% acceptance rate: artworks were accepted unless they violated the formatting requirements. Four exceptional artworks were awarded a spotlight in the form of a 5-minute presentation during the workshop. Artworks will be shown as a slideshow during the workshop and displayed in our online art gallery at a date to be announced after the workshop.

Call for Submissions

We invite submissions in the form of papers and/or artwork. Update: submissions are now CLOSED.

To Submit a Paper

We invite participants to submit 2-page papers in the NeurIPS camera-ready format (with author names visible) through our CMT portal.

Topics may include (but are not limited to):

We encourage all authors to consider the ethical implications of their work. This can be discussed in a 1-paragraph section at the end of the paper and will not count towards the page limit.

In your submission, you may also indicate whether you would like to present a demo of your work during the workshop (if applicable).

Papers will be reviewed by committee members in a single-blind process, and accepted authors can present at the workshop in the form of a short talk, panel, and/or demo. At least one author of each accepted paper must register for and attend the workshop. Accepted papers will appear on the workshop website. Please note that we do not adhere to a formal peer-review process and are normally unable to provide detailed feedback on individual submissions. We encourage submissions of previously published work as well as work in progress on topics relevant to the workshop. The workshop will not have official proceedings.

References and any supplementary materials do not count towards the 2-page limit; however, reading supplementary materials is at the reviewers’ discretion.

To Submit Artwork

We welcome submissions of artwork created using machine learning, whether autonomously or with humans in the loop. We invite art submissions in any medium, including but not limited to:

This year we are asking for submissions to consist of:

Submissions should be formatted as a single project_name.zip file. We will display the accepted art submissions in an online gallery and will do our best to show a number of art pieces as a slideshow during the online workshop itself. We may invite creators of accepted artwork to participate in the form of a short talk, panel, and/or demo.
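
As a convenience, the packaging step can be scripted. The sketch below is a hypothetical example (the file names are placeholders, not the required contents) that bundles a submission into the requested project_name.zip:

```python
# Hypothetical helper for packaging an art submission as the single
# project_name.zip the call asks for. File names are placeholders;
# consult the call for the exact required contents.
import zipfile
from pathlib import Path

submission_files = ["statement.pdf", "artwork.png", "preview.mp4"]  # placeholders

with zipfile.ZipFile("project_name.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in submission_files:
        path = Path(name)
        if path.exists():
            zf.write(path, arcname=path.name)  # store at the archive root
```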

Submissions must be made through the CMT portal.

Important Dates

17 September 2021, 11:59 UTC: Submission due date for papers and art
24 September 2021, 11:59 UTC: Submission due date for papers and art (Extended)
25 September 2021, 11:59 UTC: Submission due date for papers and art (Extended+)

22 October 2021: Acceptance notification for papers and art

05 November 2021, 01:00 UTC: Camera-ready (or Revisions) due date for papers and art
11 November 2021, 01:00 UTC: Camera-ready (or Revisions) due date for papers and art (Extended)

16 November 2021: Final acceptance/rejection notification for revised artwork submissions

6–14 December 2021: NeurIPS Conference

13 December 2021: Workshop

Contact

If you have any questions, please contact us at neuripscreativityworkshop@googlegroups.com

Workshop website: https://neuripscreativityworkshop.github.io/2021

Previous years:

Art submissions from previous years can be viewed in the online galleries from those workshops.

How to attend

A registration ticket must be purchased at neurips.cc. Registration gives you access to our workshop page on the NeurIPS site, with links to the livestream, poster sessions, and socials.

Organisers

Tom White, Victoria University of Wellington

Mattie Tesfaldet, McGill University / MILA

Samaneh Azadi, Facebook AI Research (FAIR)

Daphne Ippolito, University of Pennsylvania / Google Brain

Lia Coleman, Rhode Island School of Design

David Ha, Google Brain

Sponsors

Picsart

Google