If you have any questions, please contact us at neuripscreativityworkshop@googlegroups.com
You can also follow us on Twitter for updates: @ML4CDworkshop
Introduction
Machine co-creativity continues to grow and to attract a wider audience to machine learning. Generative models, for example, have enabled new kinds of media creation across language, images, and music, including recent advances such as large language models (ChatGPT, Bard, Claude), high-quality text-to-image models (Stable Diffusion, MidJourney), and audio models such as AudioLM and MusicLM. This one-day workshop will broadly explore topics in the application of machine learning to creativity and design, including:
- State-of-the-art algorithms for the creation of new media. We will showcase machine learning models that achieve state-of-the-art results in traditional media creation tasks (e.g., text, image, audio, or video synthesis) and are also being used by the artist community. This year especially, with the popularity of chatbot models like ChatGPT, language has become a new focus where creativity shines. Novel media such as 3D, and workflows that combine multiple media, are also taking shape and are poised to draw the next wave of attention.
- Artist accessibility, fine-tuning, and collaboration around machine learning models. Researchers building the next generation of machine learning models for media creation will be challenged to understand the needs of artists, alongside the Human-Computer Interaction and User Experience communities and those developing and using machine-learning-based creative tools. Moreover, with new techniques like LoRA and ControlNet, artists and practitioners can fine-tune their own models, then share and compose them on community-driven platforms.
- The social and cultural impact of these new models. Ethical implications range from the use of biased datasets and the replication of artistic work to the potential erosion of trust in media content.
- Artistic applications. Finally, we will hear from some of the artists who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process.
We will aim to balance the technical issues and challenges of applying the latest machine learning models and techniques to creativity and design with the philosophical and cultural questions that surround this area of research.
The goal of this workshop is to bring together researchers and artists interested in exploring the intersection of human creativity and machine learning. This workshop solicits viewpoints from artists and technology developers to look beyond technical issues to better understand the needs of artists and creators.
Invited Speakers
- Richard Zhang, Senior Research Scientist, Adobe Research
- Cristóbal Valenzuela, RunwayML Co-Founder and CEO
- Tianwei Yin, MIT
- Misha Konstantinov and Daria Bakshandaeva, DeepFloyd Lead Researchers
- Aleksander Holynski, UC Berkeley and Google Research
- Alexander Mordvintsev, Google Research
Schedule
The following schedule is tentative.
Time (CST) | Duration | Event |
---|---|---|
8:15 AM | 0:15:00 | Welcome and Introduction |
8:30 AM | 0:30:00 | Invited Talk 1 - Tianwei Yin |
9:00 AM | 0:25:00 | Invited Talk 2 - Misha Konstantinov & Daria Bakshandaeva (remote) |
9:30 AM | 0:25:00 | Invited Talk 3 - Alexander Mordvintsev & Ettore Randazzo (remote) |
10:00 AM | 0:30:00 | Art gallery / Coffee Break / Social |
10:30 AM | 0:30:00 | Panel Discussion |
11:00 AM | 1:00:00 | Paper & Artwork Spotlights |
12:00 PM | 1:00:00 | Lunch |
1:00 PM | 0:30:00 | Invited Talk 4 - Aleksander Holynski |
1:30 PM | 0:25:00 | Invited Talk 5 - Richard Zhang (remote) |
2:00 PM | 0:30:00 | Invited Talk 6 - Cristóbal Valenzuela |
2:30 PM | 0:30:00 | Panel / Open Discussion |
3:00 PM | 0:30:00 | Art / Coffee Break / Social |
3:30 PM | 1:00:00 | Poster session |
4:30 PM | 0:10:00 | Conclusion |
4:40 PM | 1:00:00 | Free-form discussion |
Accepted Papers
This year we accepted 32 papers out of 80 submissions, a 40% acceptance rate. Among the accepted papers, 7 were selected as spotlight papers, marked with * below.
# | Title | Authors |
---|---|---|
1 | Latent Painter | Su, Shih-Chieh |
2 * | Minecraft-ify: Minecraft Style Image Generation with Text-guided Image Editing for In-Game Application | Kim, Bumsoo; Byun, Sanghyun; Jung, Yonghoon; Shin, Wonseop; Ul Amin, Sareer; Seo, Sanghyun |
3 | BioSpark: An End-to-end Generative System for Biological-Analogical Inspirations and Ideation | Kang, Hyeonsu; Lin, David Chuan-En; Martelaro, Nikolas; Kittur, Aniket; Chen, Yan-Ying; Hong, Matthew K |
4 | Interactive Machine Learning for Generative Models | Shimizu, Junichi; olowe, ireti; Broad, Terence; Vigliensoni, Gabriel; Thattai Ravikumar, Prashanth; Fiebrink, Rebecca |
5 * | SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions | Lee, Yuseung; Kim, Kunho; Kim, Hyunjin; Sung, Minhyuk |
6 * | Real-time Animation Generation and Control on Rigged Models via Large Language Models | Huang, Han; De La Torre Romo, Fernanda; Fang, Cathy Mengying; Banburski, Andrzej; Amores, Judith; Lanier, Jaron |
7 | The Interface for Symbolic Music Loop Generation Conditioned on Musical Metadata | Han, Sangjun; Ihm, Hyeongrae; Lim, Woohyung |
8 | Weaving ML with Human Aesthetic Assessments to Augment Design Space Exploration | Jeon, Youngseung; Hong, Matthew K; Chen, Yan-Ying; Murakami, Kalani; Li, Jonathan; Chen, Xiang ‘Anthony’; Klenk, Matthew |
9 * | Envisioning Distant Worlds: Training a Latent Diffusion Model with NASA’s Exoplanet Data | Beaty, Marissa; Broad, Terence |
10 | Zero2Story: Novel Generation Framework for Anyone | Park, Chansung; Lee, Youngbin; Han, Sangjoon; Lee, Jungue |
11 | On the Distillation of Stories for Transferring Narrative Arcs in Collections of Independent Media | Ashley, Dylan R; Herrmann, Vincent; Friggstad, Zachary; Schmidhuber, Jürgen |
12 | Breaking Barriers to Creative Expression: Co-Designing and Implementing an Accessible Text-to-Image Interface | Taheri, Atieh; Izadi, Mohammad; Shriram, Gururaj; Rostamzadeh, Negar; Kane, Shaun |
13 | Multi-Subject Personalization | Jain, Arushi; Paliwal, Shubham Singh; Sharma, Monika; Jamwal, Vikram; Vig, Lovekesh |
14 | CalliPaint: Chinese Calligraphy Inpainting with Diffusion Model | Liao, Qisheng; Wang, Zhinuo; Abdul-Mageed, Muhammad; Xia, Gus |
15 | CAD-LLM: Large Language Model for CAD Generation | Wu, Sifan; Khasahmadi, Amir Hosein; Katz, Mor; Jayaraman, Pradeep Kumar; Pu, Yewen; Willis, Karl D.D.; Liu, Bang |
16 | Unrolling Virtual Worlds for Immersive Experiences | Tikhonov, Alexey; Repushko, Anton |
17 | Setting Switcher: Changing genre-settings in text-based game environments populated by generative agents | Wood, Oliver H; Fiebrink, Rebecca |
18 * | LEDITS++: Limitless Image Editing using Text-to-Image Models | Brack, Manuel; Tsaban, Linoy; Kornmeier, Katharina; Passos, Apolinário; Friedrich, Felix; Schramowski, Patrick; Kersting, Kristian |
19 | V2Meow: Meowing to the Visual Beat via Music Generation | Li, Judith Yue; Su, Kun; Huang, Qingqing; Kuzmin, Dima; Lee, Joonseok; Donahue, Chris; Sha, Fei; Jansen, Aren; Wang, Yu; Verzetti, Mauro; Denk, Timo I |
20 | DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models | Wang, Tsun-Hsuan; Zheng, Juntian; Ma, Pingchuan; Du, Yilun; Kim, Byungchul; Spielberg, Andrew; Tenenbaum, Joshua; Gan, Chuang; Rus, Daniela |
21 | An Ontology of Co-Creative AI Systems | Lin, Zhiyu; Riedl, Mark |
22 | ObjectComposer: Consistent Generation of Multiple Objects Without Fine-tuning | Helbling, Alec F; Montoya, Evan; Chau, Duen Horng |
23 | HARP: Bringing Deep Learning to the DAW with Hosted, Asynchronous, Remote Processing | Flores Garcia, Hugo F; Benetatos, Christodoulos; O’Reilly, Patrick; Aguilar, Aldo; Duan, Zhiyao; Pardo, Bryan |
24 | Contextual Alchemy: A Framework for Enhanced Readability through Cross-Domain Entity Alignment | Shahid, Simra; Srikanth, Nikitha; Jandial, Surgan; Krishnamurthy, Balaji |
25 | JamSketch Deep α: Towards Musical Improvisation based on Human-machine Collaboration | Kitahara, Tetsuro; Yonamine, Akio |
26 | Hacking Generative Models with Differentiable Network Bending | Aldegheri, Giacomo; Rogalska, Alina; Youssef, Ahmed; Iofinova, Eugenia |
27 | Lasagna: Layered Score Distillation for Disentangled Image Editing | Bashkirova, Dina; Ray, Arijit; Mallick, Rupayan; Bargal, Sarah; Zhang, Jianming; Krishna, Ranjay; Saenko, Kate |
28 | CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images | Gokaslan, Aaron K; Cooper, A. Feder; Collins, Jasmine; Seguin, Landan; Jacobson, Austin; Patel, Mihir; Frankle, Jonathan; Stephenson, Cory; Kuleshov, Volodymyr |
29 * | Personalized Comic Story Generation | Peng, Wenxuan; Schaldenbrand, Peter; Oh, Jean |
30 | Combating the “Sameness” in AI Art: Reflections on the Interactive AI Installation Fencing Hallucination | Qiu, Weihao; Legrady, George |
31 | SynthScribe: Deep Multimodal Tools for Synthesizer Sound Retrieval and Exploration | Brade, Stephen; Oore, Sageev; Wang, Bryan; Sousa, Mauricio; Grossman, Tovi; Newsome, Greg |
32 * | WordArt Designer API: User-Driven Artistic Typography Synthesis with Large Language Models on ModelScope | He, Jun-Yan; Cheng, Zhi-Qi; Li, Chenyang; Sun, Jingdong; Xiang, Wangmeng; Hu, Yusen; Lin, Xianhui; Kang, Xiaoyang; Jin, Zengke; Luo, Bin; Geng, Yifeng; Xie, Xuansong; Zhou, Jingren |
Accepted Artworks
This year we accepted 27 artworks to the gallery out of 52 submissions, a 51.9% acceptance rate. Five exceptional artworks were awarded spotlights; these artists have the option to give a five-minute talk about their work during the workshop. All artworks will also be shown at the workshop as part of an art show video screening.
Special thanks to the artwork jury: Eunsu Kang, Terence Broad, Yvonne Fang, August von Trapp, Yingtao Tian, Lia Coleman, Tom White, and Hannah Johnston.
Artwork Spotlights
Artwork Title | Artist Names |
---|---|
Visions of Destruction | Canet Sola, Mar*; Guljajeva, Varvara |
Unreal Pareidolia -shadows- | Allen, Scott* |
Beneath the Surface | Ozoani, Ezinwanne*; Tsaban, Linoy |
mememormee | Pettee, Mariel* |
Scores for the Virtual | Epstein, Vadim*; Winkler, Christoph |
All Artworks
Artwork Title | Artist Names |
---|---|
EvoGen: Evolutionary Diffusion Model Prompt Generation | Petersen, Magnus* |
Envisioning Distant Worlds | Beaty, Marissa*; Broad, Terence |
Movies Never Made | Wilke, J. Brad* |
mememormee | Pettee, Mariel* |
Gender Tapestry | Rosenbaum, J* |
Like a natural phenomenon | Shibuya, Kazufumi* |
THE RAVEN | Lu, Wei*; Luo, Xuehui |
Mega Giga | Kazuki, Abe* |
Potential Form | Kazuki, Abe* |
Attraction of image | Ishii, Asuka*; Nagata, Kazuki |
Tick Tock | Epstein, Vadim* |
Scores for the Virtual | Epstein, Vadim*; Winkler, Christoph |
Neuracappella | Ng, Zihou* |
GPT-ME | Meshi, Avital* |
Peekaboo | Meshi, Avital* |
SURVA.I.LANCE | Thompson, Ryan* |
nocturneInteractivePerformance1.txt | Oore, Sageev*; D’Eon, Jason; Miller, Finlay; Becker, Nic; Lowe, Scott C; Oore, Daniel |
You don’t know what this is | Shibuya, Kazufumi* |
John Connor Project (BIGYUKI × Qosmo) | Tokui, Nao; BIGYUKI; Nakajima, Ryosuke; Han, Alex; Sugano, Kaoru |
The Unknown | Porres, Diego* |
Beneath the Surface | Ozoani, Ezinwanne*; Tsaban, Linoy |
Unreal Pareidolia -shadows- | Allen, Scott* |
Fragments of Autumn | Park, Jihyeon* |
Subsumption No. 1 | Tallon, Tina*; Beacham, James |
Visions of Destruction | Canet Sola, Mar*; Guljajeva, Varvara |
Mythos Synthscribe Remix: Deus in Machina (2023) | Oore, Daniel; Staniland, Andrew; Brade, Stephen; Wang, Bryan; Sousa, Mauricio; Newsome, Gregory Lee; Grossman, Tovi; Oore, Sageev* |
Imaginary-shell | Sano, Fushi* |
How to attend
To attend the workshop in person, a registration ticket must be purchased on neurips.cc. The workshop will be held on Saturday, Dec 16, at the New Orleans Ernest N. Morial Convention Center.
Call for Papers
We invite submissions in the form of papers. The deadline for paper submissions is Friday Oct 6 (extended from Wednesday Sep 27) at 11:59pm (anywhere on earth).
To Submit a Paper: We invite participants to submit 2-page papers in the NeurIPS camera-ready format (with author names visible) through our CMT portal.
Topics may include (but are not limited to):
- Presentation of new machine learning techniques for generating art, music, or other creative outputs, using, for instance, reinforcement learning, generative adversarial networks, or novelty search and evaluation
- Quantitative or qualitative evaluation of machine learning techniques for creative work and design
- Tools or techniques to improve usability or usefulness of machine learning for creative practitioners
- Descriptions, reflections, or case studies on the use of machine learning in the creation of a new art or design work
We encourage all authors to consider the ethical implications of their work. This can be discussed in a 1-paragraph section at the end of the paper and will not count towards the page limit.
In your submission, you may also indicate whether you would like to present a demo of your work during the workshop (if applicable).
Papers will be reviewed by committee members in a single-blind process, and accepted authors can present at the workshop in the form of a short talk, panel, and/or demo. At least one author of each accepted paper must register for and attend the workshop. Accepted papers will appear on the workshop website. Please note that we do not adhere to a formal peer review process and are normally unable to provide detailed feedback on individual submissions. We encourage submission of work that has been previously published, as well as work in progress on topics relevant to the workshop. This workshop will not have an official proceedings.
References and any supplementary materials do not count toward the 2-page limit. However, reading the supplementary materials is at the reviewers’ discretion.
Call for Artwork
We welcome submissions of artwork that has been created using machine learning (autonomously or with humans). The deadline for artwork submissions is Nov 07 (extended from Oct 30) at 11:59pm (anywhere on earth).
We invite art submissions in any medium, including but not limited to:
- Image
- Video
- Music, Sound, Audio
- Writing, Poetry
- Dance, Performance
- Installation, Physical Object, Food, etc.
This year we are asking for submissions to consist of:
- In a text file, include the title, year, artist(s), and description (up to 200 words). Name the text file like: project_name_00.txt.
- One 1920 x 1080 landscape image. Name this image like: project_name_01.png. Both PNG and JPG/JPEG formats are acceptable. This image could be a work of art itself, or a single cover slide describing the project.
- Optional: Up to 3 additional 1920 x 1080 landscape images. Name these like: project_name_03.png, project_name_04.png, project_name_05.png.
- Optional: One 1920 x 1080 landscape MP4 video of up to 60 seconds showing the work in more detail. Name this like: project_name_03.mp4.
Place these in a single zip archive called project_name.zip.
Submit this zip file through our CMT portal.
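As an illustration only (not an official submission tool), the packaging steps above could be scripted. The function name and layout here are hypothetical; the file-naming convention follows the call, with `project_name` standing in for your own project:

```python
import zipfile
from pathlib import Path


def package_submission(project_dir: str, project_name: str) -> Path:
    """Bundle the submission files into project_name.zip.

    Expects the description text file, the cover image, and any optional
    images/video to already exist in project_dir, named per the call
    (project_name_00.txt, project_name_01.png, ...).
    """
    root = Path(project_dir)

    # Required: the description text file.
    if not (root / f"{project_name}_00.txt").exists():
        raise FileNotFoundError(f"missing required file: {project_name}_00.txt")

    # Required: the cover image (PNG or JPG/JPEG are both acceptable).
    if not list(root.glob(f"{project_name}_01.*")):
        raise FileNotFoundError(f"missing cover image: {project_name}_01.png/.jpg")

    zip_path = root / f"{project_name}.zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # Include every file following the project_name_* naming convention.
        for path in sorted(root.glob(f"{project_name}_*")):
            zf.write(path, arcname=path.name)
    return zip_path
```

This is only a convenience sketch; what matters is that the final zip contains the correctly named files at its top level.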
We ask for this specific submission format because the workshop will screen the accepted art pieces as a video slideshow (which is why we ask for the landscape format). The video of all accepted artworks will be posted on our website as well. During the workshop, we may also invite a few select creators of accepted artwork to participate in the form of a short talk.
Important Dates
- Oct 6 (extended from Sep 27): Paper submission deadline, 11:59pm (anywhere on earth)
- Oct 27 (extended from Oct 25): Paper acceptance notifications
- Nov 07 (extended from Oct 30): Artwork submission deadline, 11:59pm (anywhere on earth)
- Nov 30: Artwork acceptance notifications
- Dec 16 (in-person): Workshop date
Previous years:
- 2022 workshop (virtual)
- 2021 workshop (virtual)
- 2020 workshop (virtual)
- 2019 workshop (Vancouver, Canada)
- 2018 workshop (Montreal, Canada)
- 2017 workshop (Long Beach, CA, USA)
Connection with NeurIPS Creative AI Track
As you may have seen, NeurIPS is hosting its first Creative AI track this year for artistic submissions. We are excited that creative AI is receiving more interest and visibility.
Both the organizers of this ML4CD workshop and the NeurIPS Creative AI track share the same goal: giving creative AI a platform! In doing so, we are actively collaborating with them, and are in close communication with the Creative AI Track chairs.
Since this is the first year of the NeurIPS Creative AI track, our calls for art are separate, but in future years we may collaborate in a more direct way.
Their call for art has already closed, but you can submit to either or both!
The specific differences between this ML4CD workshop’s call for art and the NeurIPS Creative AI Track are detailed below:
- First, the requirement for thematic/conceptual focus differs.
- In this workshop, we do not have a particular thematic/conceptual focus on the artwork, aside from the fact that the art should use AI or ML.
- On the other hand, the NeurIPS Creative AI Track has a thematic focus of Celebrating Diversity this year, and all artwork there will be related to this theme.
- Additionally, the installation and presentation of the artwork will differ.
- Since this year the ML4CD workshop will take place in-person, our art show will take the form of a video slideshow screening in the ML4CD workshop room at NeurIPS. This art show video will be posted on our website and also our YouTube.
- We also will have a select number of artwork spotlights, as in previous years, for a few high-quality artworks. These selected artists will have the opportunity to deliver an oral presentation of their artwork at the workshop.
- Our workshop, and the art show, will remain mainly within our workshop room at the conference venue, on Sat Dec 16.
- In contrast, the NeurIPS Creative AI Track will have both physical installations/displays throughout the conference venue (for strongly accepted artworks), as well as a video screening (for weakly accepted artworks). This will take place over the entire duration of the conference, which is about a week long.
Organisers
- Yingtao Tian, Google Brain
- Lia Coleman, Carnegie Mellon University
- Hannah Johnston, Carleton University
- Tom White, Victoria University of Wellington